Sample records for "quantify model performance"

  1. QUANTIFYING AN UNCERTAIN FUTURE: HYDROLOGIC MODEL PERFORMANCE FOR A SERIES OF REALIZED "FUTURE" CONDITIONS

    EPA Science Inventory

    A systematic analysis of model performance during simulations based on observed landcover/use change is used to quantify errors associated with simulations of known "future" conditions. Calibrated and uncalibrated assessments of relative change over different lengths of...

  2. Quantifying Ballistic Armor Performance: A Minimally Invasive Approach

    NASA Astrophysics Data System (ADS)

    Holmes, Gale; Kim, Jaehyun; Blair, William; McDonough, Walter; Snyder, Chad

    2006-03-01

    Theoretical and non-dimensional analyses suggest a critical link between the performance of ballistic resistant armor and the fundamental mechanical properties of the polymeric materials that comprise them. Therefore, a test methodology that quantifies these properties without compromising an armored vest that is exposed to the industry standard V-50 ballistic performance test is needed. Currently, there is considerable speculation about the impact that competing degradation mechanisms (e.g., mechanical, humidity, ultraviolet) may have on ballistic resistant armor. We report on the use of a new test methodology that quantifies the mechanical properties of ballistic fibers and how each proposed degradation mechanism may impact a vest's ballistic performance.

  3. Quantifying errors in trace species transport modeling.

    PubMed

    Prather, Michael J; Zhu, Xin; Strahan, Susan E; Steenrod, Stephen D; Rodriguez, Jose M

    2008-12-16

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error.

  4. Quantifying Groundwater Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Hill, M. C.; Poeter, E.; Foglia, L.

    2007-12-01

    Groundwater models are characterized by the (a) processes simulated, (b) boundary conditions, (c) initial conditions, (d) method of solving the equation, (e) parameterization, and (f) parameter values. Models are related to the system of concern using data, some of which form the basis of observations used most directly, through objective functions, to estimate parameter values. Here we consider situations in which parameter values are determined by minimizing an objective function. Other methods of model development are not considered because their ad hoc nature generally prohibits clear quantification of uncertainty. Quantifying prediction uncertainty ideally includes contributions from (a) to (f). The parameter values of (f) tend to be continuous with respect to both the simulated equivalents of the observations and the predictions, while many aspects of (a) through (e) are discrete. This fundamental difference means that there are options for evaluating the uncertainty related to parameter values that generally do not exist for other aspects of a model. While the methods available for (a) to (e) can be used for the parameter values (f), the inferential methods uniquely available for (f) generally are less computationally intensive and often can be used to considerable advantage. However, inferential approaches require calculation of sensitivities. Whether the numerical accuracy and stability of the model solution required for accurate sensitivities is more broadly important to other model uses is an issue that needs to be addressed. Alternative global methods can require 100 or even 1,000 times the number of runs needed by inferential methods, though methods of reducing the number of needed runs are being developed and tested. Here we present three approaches for quantifying model uncertainty and investigate their strengths and weaknesses. (1) Represent more aspects as parameters so that the computationally efficient methods can be broadly applied. This
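
    As a minimal illustration of the inferential (sensitivity-based) approach described above, the sketch below estimates two parameters of a toy forward model by weighted least squares and approximates their uncertainty from the Jacobian (the sensitivities); the forward model, observations, and weights are hypothetical placeholders, not taken from the cited work.

```python
# Hedged sketch: sensitivity-based (inferential) parameter uncertainty for a
# hypothetical groundwater model. The forward model, observations, and weights
# are illustrative placeholders, not from the cited study.
import numpy as np
from scipy.optimize import least_squares

obs = np.array([12.1, 11.4, 10.2, 9.0, 8.3])        # observed heads (m), hypothetical
weights = 1.0 / np.array([0.2] * 5)                  # 1 / standard deviation of each observation
x_obs = np.array([0.0, 50.0, 100.0, 150.0, 200.0])   # observation locations (m)

def simulated_heads(params):
    """Toy forward model standing in for a groundwater simulation."""
    h0, gradient = params
    return h0 - gradient * x_obs

def weighted_residuals(params):
    return weights * (obs - simulated_heads(params))

fit = least_squares(weighted_residuals, x0=[12.0, 0.02])

# Linearized parameter covariance: (J^T J)^-1 scaled by the residual variance.
dof = len(obs) - len(fit.x)
s2 = 2.0 * fit.cost / dof                 # least_squares cost = 0.5 * sum(residuals^2)
cov = s2 * np.linalg.inv(fit.jac.T @ fit.jac)
print("estimated parameters:", fit.x)
print("approx. parameter standard deviations:", np.sqrt(np.diag(cov)))
```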

  5. Modeling and analysis to quantify MSE wall behavior and performance.

    DOT National Transportation Integrated Search

    2009-08-01

    To better understand potential sources of adverse performance of mechanically stabilized earth (MSE) walls, a suite of analytical models was studied using the computer program FLAC, a numerical modeling computer program widely used in geotechnical en...

  6. Quantifying Safety Performance of Driveways on State Highways

    DOT National Transportation Integrated Search

    2012-08-01

    This report documents a research effort to quantify the safety performance of driveways in the State of Oregon. In particular, this research effort focuses on driveways located adjacent to principal arterial state highways with urban or rural des...

  7. Quantifying parametric uncertainty in the Rothermel model

    Treesearch

    S. Goodrick

    2008-01-01

    The purpose of the present work is to quantify parametric uncertainty in the Rothermel wildland fire spread model (implemented in widely used fire spread modeling software in the United States). This model consists of a non-linear system of equations that relates environmental variables (input parameter groups...
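
    A minimal sketch of generic Monte Carlo propagation of parametric uncertainty, of the kind the abstract describes; the input distributions and the spread-rate function below are illustrative stand-ins, not the actual Rothermel equations.

```python
# Hedged sketch: Monte Carlo propagation of parametric uncertainty through a
# nonlinear spread-rate function. The rate function is a stand-in, not the
# actual Rothermel equations.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative input distributions (wind speed, fuel moisture, slope).
wind = rng.normal(5.0, 1.0, n)          # m/s
moisture = rng.uniform(0.05, 0.15, n)   # fraction
slope = rng.normal(0.2, 0.05, n)        # rise/run

def spread_rate(wind, moisture, slope):
    # Placeholder nonlinear response, qualitatively shaped only.
    return 0.5 * (1.0 + 0.3 * wind**1.5) * (1.0 + 2.0 * slope**2) * np.exp(-8.0 * moisture)

rates = spread_rate(wind, moisture, slope)
print("mean spread rate:", rates.mean())
print("90% interval:", np.percentile(rates, [5, 95]))
```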

  8. A framework for quantifying net benefits of alternative prognostic models.

    PubMed

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd.

  9. Quantifying risk and benchmarking performance in the adult intensive care unit.

    PubMed

    Higgins, Thomas L

    2007-01-01

    Morbidity, mortality, and length-of-stay outcomes in patients receiving critical care are difficult to interpret unless they are risk-stratified for diagnosis, presenting severity of illness, and other patient characteristics. Acuity adjustment systems for adults include the Acute Physiology And Chronic Health Evaluation (APACHE), the Mortality Probability Model (MPM), and the Simplified Acute Physiology Score (SAPS). All have recently been updated and recalibrated to reflect contemporary results. Specialized scores are also available for patient subpopulations where general acuity scores have drawbacks. Demand for outcomes data is likely to grow with pay-for-performance initiatives as well as for routine clinical, prognostic, administrative, and research applications. It is important for clinicians to understand how these scores are derived and how they are properly applied to quantify patient severity of illness and benchmark intensive care unit performance.

  10. Performance Metrics, Error Modeling, and Uncertainty Quantification

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Nearing, Grey S.; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Tang, Ling

    2016-01-01

    A common set of statistical metrics has been used to summarize the performance of models or measurements, the most widely used ones being bias, mean square error, and linear correlation coefficient. They assume linear, additive, Gaussian errors, and they are interdependent, incomplete, and incapable of directly quantifying uncertainty. The authors demonstrate that these metrics can be directly derived from the parameters of the simple linear error model. Since a correct error model captures the full error information, it is argued that the specification of a parametric error model should be an alternative to the metrics-based approach. The error-modeling methodology is applicable to both linear and nonlinear errors, while the metrics are only meaningful for linear errors. In addition, the error model expresses the error structure more naturally, and directly quantifies uncertainty. This argument is further explained by highlighting the intrinsic connections between the performance metrics, the error model, and the joint distribution between the data and the reference.
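
    The sketch below illustrates the central point summarized above: for an additive linear error model y = a + b*x + eps, the familiar metrics follow from the error-model parameters and the statistics of the reference; the parameter values are assumptions chosen for illustration.

```python
# Hedged sketch: for an additive linear error model y = a + b*x + eps, the
# common metrics (bias, MSE, correlation) follow from the model parameters
# (a, b, sigma_eps) and the statistics of the reference x. Values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma_eps = 0.5, 0.9, 1.2                        # error-model parameters (assumed)
x = rng.normal(10.0, 3.0, 100_000)                     # reference ("truth")
y = a + b * x + rng.normal(0.0, sigma_eps, x.size)     # model output or measurement

# Metrics computed directly from the samples.
bias = np.mean(y - x)
mse = np.mean((y - x) ** 2)
corr = np.corrcoef(x, y)[0, 1]

# The same metrics predicted from the error-model parameters.
mu_x, var_x = x.mean(), x.var()
bias_pred = a + (b - 1.0) * mu_x
mse_pred = (a + (b - 1.0) * mu_x) ** 2 + (b - 1.0) ** 2 * var_x + sigma_eps ** 2
corr_pred = b * np.sqrt(var_x) / np.sqrt(b ** 2 * var_x + sigma_eps ** 2)

print(f"bias {bias:.3f} vs {bias_pred:.3f}")
print(f"mse  {mse:.3f} vs {mse_pred:.3f}")
print(f"corr {corr:.4f} vs {corr_pred:.4f}")
```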

  11. Using multilevel models to quantify heterogeneity in resource selection

    USGS Publications Warehouse

    Wagner, Tyler; Diefenbach, Duane R.; Christensen, Sonja; Norton, Andrew S.

    2011-01-01

    Models of resource selection are being used increasingly to predict or model the effects of management actions rather than simply quantifying habitat selection. Multilevel, or hierarchical, models are an increasingly popular method to analyze animal resource selection because they impose a relatively weak stochastic constraint to model heterogeneity in habitat use and also account for unequal sample sizes among individuals. However, few studies have used multilevel models to model coefficients as a function of predictors that may influence habitat use at different scales or quantify differences in resource selection among groups. We used an example with white-tailed deer (Odocoileus virginianus) to illustrate how to model resource use as a function of distance to road that varies among deer by road density at the home range scale. We found that deer avoidance of roads decreased as road density increased. Also, we used multilevel models with sika deer (Cervus nippon) and white-tailed deer to examine whether resource selection differed between species. We failed to detect differences in resource use between these two species and showed how information-theoretic and graphical measures can be used to assess how resource use may have differed. Multilevel models can improve our understanding of how resource selection varies among individuals and provide an objective, quantifiable approach to assess differences or changes in resource selection.

  12. Developing and testing a global-scale regression model to quantify mean annual streamflow

    NASA Astrophysics Data System (ADS)

    Barbarossa, Valerio; Huijbregts, Mark A. J.; Hendriks, A. Jan; Beusen, Arthur H. W.; Clavreul, Julie; King, Henry; Schipper, Aafke M.

    2017-01-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF based on a dataset unprecedented in size, using observations of discharge and catchment characteristics from 1885 catchments worldwide, measuring between 2 and 10^6 km^2. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area and catchment averaged mean annual precipitation and air temperature, slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error (RMSE) values were lower (0.29-0.38 compared to 0.49-0.57) and the modified index of agreement (d) was higher (0.80-0.83 compared to 0.72-0.75). Our regression model can be applied globally to estimate MAF at any point of the river network, thus providing a feasible alternative to spatially explicit process-based global hydrological models.
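
    A minimal sketch of the two skill scores quoted above, computed for placeholder data; the modified index of agreement is taken here in Willmott's absolute-value form, which may differ in detail from the variant used in the study.

```python
# Hedged sketch: RMSE and a modified index of agreement for placeholder
# observed vs. predicted mean annual flow values.
import numpy as np

def rmse(obs, pred):
    return np.sqrt(np.mean((pred - obs) ** 2))

def modified_index_of_agreement(obs, pred):
    obs_mean = obs.mean()
    num = np.sum(np.abs(pred - obs))
    den = np.sum(np.abs(pred - obs_mean) + np.abs(obs - obs_mean))
    return 1.0 - num / den

obs = np.array([3.2, 1.1, 0.4, 7.9, 2.5])     # observed MAF (illustrative units)
pred = np.array([2.9, 1.4, 0.6, 7.1, 2.2])    # regression-model predictions (illustrative)

print("RMSE:", rmse(obs, pred))
print("modified d:", modified_index_of_agreement(obs, pred))
```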

  13. Suitability of ANSI standards for quantifying communication satellite system performance

    NASA Technical Reports Server (NTRS)

    Cass, Robert D.

    1988-01-01

    A study on the application of American National Standards X3.102 and X3.141 to various classes of communication satellite systems from the simple analog bent-pipe to NASA's Advanced Communications Technology Satellite (ACTS) is discussed. These standards are proposed as means for quantifying the end-to-end communication system performance of communication satellite systems. An introductory overview of the two standards is given, followed by a review of the characteristics, applications, and advantages of using X3.102 and X3.141 for this purpose, together with a description of the application of these standards to ACTS.

  14. The Five Key Questions of Human Performance Modeling.

    PubMed

    Wu, Changxu

    2018-01-01

    Via building computational (typically mathematical and computer simulation) models, human performance modeling (HPM) quantifies, predicts, and maximizes human performance, human-machine system productivity and safety. This paper describes and summarizes the five key questions of human performance modeling: 1) Why we build models of human performance; 2) What the expectations of a good human performance model are; 3) What the procedures and requirements in building and verifying a human performance model are; 4) How we integrate a human performance model with system design; and 5) What the possible future directions of human performance modeling research are. Recent and classic HPM findings are addressed in the five questions to provide new thinking in HPM's motivations, expectations, procedures, system integration and future directions.

  15. Can We Use Regression Modeling to Quantify Mean Annual Streamflow at a Global-Scale?

    NASA Astrophysics Data System (ADS)

    Barbarossa, V.; Huijbregts, M. A. J.; Hendriks, J. A.; Beusen, A.; Clavreul, J.; King, H.; Schipper, A.

    2016-12-01

    Quantifying mean annual flow of rivers (MAF) at ungauged sites is essential for a number of applications, including assessments of global water supply, ecosystem integrity and water footprints. MAF can be quantified with spatially explicit process-based models, which might be overly time-consuming and data-intensive for this purpose, or with empirical regression models that predict MAF based on climate and catchment characteristics. Yet, regression models have mostly been developed at a regional scale and the extent to which they can be extrapolated to other regions is not known. In this study, we developed a global-scale regression model for MAF using observations of discharge and catchment characteristics from 1,885 catchments worldwide, ranging from 2 to 10^6 km^2 in size. In addition, we compared the performance of the regression model with the predictive ability of the spatially explicit global hydrological model PCR-GLOBWB [van Beek et al., 2011] by comparing results from both models to independent measurements. We obtained a regression model explaining 89% of the variance in MAF based on catchment area, mean annual precipitation and air temperature, average slope and elevation. The regression model performed better than PCR-GLOBWB for the prediction of MAF, as root-mean-square error values were lower (0.29 - 0.38 compared to 0.49 - 0.57) and the modified index of agreement was higher (0.80 - 0.83 compared to 0.72 - 0.75). Our regression model can be applied globally at any point of the river network, provided that the input parameters are within the range of values employed in the calibration of the model. The performance is reduced for water scarce regions and further research should focus on improving such an aspect for regression-based global hydrological models.

  16. Quantifying light-dependent circadian disruption in humans and animal models.

    PubMed

    Rea, Mark S; Figueiro, Mariana G

    2014-12-01

    Although circadian disruption is an accepted term, little has been done to develop methods to quantify the degree of disruption or entrainment individual organisms actually exhibit in the field. A variety of behavioral, physiological and hormonal responses vary in amplitude over a 24-h period and the degree to which these circadian rhythms are synchronized to the daily light-dark cycle can be quantified with a technique known as phasor analysis. Several studies have been carried out using phasor analysis in an attempt to measure circadian disruption exhibited by animals and by humans. To perform these studies, species-specific light measurement and light delivery technologies had to be developed based upon a fundamental understanding of circadian phototransduction mechanisms in the different species. When both nocturnal rodents and diurnal humans experienced different species-specific light-dark shift schedules, they showed, based upon phasor analysis of the light-dark and activity-rest patterns, similar levels of light-dependent circadian disruption. Indeed, both rodents and humans show monotonically increasing and quantitatively similar levels of light-dependent circadian disruption with increasing shift-nights per week. Thus, phasor analysis provides a method for quantifying circadian disruption in the field and in the laboratory as well as a bridge between ecological measurements of circadian entrainment in humans and parametric studies of circadian disruption in animal models, including nocturnal rodents.
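
    A heavily hedged sketch of one common reading of phasor analysis: circularly cross-correlate the light-dark and activity-rest series and take the magnitude and phase of the 24-hour harmonic; the synthetic data, sampling interval, and normalization below are assumptions, not the authors' exact procedure.

```python
# Hedged sketch of one reading of phasor analysis: circular cross-correlation
# of light-dark and activity-rest series, then the 24-h harmonic. Synthetic
# data and normalization choices are assumptions.
import numpy as np

dt_hours = 0.5
t = np.arange(0, 7 * 24, dt_hours)                        # one week, 30-min samples
light = (np.sin(2 * np.pi * t / 24) > 0).astype(float)     # idealized light-dark cycle
activity = (np.sin(2 * np.pi * (t - 2) / 24) > 0) + 0.2 * np.random.default_rng(2).random(t.size)

def phasor(light, activity, dt_hours, period_hours=24.0):
    l = (light - light.mean()) / light.std()
    a = (activity - activity.mean()) / activity.std()
    n = l.size
    # Circular cross-correlation via FFT.
    xcorr = np.fft.ifft(np.fft.fft(l) * np.conj(np.fft.fft(a))).real / n
    # Component of the cross-correlation at the circadian period.
    spectrum = np.fft.fft(xcorr) / n
    freqs = np.fft.fftfreq(n, d=dt_hours)
    k = np.argmin(np.abs(freqs - 1.0 / period_hours))
    comp = spectrum[k]
    return 2 * np.abs(comp), np.angle(comp)    # phasor magnitude, phase (radians)

mag, phase = phasor(light, activity, dt_hours)
print(f"phasor magnitude {mag:.2f}, phase {np.degrees(phase):.1f} deg")
```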

  17. The Urban Forest Effects (UFORE) model: quantifying urban forest structure and functions

    Treesearch

    David J. Nowak; Daniel E. Crane

    2000-01-01

    The Urban Forest Effects (UFORE) computer model was developed to help managers and researchers quantify urban forest structure and functions. The model quantifies species composition and diversity, diameter distribution, tree density and health, leaf area, leaf biomass, and other structural characteristics; hourly volatile organic compound emissions (emissions that...

  18. Model averaging techniques for quantifying conceptual model uncertainty.

    PubMed

    Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg

    2010-01-01

    In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
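
    A minimal sketch of criterion-based model averaging using Akaike-style weights, one of the technique families compared above; the information-criterion values, per-model predictions, and variance decomposition are illustrative assumptions.

```python
# Hedged sketch: information-criterion-based model averaging (Akaike weights).
# The IC values and per-model predictions are illustrative placeholders.
import numpy as np

aic = np.array([210.3, 212.1, 215.8, 209.7])     # AIC (or BIC/KIC) per conceptual model
predictions = np.array([4.2, 3.8, 5.1, 4.5])     # each model's prediction of the quantity of interest

delta = aic - aic.min()
weights = np.exp(-0.5 * delta)
weights /= weights.sum()

averaged = np.sum(weights * predictions)
# Between-model variance term of the averaged prediction (one common choice).
between_var = np.sum(weights * (predictions - averaged) ** 2)

print("model weights:", np.round(weights, 3))
print(f"model-averaged prediction: {averaged:.2f} (between-model sd {np.sqrt(between_var):.2f})")
```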

  19. Quantifying spatial distribution of spurious mixing in ocean models.

    PubMed

    Ilıcak, Mehmet

    2016-12-01

    Numerical mixing is inevitable for ocean models due to tracer advection schemes. Until now, there has been no robust way to identify the regions of spurious mixing in ocean models. We propose a new method to compute the spatial distribution of the spurious diapycnic mixing in an ocean model. This new method is an extension of the available potential energy density method proposed by Winters and Barkan (2013). We test the new method in lock-exchange and baroclinic eddies test cases. We can quantify the amount and the location of numerical mixing. We find that high-shear areas are the main regions susceptible to numerical truncation errors. We also test the new method to quantify the numerical mixing in different horizontal momentum closures. We conclude that Smagorinsky viscosity has less numerical mixing than the Leith viscosity using the same non-dimensional constant.

  20. A practical technique for quantifying the performance of acoustic emission systems on plate-like structures.

    PubMed

    Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I

    2009-06-01

    A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach the model is applicable to both isotropic and anisotropic materials. The model requires several inputs including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available, thus efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.

  1. Quantifying Variations In Multi-parameter Models With The Photon Clean Method (PCM) And Bootstrap Methods

    NASA Astrophysics Data System (ADS)

    Carpenter, Matthew H.; Jernigan, J. G.

    2007-05-01

    We present examples of an analysis progression consisting of a synthesis of the Photon Clean Method (Carpenter, Jernigan, Brown, Beiersdorfer 2007) and bootstrap methods to quantify errors and variations in many-parameter models. The Photon Clean Method (PCM) works well for model spaces with large numbers of parameters proportional to the number of photons, therefore a Monte Carlo paradigm is a natural numerical approach. Consequently, PCM, an "inverse Monte-Carlo" method, requires a new approach for quantifying errors as compared to common analysis methods for fitting models of low dimensionality. This presentation will explore the methodology and presentation of analysis results derived from a variety of public data sets, including observations with XMM-Newton, Chandra, and other NASA missions. Special attention is given to the visualization of both data and models including dynamic interactive presentations. This work was performed under the auspices of the Department of Energy under contract No. W-7405-Eng-48. We thank Peter Beiersdorfer and Greg Brown for their support of this technical portion of a larger program related to science with the LLNL EBIT program.
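
    A minimal sketch of the bootstrap side of such an analysis: resample the data with replacement, refit a placeholder model each time, and read parameter uncertainty off the bootstrap distribution; the straight-line model and synthetic data are stand-ins, not the Photon Clean Method itself.

```python
# Hedged sketch: nonparametric bootstrap for fitted-parameter uncertainty.
# The model (a straight line) and data are placeholders, not the PCM.
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, x.size)   # synthetic observations

def fit(x, y):
    return np.polyfit(x, y, 1)                   # slope, intercept

n_boot = 2000
boot_params = np.empty((n_boot, 2))
for i in range(n_boot):
    idx = rng.integers(0, x.size, x.size)        # resample data points with replacement
    boot_params[i] = fit(x[idx], y[idx])

lo, hi = np.percentile(boot_params, [2.5, 97.5], axis=0)
print("best fit:", fit(x, y))
print("95% bootstrap intervals: slope", (lo[0], hi[0]), "intercept", (lo[1], hi[1]))
```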

  2. Quantifying performance on an outdoor agility drill using foot-mounted inertial measurement units.

    PubMed

    Zaferiou, Antonia M; Ojeda, Lauro; Cain, Stephen M; Vitali, Rachel V; Davidson, Steven P; Stirling, Leia; Perkins, Noel C

    2017-01-01

    Running agility is required for many sports and other physical tasks that demand rapid changes in body direction. Quantifying agility skill remains a challenge because measuring rapid changes of direction and quantifying agility skill from those measurements are difficult to do in ways that replicate real task/game play situations. The objectives of this study were to define and to measure agility performance for a (five-cone) agility drill used within a military obstacle course using data harvested from two foot-mounted inertial measurement units (IMUs). Thirty-two recreational athletes ran an agility drill while wearing two IMUs secured to the tops of their athletic shoes. The recorded acceleration and angular rates yield estimates of the trajectories, velocities and accelerations of both feet as well as an estimate of the horizontal velocity of the body mass center. Four agility performance metrics were proposed and studied including: 1) agility drill time, 2) horizontal body speed, 3) foot trajectory turning radius, and 4) tangential body acceleration. Additionally, the average horizontal ground reaction during each footfall was estimated. We hypothesized that shorter agility drill performance time would be observed with small turning radii and large tangential acceleration ranges and body speeds. Kruskal-Wallis and mean rank post-hoc statistical analyses revealed that shorter agility drill performance times were observed with smaller turning radii and larger tangential acceleration ranges and body speeds, as hypothesized. Moreover, measurements revealed the strategies that distinguish high versus low performers. Relative to low performers, high performers used sharper turns, larger changes in body speed (larger tangential acceleration ranges), and shorter duration footfalls that generated larger horizontal ground reactions during the turn phases. Overall, this study advances the use of foot-mounted IMUs to quantify agility performance in contextually

  3. Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in International matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally the batsmen and bowlers are rated on their batting or bowling average respectively. However, in a game like Cricket the manner in which one scores runs or claims a wicket is also important. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player’s average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the ‘quality’ of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players on their contribution to team performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally we generate a network of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket: Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player’s performance, which in turn paves the way for a balanced team selection for International matches.
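
    A minimal sketch of the kind of directed, weighted batsman-bowler network described above, rated here with a PageRank-style centrality (a common SNA choice; the paper's exact rating scheme may differ); players and dismissal counts are hypothetical.

```python
# Hedged sketch: directed, weighted batsman-bowler network rated with a
# PageRank-style centrality. Players and edge weights are hypothetical.
import networkx as nx

G = nx.DiGraph()
# Edge batsman -> bowler weighted by how many times the bowler dismissed the batsman.
dismissals = [
    ("BatsmanA", "BowlerX", 5),
    ("BatsmanA", "BowlerY", 2),
    ("BatsmanB", "BowlerX", 3),
    ("BatsmanC", "BowlerY", 4),
    ("BatsmanC", "BowlerZ", 1),
]
for batsman, bowler, count in dismissals:
    G.add_edge(batsman, bowler, weight=count)

scores = nx.pagerank(G, alpha=0.85, weight="weight")
for player, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{player}: {score:.3f}")
```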

  4. Cognitive performance modeling based on general systems performance theory.

    PubMed

    Kondraske, George V

    2010-01-01

    General Systems Performance Theory (GSPT) was initially motivated by problems associated with quantifying different aspects of human performance. It has proved to be invaluable for measurement development and understanding quantitative relationships between human subsystem capacities and performance in complex tasks. It is now desired to bring focus to the application of GSPT to modeling of cognitive system performance. Previous studies involving two complex tasks (i.e., driving and performing laparoscopic surgery) and incorporating measures that are clearly related to cognitive performance (information processing speed and short-term memory capacity) were revisited. A GSPT-derived method of task analysis and performance prediction termed Nonlinear Causal Resource Analysis (NCRA) was employed to determine the demand on basic cognitive performance resources required to support different levels of complex task performance. This approach is presented as a means to determine a cognitive workload profile and the subsequent computation of a single number measure of cognitive workload (CW). Computation of CW may be a viable alternative to measuring it. Various possible "more basic" performance resources that contribute to cognitive system performance are discussed. It is concluded from this preliminary exploration that a GSPT-based approach can contribute to defining cognitive performance models that are useful for both individual subjects and specific groups (e.g., military pilots).

  5. Quantifying and Generalizing Hydrologic Responses to Dam Regulation using a Statistical Modeling Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McManamay, Ryan A

    2014-01-01

    Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation.

  6. Quantifying palpation techniques in relation to performance in a clinical prostate exam.

    PubMed

    Wang, Ninghuan; Gerling, Gregory J; Childress, Reba Moyer; Martin, Marcus L

    2010-07-01

    This paper seeks to quantify finger palpation techniques in the prostate clinical exam, determine their relationship with performance in detecting abnormalities, and differentiate the tendencies of nurse practitioner students and resident physicians. One issue with the digital rectal examination (DRE) is that performance in detecting abnormalities varies greatly and agreement between examiners is low. The utilization of particular palpation techniques may be one way to improve clinician ability. Based on past qualitative instruction, this paper algorithmically defines a set of palpation techniques for the DRE, i.e., global finger movement (GFM), local finger movement (LFM), and average intentional finger pressure, and utilizes a custom-built simulator to analyze finger movements in an experiment with two groups: 18 nurse practitioner students and 16 resident physicians. Although technique utilization varied, some elements clearly impacted performance. For example, those utilizing the LFM of vibration were significantly better at detecting abnormalities. Also, the V GFM led to greater success, but finger pressure played a lesser role. Interestingly, while the residents were clearly the superior performers, their techniques differed only subtly from the students. In summary, the quantified palpation techniques appear to account for examination ability at some level, but not entirely for differences between groups.

  7. Quantifying Neonatal Sucking Performance: Promise of New Methods

    PubMed Central

    Capilouto, Gilson J.; Cunningham, Tommy J.; Mullineaux, David R.; Tamilia, Eleonora; Papadelis, Christos; Giannone, Peter J.

    2017-01-01

    Neonatal feeding has been traditionally understudied so guidelines and evidence-based support for common feeding practices are limited. A major contributing factor to the paucity of evidence-based practice in this area has been the lack of simple-to-use, low-cost tools for monitoring sucking performance. We describe new methods for quantifying neonatal sucking performance that hold significant clinical and research promise. We present early results from an ongoing study investigating neonatal sucking as a marker of risk for adverse neurodevelopmental outcomes. We include quantitative measures of sucking performance to better understand how movement variability evolves during skill acquisition. Results showed the coefficient of variation of suck duration was significantly different between preterm neonates at high risk for developmental concerns (HRPT) and preterm neonates at low risk for developmental concerns (LRPT). For HRPT, results indicated the coefficient of variation of suck smoothness increased from initial feeding to discharge and remained significantly greater than healthy full-term newborns (FT) at discharge. There was no significant difference in our measures between FT and LRPT at discharge. Our findings highlight the need to include neonatal sucking assessment as part of routine clinical care in order to capture the relative risk of adverse neurodevelopmental outcomes at discharge. PMID:28324904

  8. Quantifying Neonatal Sucking Performance: Promise of New Methods.

    PubMed

    Capilouto, Gilson J; Cunningham, Tommy J; Mullineaux, David R; Tamilia, Eleonora; Papadelis, Christos; Giannone, Peter J

    2017-04-01

    Neonatal feeding has been traditionally understudied so guidelines and evidence-based support for common feeding practices are limited. A major contributing factor to the paucity of evidence-based practice in this area has been the lack of simple-to-use, low-cost tools for monitoring sucking performance. We describe new methods for quantifying neonatal sucking performance that hold significant clinical and research promise. We present early results from an ongoing study investigating neonatal sucking as a marker of risk for adverse neurodevelopmental outcomes. We include quantitative measures of sucking performance to better understand how movement variability evolves during skill acquisition. Results showed the coefficient of variation of suck duration was significantly different between preterm neonates at high risk for developmental concerns (HRPT) and preterm neonates at low risk for developmental concerns (LRPT). For HRPT, results indicated the coefficient of variation of suck smoothness increased from initial feeding to discharge and remained significantly greater than healthy full-term newborns (FT) at discharge. There was no significant difference in our measures between FT and LRPT at discharge. Our findings highlight the need to include neonatal sucking assessment as part of routine clinical care in order to capture the relative risk of adverse neurodevelopmental outcomes at discharge.

  9. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention has been, and is generally, paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
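
    A sketch of the workflow only: cluster multivariate model output into regimes and compute error statistics per cluster against matched observations; k-means stands in for the paper's two-stage SOM, and the data are random placeholders.

```python
# Hedged sketch of the workflow: cluster multivariate model output, then
# compute per-cluster error statistics against matched observations. k-means
# stands in for the 2-stage SOM; data are random placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
model_output = rng.normal(size=(500, 6))             # e.g. 6 ecosystem variables at 500 space-time points
observations = model_output + rng.normal(0, 0.5, size=model_output.shape)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(model_output)

for k in range(4):
    sel = labels == k
    rmse = np.sqrt(np.mean((model_output[sel] - observations[sel]) ** 2))
    print(f"cluster {k}: n={sel.sum():3d}, RMSE={rmse:.2f}")
```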

  10. Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties

    NASA Astrophysics Data System (ADS)

    Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro

    2013-12-01

    We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical system (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.

  11. A New Metric for Quantifying Performance Impairment on the Psychomotor Vigilance Test

    DTIC Science & Technology

    2012-01-01

    used the coefficient of determination (R2) and the P-values based on Bartels' test of randomness of the residual error to quantify the goodness-of-fit ... we used the goodness-of-fit between each metric and the corresponding individualized two-process model output (Rajaraman et al., 2008, 2009) to assess ... individualized two-process model fits for each of the 12 subjects using the five metrics. The P-values are for Bartels'

  12. Quantifying the performance of individual players in a team activity.

    PubMed

    Duch, Jordi; Waitzman, Joshua S; Amaral, Luís A Nunes

    2010-06-16

    Teamwork is a fundamental aspect of many human activities, from business to art and from sports to science. Recent research suggests that teamwork is of crucial importance to cutting-edge scientific research, but little is known about how teamwork leads to greater creativity. Indeed, for many team activities, it is not even clear how to assign credit to individual team members. Remarkably, at least in the context of sports, there is usually a broad consensus on who are the top performers and on what qualifies as an outstanding performance. In order to determine how individual features can be quantified, and as a test bed for other team-based human activities, we analyze the performance of players in the European Cup 2008 soccer tournament. We develop a network approach that provides a powerful quantification of the contributions of individual players and of overall team performance. We hypothesize that generalizations of our approach could be useful in other contexts where quantification of the contributions of individual team members is important.

  13. Quantifying the causes of differences in tropospheric OH within global models

    NASA Astrophysics Data System (ADS)

    Nicely, Julie M.; Salawitch, Ross J.; Canty, Timothy; Anderson, Daniel C.; Arnold, Steve R.; Chipperfield, Martyn P.; Emmons, Louisa K.; Flemming, Johannes; Huijnen, Vincent; Kinnison, Douglas E.; Lamarque, Jean-François; Mao, Jingqiu; Monks, Sarah A.; Steenrod, Stephen D.; Tilmes, Simone; Turquety, Solene

    2017-02-01

    The hydroxyl radical (OH) is the primary daytime oxidant in the troposphere and provides the main loss mechanism for many pollutants and greenhouse gases, including methane (CH4). Global mean tropospheric OH differs by as much as 80% among various global models, for reasons that are not well understood. We use neural networks (NNs), trained using archived output from eight chemical transport models (CTMs) that participated in the Polar Study using Aircraft, Remote Sensing, Surface Measurements and Models, of Climate, Chemistry, Aerosols and Transport Model Intercomparison Project (POLMIP), to quantify the factors responsible for differences in tropospheric OH and resulting CH4 lifetime (τCH4) between these models. Annual average τCH4, for loss by OH only, ranges from 8.0 to 11.6 years for the eight POLMIP CTMs. The factors driving these differences were quantified by inputting 3-D chemical fields from one CTM into the trained NN of another CTM. Across all CTMs, the largest mean differences in τCH4 (ΔτCH4) result from variations in chemical mechanisms (ΔτCH4 = 0.46 years), the photolysis frequency (J) of O3 → O(1D) (0.31 years), local O3 (0.30 years), and CO (0.23 years). The ΔτCH4 due to CTM differences in NOx (NO + NO2) is relatively low (0.17 years), although large regional variation in OH between the CTMs is attributed to NOx. Differences in isoprene and J(NO2) have negligible overall effect on globally averaged tropospheric OH, although the extent of OH variations due to each factor depends on the model being examined. This study demonstrates that NNs can serve as a useful tool for quantifying why tropospheric OH varies between global models, provided that essential chemical fields are archived.

  14. Quantifying the Causes of Differences in Tropospheric OH Within Global Models

    NASA Technical Reports Server (NTRS)

    Nicely, Julie M.; Salawitch, Ross J.; Canty, Timothy; Anderson, Daniel C.; Arnold, Steve R.; Chipperfield, Martyn P.; Emmons, Louisa K.; Flemming, Johannes; Huijnen, Vincent; Kinnison, Douglas E.; hide

    2017-01-01

    The hydroxyl radical (OH) is the primary daytime oxidant in the troposphere and provides the main loss mechanism for many pollutants and greenhouse gases, including methane (CH4). Global mean tropospheric OH differs by as much as 80% among various global models, for reasons that are not well understood. We use neural networks (NNs), trained using archived output from eight chemical transport models (CTMs) that participated in the Polar Study using Aircraft, Remote Sensing, Surface Measurements and Models, of Climate, Chemistry, Aerosols and Transport Model Intercomparison Project (POLMIP), to quantify the factors responsible for differences in tropospheric OH and resulting CH4 lifetime (Tau CH4) between these models. Annual average Tau CH4, for loss by OH only, ranges from 8.0 to 11.6 years for the eight POLMIP CTMs. The factors driving these differences were quantified by inputting 3-D chemical fields from one CTM into the trained NN of another CTM. Across all CTMs, the largest mean differences in Tau CH4 (Delta Tau CH4) result from variations in chemical mechanisms (Delta Tau CH4 = 0.46 years), the photolysis frequency (J) of O3 → O(1D) (0.31 years), local O3 (0.30 years), and CO (0.23 years). The Delta Tau CH4 due to CTM differences in NO(x) (NO + NO2) is relatively low (0.17 years), although large regional variation in OH between the CTMs is attributed to NO(x). Differences in isoprene and J(NO2) have negligible overall effect on globally averaged tropospheric OH, although the extent of OH variations due to each factor depends on the model being examined. This study demonstrates that NNs can serve as a useful tool for quantifying why tropospheric OH varies between global models, provided that essential chemical fields are archived.

  15. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  16. Understanding and quantifying foliar temperature acclimation for Earth System Models

    NASA Astrophysics Data System (ADS)

    Smith, N. G.; Dukes, J.

    2015-12-01

    Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represents a major uncertainty in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were done using multiple studies that employed differing methodology. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), maximum rate of Ribulose-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves in plants acclimated to different temperatures was used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which they were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan. These data indicate that models using previous acclimation formulations are likely incorrectly

  17. Good Models Gone Bad: Quantifying and Predicting Parameter-Induced Climate Model Simulation Failures

    NASA Astrophysics Data System (ADS)

    Lucas, D. D.; Klein, R.; Tannahill, J.; Brandon, S.; Covey, C. C.; Domyancic, D.; Ivanova, D. P.

    2012-12-01

    Simulations using IPCC-class climate models are subject to fail or crash for a variety of reasons. Statistical analysis of the failures can yield useful insights to better understand and improve the models. During the course of uncertainty quantification (UQ) ensemble simulations to assess the effects of ocean model parameter uncertainties on climate simulations, we experienced a series of simulation failures of the Parallel Ocean Program (POP2). About 8.5% of our POP2 runs failed for numerical reasons at certain combinations of parameter values. We apply support vector machine (SVM) classification from the fields of pattern recognition and machine learning to quantify and predict the probability of failure as a function of the values of 18 POP2 parameters. The SVM classifiers readily predict POP2 failures in an independent validation ensemble, and are subsequently used to determine the causes of the failures via a global sensitivity analysis. Four parameters related to ocean mixing and viscosity are identified as the major sources of POP2 failures. Our method can be used to improve the robustness of complex scientific models to parameter perturbations and to better steer UQ ensembles. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013 (UCRL LLNL-ABS-569112).
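
    A minimal sketch of the classification step described above: learn a failure/success boundary in parameter space with an SVM and use it to estimate failure probability; the 18-parameter ensemble and the failure rule below are toy assumptions, not POP2 behavior.

```python
# Hedged sketch: classify simulation success/failure from parameter values
# with an SVM. The ensemble and failure rule are toy stand-ins for POP2.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(1000, 18))                    # scaled parameter samples
# Toy failure rule: runs "fail" when two viscosity-like parameters are both extreme.
y = ((X[:, 0] > 0.7) & (X[:, 1] < 0.3)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, probability=True))
clf.fit(X_train, y_train)

print("validation accuracy:", clf.score(X_test, y_test))
print("failure probability of one new parameter set:", clf.predict_proba(X_test[:1])[0, 1])
```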

  18. Shoulder-Fired Weapons with High Recoil Energy: Quantifying Injury and Shooting Performance

    DTIC Science & Technology

    2004-05-01

    ... myofascial and other musculoskeletal pain is considered abnormal if the anatomical site is 2 kg/cm2 lower relative to a normal control point, such as

  19. Development of task network models of human performance in microgravity

    NASA Technical Reports Server (NTRS)

    Diaz, Manuel F.; Adam, Susan

    1992-01-01

    This paper discusses the utility of task-network modeling for quantifying human performance variability in microgravity. The data are gathered for: (1) improving current methodologies for assessing human performance and workload in the operational space environment; (2) developing tools for assessing alternative system designs; and (3) developing an integrated set of methodologies for the evaluation of performance degradation during extended duration spaceflight. The evaluation entailed an analysis of the Remote Manipulator System payload-grapple task performed on many shuttle missions. Task-network modeling can be used as a tool for assessing and enhancing human performance in man-machine systems, particularly for modeling long-duration manned spaceflight. Task-network modeling can be directed toward improving system efficiency by increasing the understanding of basic capabilities of the human component in the system and the factors that influence these capabilities.

  20. Using channelized Hotelling observers to quantify temporal effects of medical liquid crystal displays on detection performance

    NASA Astrophysics Data System (ADS)

    Platiša, Ljiljana; Goossens, Bart; Vansteenkiste, Ewout; Badano, Aldo; Philips, Wilfried

    2010-02-01

    Clinical practice is rapidly moving in the direction of volumetric imaging. Often, radiologists interpret these images in liquid crystal displays at browsing rates of 30 frames per second or higher. However, recent studies suggest that the slow response of the display can compromise image quality. In order to quantify the temporal effect of medical displays on detection performance, we investigate two designs of a multi-slice channelized Hotelling observer (msCHO) model in the task of detecting a single-slice signal in multi-slice simulated images. The design of msCHO models is inspired by simplifying assumptions about how humans observe while viewing in the stack-browsing mode. For comparison, we consider a standard CHO applied only on the slice where the signal is located, recently used in a similar study. We refer to it as a single-slice CHO (ssCHO). Overall, our results confirm previous findings that the slow response of displays degrades the detection performance of the observers. More specifically, the observed performance range of msCHO designs is higher compared to the ssCHO suggesting that the extent and rate of degradation, though significant, may be less drastic than previously estimated by the ssCHO. Especially, the difference between msCHO and ssCHO is more significant for higher browsing speeds than for slow image sequences or static images. This, together with their design criteria driven by the assumptions about humans, makes the msCHO models promising candidates for further studies aimed at building anthropomorphic observer models for the stack-mode image presentation.
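
    A minimal sketch of a channelized Hotelling observer on a single slice: project images onto channels, form the Hotelling template from the channelized training data, and compute a detectability index; the random channel matrix and synthetic images are placeholders rather than Gabor or Laguerre-Gauss channels applied to real display data.

```python
# Hedged sketch of a channelized Hotelling observer (CHO). Channels and
# images are random placeholders, not real display or clinical data.
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_chan, n_train = 64 * 64, 10, 400

U = rng.normal(size=(n_pix, n_chan))                # placeholder channel matrix
signal = np.zeros(n_pix)
signal[2016:2080] = 0.8                             # hypothetical single-slice signal

noise_imgs = rng.normal(size=(n_train, n_pix))
signal_imgs = noise_imgs + signal                   # signal-known-exactly training pairs

v_noise, v_signal = noise_imgs @ U, signal_imgs @ U  # channel outputs
S = 0.5 * (np.cov(v_noise, rowvar=False) + np.cov(v_signal, rowvar=False))
delta_v = v_signal.mean(axis=0) - v_noise.mean(axis=0)

w = np.linalg.solve(S, delta_v)                     # CHO template in channel space
snr2 = delta_v @ w                                  # detectability index d'^2
print("CHO detectability d':", np.sqrt(snr2))
```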

  1. New Approaches to Quantifying Transport Model Error in Atmospheric CO2 Simulations

    NASA Technical Reports Server (NTRS)

    Ott, L.; Pawson, S.; Zhu, Z.; Nielsen, J. E.; Collatz, G. J.; Gregg, W. W.

    2012-01-01

    In recent years, much progress has been made in observing CO2 distributions from space. However, the use of these observations to infer source/sink distributions in inversion studies continues to be complicated by difficulty in quantifying atmospheric transport model errors. We will present results from several different experiments designed to quantify different aspects of transport error using the Goddard Earth Observing System, Version 5 (GEOS-5) Atmospheric General Circulation Model (AGCM). In the first set of experiments, an ensemble of simulations is constructed using perturbations to parameters in the model's moist physics and turbulence parameterizations that control sub-grid scale transport of trace gases. Analysis of the ensemble spread and scales of temporal and spatial variability among the simulations allows insight into how parameterized, small-scale transport processes influence simulated CO2 distributions. In the second set of experiments, atmospheric tracers representing model error are constructed using observation minus analysis statistics from NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA). The goal of these simulations is to understand how errors in large-scale dynamics are distributed, and how they propagate in space and time, affecting trace gas distributions. These simulations will also be compared to results from NASA's Carbon Monitoring System Flux Pilot Project that quantified the impact of uncertainty in satellite-constrained CO2 flux estimates on atmospheric mixing ratios to assess the major factors governing uncertainty in global and regional trace gas distributions.

  2. Patient-specific in silico models can quantify primary implant stability in elderly human bone.

    PubMed

    Steiner, Juri A; Hofmann, Urs A T; Christen, Patrik; Favre, Jean M; Ferguson, Stephen J; van Lenthe, G Harry

    2018-03-01

    Secure implant fixation is challenging in osteoporotic bone. Due to the high variability in inter- and intra-patient bone quality, ex vivo mechanical testing of implants in bone is very material- and time-consuming. Alternatively, in silico models could substantially reduce costs and speed up the design of novel implants if they had the capability to capture the intricate bone microstructure. Therefore, the aim of this study was to validate a micro-finite element model of a multi-screw fracture fixation system. Eight human cadaveric humeri were scanned using micro-CT and mechanically tested to quantify bone stiffness. Osteotomy and fracture fixation were performed, followed by mechanical testing to quantify displacements at 12 different locations on the instrumented bone. For each experimental case, a micro-finite element model was created. From the micro-finite element analyses of the intact model, the patient-specific bone tissue modulus was determined such that the simulated apparent stiffness matched the measured stiffness of the intact bone. Similarly, the tissue modulus of a small damage region around each screw was determined for the instrumented bone. For validation, all in silico models were rerun using averaged material properties, resulting in an average coefficient of determination of 0.89 ± 0.04 with a slope of 0.93 ± 0.19 and a mean absolute error of 43 ± 10 μm when correlating in silico marker displacements with the ex vivo test. In conclusion, we validated a patient-specific computer model of an entire organ bone-implant system at the tissue level at high resolution with excellent overall accuracy. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 36:954-962, 2018.
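
    For a linear-elastic micro-FE model, apparent stiffness scales linearly with the assumed tissue modulus, so the calibration described above reduces to a single rescaling. A minimal sketch under that linearity assumption (the numbers are illustrative, not from the study):

    ```python
    def calibrate_tissue_modulus(e_initial_gpa, k_simulated, k_measured):
        """Rescale the tissue modulus so the simulated apparent stiffness
        matches the measured one (valid for a linear-elastic FE model)."""
        return e_initial_gpa * (k_measured / k_simulated)

    # e.g. a run with E = 10 GPa predicting 28 kN/mm against a measured 21 kN/mm
    print(calibrate_tissue_modulus(10.0, 28.0, 21.0))  # -> 7.5 GPa
    ```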

  3. A Generalizable Methodology for Quantifying User Satisfaction

    NASA Astrophysics Data System (ADS)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times, rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
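
    The survival-analysis viewpoint treats a session that is still running at the end of the measurement window as censored. A minimal Kaplan-Meier sketch of that idea, with toy session times rather than the ShenZhou Online or Skype data:

    ```python
    import numpy as np

    def kaplan_meier(durations, observed):
        """Kaplan-Meier survival curve for session times.

        durations: session lengths; observed: 1 if the session ended, 0 if censored.
        Returns a list of (time, survival probability) pairs."""
        durations = np.asarray(durations, dtype=float)
        observed = np.asarray(observed, dtype=int)
        event_times = np.unique(durations[observed == 1])
        curve, s = [], 1.0
        for t in event_times:
            at_risk = np.sum(durations >= t)
            ended = np.sum((durations == t) & (observed == 1))
            s *= 1.0 - ended / at_risk
            curve.append((t, s))
        return curve

    # toy data: session times in minutes, with two sessions still running (censored)
    print(kaplan_meier([5, 8, 8, 12, 20, 20], [1, 1, 0, 1, 1, 0]))
    ```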

  4. Quantified Gamow shell model interaction for psd-shell nuclei

    NASA Astrophysics Data System (ADS)

    Jaganathen, Y.; Betan, R. M. Id; Michel, N.; Nazarewicz, W.; Płoszajczak, M.

    2017-11-01

    Background: The structure of weakly bound and unbound nuclei close to particle drip lines is one of the major science drivers of nuclear physics. A comprehensive understanding of these systems goes beyond the traditional configuration interaction approach formulated in the Hilbert space of localized states (nuclear shell model) and requires an open quantum system description. The complex-energy Gamow shell model (GSM) provides such a framework as it is capable of describing resonant and nonresonant many-body states on equal footing. Purpose: To make reliable predictions, quality input is needed that allows for the full uncertainty quantification of theoretical results. In this study, we carry out the optimization of an effective GSM (one-body and two-body) interaction in the psdf shell-model space. The resulting interaction is expected to describe nuclei with 5 ≤ A ≲ 12 at the p-sd-shell interface. Method: The one-body potential of the 4He core is modeled by a Woods-Saxon + spin-orbit + Coulomb potential, and the finite-range nucleon-nucleon interaction between the valence nucleons consists of central, spin-orbit, tensor, and Coulomb terms. The GSM is used to compute key fit observables. The χ² optimization is performed using the Gauss-Newton algorithm augmented by the singular value decomposition technique. The resulting covariance matrix enables quantification of statistical errors within the linear regression approach. Results: The optimized one-body potential reproduces nucleon-4He scattering phase shifts up to an excitation energy of 20 MeV. The two-body interaction built on top of the optimized one-body field is adjusted to the bound and unbound ground-state binding energies and selected excited states of the helium, lithium, and beryllium isotopes up to A = 9. A very good agreement with experimental results was obtained for binding energies. First applications of the optimized interaction include predictions for two-nucleon correlation densities

  5. Quantified Gamow shell model interaction for psd-shell nuclei

    DOE PAGES

    Jaganathen, Y.; Betan, R. M. Id; Michel, N.; ...

    2017-11-20

    Background: The structure of weakly bound and unbound nuclei close to particle drip lines is one of the major science drivers of nuclear physics. A comprehensive understanding of these systems goes beyond the traditional configuration interaction approach formulated in the Hilbert space of localized states (nuclear shell model) and requires an open quantum system description. The complex-energy Gamow shell model (GSM) provides such a framework as it is capable of describing resonant and nonresonant many-body states on equal footing. Purpose: To make reliable predictions, quality input is needed that allows for the full uncertainty quantification of theoretical results. In this study, we carry out the optimization of an effective GSM (one-body and two-body) interaction in the psdf shell-model space. The resulting interaction is expected to describe nuclei with 5 ≤ A ≲ 12 at the p-sd-shell interface. Method: The one-body potential of the 4He core is modeled by a Woods-Saxon + spin-orbit + Coulomb potential, and the finite-range nucleon-nucleon interaction between the valence nucleons consists of central, spin-orbit, tensor, and Coulomb terms. The GSM is used to compute key fit observables. The χ² optimization is performed using the Gauss-Newton algorithm augmented by the singular value decomposition technique. The resulting covariance matrix enables quantification of statistical errors within the linear regression approach. Results: The optimized one-body potential reproduces nucleon-4He scattering phase shifts up to an excitation energy of 20 MeV. The two-body interaction built on top of the optimized one-body field is adjusted to the bound and unbound ground-state binding energies and selected excited states of the helium, lithium, and beryllium isotopes up to A = 9. A very good agreement with experimental results was obtained for binding energies. First applications of the optimized interaction include predictions for two-nucleon correlation
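
    A minimal sketch of the χ² machinery described in both records above: a Gauss-Newton step computed through an SVD pseudo-inverse, with a linear-regression covariance estimate at the optimum. The residual and Jacobian functions are user-supplied placeholders, not the GSM code itself.

    ```python
    import numpy as np

    def gauss_newton_svd(residual_fn, jacobian_fn, p0, n_iter=50, tol=1e-10):
        """Gauss-Newton chi^2 minimization using an SVD pseudo-inverse step.

        residual_fn(p) -> weighted residuals r_i = (data_i - model_i(p)) / sigma_i
        jacobian_fn(p) -> d r / d p, shape (n_data, n_params)
        Returns the optimum and a linear-regression covariance estimate."""
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r, J = residual_fn(p), jacobian_fn(p)
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            s_inv = np.where(s > 1e-10 * s.max(), 1.0 / s, 0.0)  # drop tiny singular values
            step = -Vt.T @ (s_inv * (U.T @ r))                   # GN step: -pinv(J) @ r
            p = p + step
            if np.linalg.norm(step) < tol:
                break
        r, J = residual_fn(p), jacobian_fn(p)
        cov = np.linalg.pinv(J.T @ J) * (r @ r) / (r.size - p.size)
        return p, cov
    ```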

  6. Validation of the PVSyst Performance Model for the Concentrix CPV Technology

    NASA Astrophysics Data System (ADS)

    Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault

    2011-12-01

    The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
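
    Model accuracy in such modeled-versus-measured comparisons is commonly summarized with bias and scatter statistics; a minimal sketch (the specific metrics reported in the paper may differ):

    ```python
    import numpy as np

    def accuracy_stats(modeled, measured):
        """Relative bias and scatter of modeled vs. measured energy yield."""
        modeled, measured = np.asarray(modeled), np.asarray(measured)
        err = modeled - measured
        mbe = err.mean()                      # mean bias error
        rmse = np.sqrt((err ** 2).mean())     # root-mean-square error
        return {"MBE %": 100 * mbe / measured.mean(),
                "RMSE %": 100 * rmse / measured.mean()}

    print(accuracy_stats([102.0, 98.5, 110.2], [100.0, 99.0, 108.0]))
    ```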

  7. Data Used in Quantified Reliability Models

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data are the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data were derived, and interpreting the information and details associated with the data, is as important as the data itself.

  8. Quantifying model-structure- and parameter-driven uncertainties in spring wheat phenology prediction with Bayesian analysis

    DOE PAGES

    Alderman, Phillip D.; Stanfill, Bryan

    2016-10-06

    Recent international efforts have brought renewed emphasis on the comparison of different agricultural systems models. Thus far, analysis of model-ensemble simulated results has not clearly differentiated between ensemble prediction uncertainties due to model structural differences per se and those due to parameter value uncertainties. Additionally, despite increasing use of Bayesian parameter estimation approaches with field-scale crop models, inadequate attention has been given to the full posterior distributions for estimated parameters. The objectives of this study were to quantify the impact of parameter value uncertainty on prediction uncertainty for modeling spring wheat phenology using Bayesian analysis and to assess the relative contributions of model-structure-driven and parameter-value-driven uncertainty to overall prediction uncertainty. This study used a random walk Metropolis algorithm to estimate parameters for 30 spring wheat genotypes using nine phenology models based on multi-location trial data for days to heading and days to maturity. Across all cases, parameter-driven uncertainty accounted for between 19 and 52% of predictive uncertainty, while model-structure-driven uncertainty accounted for between 12 and 64%. Here, this study demonstrated the importance of quantifying both model-structure- and parameter-value-driven uncertainty when assessing overall prediction uncertainty in modeling spring wheat phenology. More generally, Bayesian parameter estimation provided a useful framework for quantifying and analyzing sources of prediction uncertainty.
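
    A minimal sketch of a random walk Metropolis sampler of the kind referred to above, assuming a user-supplied log-posterior; the proposal step size and the phenology models themselves are outside the sketch.

    ```python
    import numpy as np

    def random_walk_metropolis(log_post, p0, step, n_samples=50_000, rng=None):
        """Random walk Metropolis sampler returning the posterior chain."""
        rng = rng or np.random.default_rng(0)
        p = np.atleast_1d(np.asarray(p0, dtype=float))
        lp = log_post(p)
        chain = np.empty((n_samples, p.size))
        for i in range(n_samples):
            proposal = p + step * rng.standard_normal(p.size)
            lp_new = log_post(proposal)
            if np.log(rng.random()) < lp_new - lp:   # Metropolis accept/reject
                p, lp = proposal, lp_new
            chain[i] = p
        return chain

    # toy example: sample a standard normal posterior
    samples = random_walk_metropolis(lambda x: -0.5 * np.sum(x ** 2), [0.0], step=1.0)
    print(samples.mean(), samples.std())
    ```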

  9. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    NASA Astrophysics Data System (ADS)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 to March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind-profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  10. Virtual environment to quantify the influence of colour stimuli on the performance of tasks requiring attention.

    PubMed

    Silva, Alessandro P; Frère, Annie F

    2011-08-19

    Recent studies indicate that the blue-yellow colour discrimination is impaired in ADHD individuals. However, the relationship between colour and performance has not been investigated. This paper describes the development and the testing of a virtual environment that can quantify the influence of red-green versus blue-yellow colour stimuli on the performance of people in a fun and interactive way, being appropriate for the target audience. An interactive computer game based on virtual reality was developed to evaluate the performance of the players. The game's storyline was based on the story of an old pirate who runs across islands and dangerous seas in search of a lost treasure. Within the game, the player must find and interpret the hints scattered in different scenarios. Two versions of this game were implemented. In the first, hints and information boards were painted using red and green colours. In the second version, these objects were painted using blue and yellow colours. For modelling, texturing, and animating virtual characters and objects the three-dimensional computer graphics tool Blender 3D was used. The textures were created with the GIMP editor to provide visual effects increasing the realism and immersion of the players. The games were tested on 20 non-ADHD volunteers who were divided into two subgroups (A1 and A2) and 20 volunteers with ADHD who were divided into subgroups B1 and B2. Subgroups A1 and B1 used the first version of the game with the hints painted in green-red colors, and subgroups A2 and B2 the second version using the same hints now painted in blue-yellow. The time spent to complete each task of the game was measured. Data analyzed with a two-way ANOVA and post hoc Tukey LSD tests showed that the use of blue/yellow instead of green/red colors decreased the game performance of all participants. However, a greater decrease in performance could be observed with ADHD participants where tasks that require attention were most affected
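
    A minimal sketch of the two-way layout analyzed above (group × colour, with an interaction term), using statsmodels and invented completion times rather than the study's data:

    ```python
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical layout: one row per participant, completion time in seconds.
    df = pd.DataFrame({
        "time":   [92, 88, 110, 105, 95, 130, 150, 170],
        "group":  ["nonADHD"] * 4 + ["ADHD"] * 4,
        "colour": ["redgreen", "blueyellow"] * 4,
    })

    model = smf.ols("time ~ C(group) * C(colour)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # two-way ANOVA table with interaction
    ```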

  11. Quantifier Comprehension in Corticobasal Degeneration

    ERIC Educational Resources Information Center

    McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray

    2006-01-01

    In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…

  12. Quantifying spillover spreading for comparing instrument performance and aiding in multicolor panel design.

    PubMed

    Nguyen, Richard; Perfetto, Stephen; Mahnke, Yolanda D; Chattopadhyay, Pratip; Roederer, Mario

    2013-03-01

    After compensation, the measurement errors arising from multiple fluorescences spilling into each detector become evident by the spreading of nominally negative distributions. Depending on the instrument configuration and performance, and reagents used, this "spillover spreading" (SS) affects sensitivity in any given parameter. The degree of SS had been predicted theoretically to increase with measurement error, i.e., by the square root of fluorescence intensity, as well as directly related to the spectral overlap matrix coefficients. We devised a metric to quantify SS between any pair of detectors. This metric is intrinsic, as it is independent of fluorescence intensity. The combination of all such values for one instrument can be represented as a spillover spreading matrix (SSM). Single-stained controls were used to determine the SSM on multiple instruments over time, and under various conditions of signal quality. SSM values reveal fluorescence spectrum interactions that can limit the sensitivity of a reagent in the presence of brightly-stained cells on a different color. The SSM was found to be highly reproducible; its non-trivial values show a CV of less than 30% across a 2-month time frame. In addition, the SSM is comparable between similarly-configured instruments; instrument-specific differences in the SSM reveal underperforming detectors. Quantifying and monitoring the SSM can be a useful tool in instrument quality control to ensure consistent sensitivity and performance. In addition, the SSM is a key element for predicting the performance of multicolor immunofluorescence panels, which will aid in the optimization and development of new panels. We propose that the SSM is a critical component of QA/QC in evaluation of flow cytometer performance. Published 2013 Wiley Periodicals, Inc.

  13. Quantifying and Reducing Curve-Fitting Uncertainty in Isc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    2015-06-14

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.
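
    The evidence-based window selection itself requires the full objective Bayesian machinery, but a crude stand-in conveys the idea: fit a straight line over several candidate voltage windows near short circuit and compare the per-point residual scatter. A minimal sketch, not the authors' method:

    ```python
    import numpy as np

    def fit_isc(voltage, current, max_v_fraction=0.2):
        """Straight-line fit of the I-V curve near short circuit.

        Tries successively larger voltage windows and keeps the one with the
        smallest per-point residual scatter, a crude stand-in for the
        evidence-based window selection described above."""
        v, i = np.asarray(voltage), np.asarray(current)
        best = None
        for frac in np.linspace(0.05, max_v_fraction, 8):
            mask = v <= frac * v.max()
            if mask.sum() < 3:
                continue
            (slope, isc), res, *_ = np.polyfit(v[mask], i[mask], 1, full=True)
            sigma = np.sqrt(res[0] / (mask.sum() - 2)) if res.size else 0.0
            if best is None or sigma < best[0]:
                best = (sigma, isc, slope, int(mask.sum()))
        return best  # (residual sigma, Isc estimate, slope, points used)
    ```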

  14. Quantifying and modeling soil erosion and sediment export from construction sites in southern California

    NASA Astrophysics Data System (ADS)

    Wernet, A. K.; Beighley, R. E.

    2006-12-01

    Soil erosion is a powerful process that continuously alters the Earth's landscape. Human activities, such as construction and agricultural practices, and natural events, such as forest fires and landslides, disturb the landscape and intensify erosion processes, leading to sudden increases in runoff sediment concentrations and degraded stream water quality. Understanding soil erosion and sediment transport processes is of great importance to researchers and practicing engineers, who routinely use models to predict soil erosion and sediment movement for varied land use and climate change scenarios. However, existing erosion models are limited in their applicability to construction sites, which have highly variable soil conditions (density, moisture, surface roughness, and best management practices) that change often in both space and time. The goal of this research is to improve the understanding, predictive capabilities and integration of treatment methodologies for controlling soil erosion and sediment export from construction sites. This research combines modeling with field monitoring and laboratory experiments to quantify: (a) spatial and temporal distribution of soil conditions on construction sites, (b) soil erosion due to event rainfall, and (c) potential offsite discharge of sediment with and without treatment practices. Field sites in southern California were selected to monitor the effects of common construction activities (e.g., cut/fill, grading, foundations, roads) on soil conditions and sediment discharge. Laboratory experiments were performed in the Soil Erosion Research Laboratory (SERL), part of the Civil and Environmental Engineering department at San Diego State University, to quantify the impact of individual factors leading to sediment export. SERL experiments utilize a 3-m by 10-m tilting soil bed with soil depths up to 1 m, slopes ranging from 0 to 50 percent, and rainfall rates up to 150 mm/hr (6 in/hr). Preliminary modeling, field and laboratory

  15. Quantifying uncertainty in Bayesian calibrated animal-to-human PBPK models with informative prior distributions

    EPA Science Inventory

    Understanding and quantifying the uncertainty of model parameters and predictions has gained more interest in recent years with the increased use of computational models in chemical risk assessment. Fully characterizing the uncertainty in risk metrics derived from linked quantita...

  16. Using Poisson mixed-effects model to quantify transcript-level gene expression in RNA-Seq.

    PubMed

    Hu, Ming; Zhu, Yu; Taylor, Jeremy M G; Liu, Jun S; Qin, Zhaohui S

    2012-01-01

    RNA sequencing (RNA-Seq) is a powerful new technology for mapping and quantifying transcriptomes using ultra high-throughput next-generation sequencing technologies. Using deep sequencing, gene expression levels of all transcripts including novel ones can be quantified digitally. Although extremely promising, the massive amounts of data generated by RNA-Seq, together with substantial biases and uncertainty in short read alignment, pose challenges for data analysis. In particular, large base-specific variation and between-base dependence make simple approaches, such as those that use averaging to normalize RNA-Seq data and quantify gene expression, ineffective. In this study, we propose a Poisson mixed-effects (POME) model to characterize base-level read coverage within each transcript. The underlying expression level is included as a key parameter in this model. Since the proposed model is capable of incorporating base-specific variation as well as between-base dependence that affect read coverage profile throughout the transcript, it can lead to improved quantification of the true underlying expression level. POME can be freely downloaded at http://www.stat.purdue.edu/~yuzhu/pome.html. Contact: yuzhu@purdue.edu; zhaohui.qin@emory.edu. Supplementary data are available at Bioinformatics online.

  17. A multi-scale modelling procedure to quantify hydrological impacts of upland land management

    NASA Astrophysics Data System (ADS)

    Wheater, H. S.; Jackson, B.; Bulygina, N.; Ballard, C.; McIntyre, N.; Marshall, M.; Frogbrook, Z.; Solloway, I.; Reynolds, B.

    2008-12-01

    Recent UK floods have focused attention on the effects of agricultural intensification on flood risk. However, quantification of these effects raises important methodological issues. Catchment-scale data have proved inadequate to support analysis of impacts of land management change, due to climate variability, uncertainty in input and output data, spatial heterogeneity in land use and lack of data to quantify historical changes in management practices. Manipulation experiments to quantify the impacts of land management change have necessarily been limited and small scale, and in the UK mainly focused on the lowlands and arable agriculture. There is a need to develop methods to extrapolate from small scale observations to predict catchment-scale response, and to quantify impacts for upland areas. With assistance from a cooperative of Welsh farmers, a multi-scale experimental programme has been established at Pontbren, in mid-Wales, an area of intensive sheep production. The data have been used to support development of a multi-scale modelling methodology to assess impacts of agricultural intensification and the potential for mitigation of flood risk through land use management. Data are available from replicated experimental plots under different land management treatments, from instrumented field and hillslope sites, including tree shelter belts, and from first and second order catchments. Measurements include climate variables, soil water states and hydraulic properties at multiple depths and locations, tree interception, overland flow and drainflow, groundwater levels, and streamflow from multiple locations. Fine resolution physics-based models have been developed to represent soil and runoff processes, conditioned using experimental data. The detailed models are used to calibrate simpler 'meta-models' to represent individual hydrological elements, which are then combined in a semi-distributed catchment-scale model. The methodology is illustrated using field

  18. Quantifying Uncertainty in Inverse Models of Geologic Data from Shear Zones

    NASA Astrophysics Data System (ADS)

    Davis, J. R.; Titus, S.

    2016-12-01

    We use Bayesian Markov chain Monte Carlo simulation to quantify uncertainty in inverse models of geologic data. Although this approach can be applied to many tectonic settings, field areas, and mathematical models, we focus on transpressional shear zones. The underlying forward model, either kinematic or dynamic, produces a velocity field, which predicts the dikes, foliation-lineations, crystallographic preferred orientation (CPO), shape preferred orientation (SPO), and other geologic data that should arise in the shear zone. These predictions are compared to data using modern methods of geometric statistics, including the Watson (for lines such as dike poles), isotropic matrix Fisher (for orientations such as foliation-lineations and CPO), and multivariate normal (for log-ellipsoids such as SPO) distributions. The result of the comparison is a likelihood, which is a key ingredient in the Bayesian approach. The other key ingredient is a prior distribution, which reflects the geologist's knowledge of the parameters before seeing the data. For some parameters, such as shear zone strike and dip, we identify realistic informative priors. For other parameters, where the geologist has no prior knowledge, we identify useful uninformative priors. We investigate the performance of this approach through numerical experiments on synthetic data sets. A fundamental issue is that many models of deformation exhibit asymptotic behavior (e.g., flow apophyses, fabric attractors) or periodic behavior (e.g., SPO when the clasts are rigid), which causes the likelihood to be too uniform. Based on our experiments, we offer rules of thumb for how many data, of which types, are needed to constrain deformation.

  19. Quantifying Transmission.

    PubMed

    Woolhouse, Mark

    2017-07-01

    Transmissibility is the defining characteristic of infectious diseases. Quantifying transmission matters for understanding infectious disease epidemiology and designing evidence-based disease control programs. Tracing individual transmission events can be achieved by epidemiological investigation coupled with pathogen typing or genome sequencing. Individual infectiousness can be estimated by measuring pathogen loads, but few studies have directly estimated the ability of infected hosts to transmit to uninfected hosts. Individuals' opportunities to transmit infection are dependent on behavioral and other risk factors relevant given the transmission route of the pathogen concerned. Transmission at the population level can be quantified through knowledge of risk factors in the population or phylogeographic analysis of pathogen sequence data. Mathematical model-based approaches require estimation of the per capita transmission rate and basic reproduction number, obtained by fitting models to case data and/or analysis of pathogen sequence data. Heterogeneities in infectiousness, contact behavior, and susceptibility can have substantial effects on the epidemiology of an infectious disease, so estimates of only mean values may be insufficient. For some pathogens, super-shedders (infected individuals who are highly infectious) and super-spreaders (individuals with more opportunities to transmit infection) may be important. Future work on quantifying transmission should involve integrated analyses of multiple data sources.
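
    As one concrete example of the model-based route, the basic reproduction number can be estimated from the early exponential growth rate of case counts; the sketch below uses the simple SIR-type relation R0 ≈ 1 + rD with invented numbers.

    ```python
    import numpy as np

    def r0_from_growth(case_counts, infectious_period_days, dt_days=1.0):
        """Rough basic reproduction number from early exponential growth,
        using the simple SIR-type relation R0 ~ 1 + r * D."""
        t = np.arange(len(case_counts)) * dt_days
        r = np.polyfit(t, np.log(case_counts), 1)[0]   # growth rate per day
        return 1.0 + r * infectious_period_days

    # toy outbreak: daily case counts during early growth, 5-day infectious period
    print(r0_from_growth([4, 6, 10, 15, 24, 38], infectious_period_days=5.0))
    ```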

  20. Virtual environment to quantify the influence of colour stimuli on the performance of tasks requiring attention

    PubMed Central

    2011-01-01

    Background Recent studies indicate that the blue-yellow colour discrimination is impaired in ADHD individuals. However, the relationship between colour and performance has not been investigated. This paper describes the development and the testing of a virtual environment that can quantify the influence of red-green versus blue-yellow colour stimuli on the performance of people in a fun and interactive way, being appropriate for the target audience. Methods An interactive computer game based on virtual reality was developed to evaluate the performance of the players. The game's storyline was based on the story of an old pirate who runs across islands and dangerous seas in search of a lost treasure. Within the game, the player must find and interpret the hints scattered in different scenarios. Two versions of this game were implemented. In the first, hints and information boards were painted using red and green colours. In the second version, these objects were painted using blue and yellow colours. For modelling, texturing, and animating virtual characters and objects the three-dimensional computer graphics tool Blender 3D was used. The textures were created with the GIMP editor to provide visual effects increasing the realism and immersion of the players. The games were tested on 20 non-ADHD volunteers who were divided into two subgroups (A1 and A2) and 20 volunteers with ADHD who were divided into subgroups B1 and B2. Subgroups A1 and B1 used the first version of the game with the hints painted in green-red colors, and subgroups A2 and B2 the second version using the same hints now painted in blue-yellow. The time spent to complete each task of the game was measured. Results Data analyzed with a two-way ANOVA and post hoc Tukey LSD tests showed that the use of blue/yellow instead of green/red colors decreased the game performance of all participants. However, a greater decrease in performance could be observed with ADHD participants where tasks that require

  1. Using analogues to quantify geological uncertainty in stochastic reserve modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wells, B.; Brown, I.

    1995-08-01

    The petroleum industry seeks to minimize exploration risk by employing the best possible expertise, methods and tools. Is it possible to quantify the success of this process of risk reduction? Due to inherent uncertainty in predicting geological reality and due to changing environments for hydrocarbon exploration, it is not enough simply to record the proportion of successful wells drilled; in various parts of the world it has been noted that pseudo-random drilling would apparently have been as successful as the actual drilling programme. How, then, should we judge the success of risk reduction? For many years the E&P industry has routinely used Monte Carlo modelling to generate a probability distribution for prospect reserves. One aspect of Monte Carlo modelling which has received insufficient attention, but which is essential for quantifying risk reduction, is the consistency and repeatability with which predictions can be made. Reducing the subjective element inherent in the specification of geological uncertainty allows better quantification of uncertainty in the prediction of reserves, in both exploration and appraisal. Building on work reported at the AAPG annual conventions in 1994 and 1995, the present paper incorporates analogue information with uncertainty modelling. Analogues provide a major step forward in the quantification of risk, but their significance is potentially greater still. The two principal contributors to uncertainty in field and prospect analysis are the hydrocarbon life-cycle and the geometry of the trap. These are usually treated separately. Combining them into a single model is a major contribution to the reduction of risk. This work is based in part on a joint project with Oryx Energy UK Ltd., and thanks are due in particular to Richard Benmore and Mike Cooper.
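
    A minimal sketch of the kind of Monte Carlo reserves calculation referred to above, with entirely hypothetical input distributions (the analogue-based constraints that are the paper's contribution are not represented):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000

    # Hypothetical volumetric inputs (means and spreads are illustrative only).
    area      = rng.lognormal(np.log(12.0), 0.30, N)    # km^2
    thickness = rng.lognormal(np.log(25.0), 0.25, N)    # m
    porosity  = rng.normal(0.22, 0.03, N).clip(0.05, 0.35)
    sat_hc    = rng.normal(0.70, 0.05, N).clip(0.30, 0.90)
    recovery  = rng.normal(0.35, 0.05, N).clip(0.10, 0.60)
    fvf       = 1.2                                     # formation volume factor

    grv = area * 1e6 * thickness                        # gross rock volume, m^3
    stoiip = grv * porosity * sat_hc / fvf              # in-place volume, m^3
    reserves = stoiip * recovery * 6.2898 / 1e6         # recoverable, million barrels

    p90, p50, p10 = np.percentile(reserves, [10, 50, 90])
    print(f"P90 {p90:.0f}  P50 {p50:.0f}  P10 {p10:.0f} MMbbl")
    ```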

  2. Quantifying and Reducing Curve-Fitting Uncertainty in Isc: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, Mark; Duck, Benjamin; Emery, Keith

    Current-voltage (I-V) curve measurements of photovoltaic (PV) devices are used to determine performance parameters and to establish traceable calibration chains. Measurement standards specify localized curve fitting methods, e.g., straight-line interpolation/extrapolation of the I-V curve points near short-circuit current, Isc. By considering such fits as statistical linear regressions, uncertainties in the performance parameters are readily quantified. However, the legitimacy of such a computed uncertainty requires that the model be a valid (local) representation of the I-V curve and that the noise be sufficiently well characterized. Using more data points often has the advantage of lowering the uncertainty. However, more data points can make the uncertainty in the fit arbitrarily small, and this fit uncertainty misses the dominant residual uncertainty due to so-called model discrepancy. Using objective Bayesian linear regression for straight-line fits for Isc, we investigate an evidence-based method to automatically choose data windows of I-V points with reduced model discrepancy. We also investigate noise effects. Uncertainties, aligned with the Guide to the Expression of Uncertainty in Measurement (GUM), are quantified throughout.

  3. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier–Stokes simulations: A data-driven, physics-informed Bayesian approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, H., E-mail: hengxiao@vt.edu; Wu, J.-L.; Wang, J.-X.

    Despite their well-known limitations, Reynolds-Averaged Navier–Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach
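
    A minimal sketch of a single ensemble Kalman update of uncertain parameters against sparse observations, the building block of the iterative method described above; the array shapes are assumptions and the Reynolds-stress parameterization itself is not represented.

    ```python
    import numpy as np

    def enkf_update(params, predicted_obs, observations, obs_error_cov, rng=None):
        """One ensemble Kalman update with perturbed observations.

        params: (n_ens, n_params) current parameter ensemble
        predicted_obs: (n_ens, n_obs) model-predicted observables per member
        observations: (n_obs,) observation vector
        obs_error_cov: (n_obs, n_obs) observation error covariance
        """
        rng = rng or np.random.default_rng(0)
        n = params.shape[0]
        X = params - params.mean(axis=0)                 # parameter anomalies
        Y = predicted_obs - predicted_obs.mean(axis=0)   # observable anomalies
        Cxy = X.T @ Y / (n - 1)
        Cyy = Y.T @ Y / (n - 1) + obs_error_cov
        K = Cxy @ np.linalg.inv(Cyy)                     # Kalman gain
        perturbed = observations + rng.multivariate_normal(
            np.zeros(len(observations)), obs_error_cov, size=n)
        return params + (perturbed - predicted_obs) @ K.T
    ```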

  4. Analytical performance evaluation of SAR ATR with inaccurate or estimated models

    NASA Astrophysics Data System (ADS)

    DeVore, Michael D.

    2004-09-01

    Hypothesis testing algorithms for automatic target recognition (ATR) are often formulated in terms of some assumed distribution family. The parameter values corresponding to a particular target class together with the distribution family constitute a model for the target's signature. In practice such models exhibit inaccuracy because of incorrect assumptions about the distribution family and/or because of errors in the assumed parameter values, which are often determined experimentally. Model inaccuracy can have a significant impact on performance predictions for target recognition systems. Such inaccuracy often causes model-based predictions that ignore the difference between assumed and actual distributions to be overly optimistic. This paper reports on research to quantify the effect of inaccurate models on performance prediction and to estimate the effect using only trained parameters. We demonstrate that for large observation vectors the class-conditional probabilities of error can be expressed as a simple function of the difference between two relative entropies. These relative entropies quantify the discrepancies between the actual and assumed distributions and can be used to express the difference between actual and predicted error rates. Focusing on the problem of ATR from synthetic aperture radar (SAR) imagery, we present estimators of the probabilities of error in both ideal and plug-in tests expressed in terms of the trained model parameters. These estimators are defined in terms of unbiased estimates for the first two moments of the sample statistic. We present an analytical treatment of these results and include demonstrations from simulated radar data.

  5. Webinar Presentation: Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty

    EPA Pesticide Factsheets

    This presentation, Particle-Resolved Simulations for Quantifying Black Carbon Climate Impact and Model Uncertainty, was given at the STAR Black Carbon 2016 Webinar Series: Changing Chemistry over Time held on Oct. 31, 2016.

  6. Quantifying Mapping Orbit Performance in the Vicinity of Primitive Bodies

    NASA Technical Reports Server (NTRS)

    Pavlak, Thomas A.; Broschart, Stephen B.; Lantoine, Gregory

    2015-01-01

    Predicting and quantifying the capability of mapping orbits in the vicinity of primitive bodies is challenging given the complex orbit geometries that exist and the irregular shape of the bodies themselves. This paper employs various quantitative metrics to characterize the performance and relative effectiveness of various types of mapping orbits including terminator, quasi-terminator, hovering, ping pong, and conic-like trajectories. Metrics of interest include surface area coverage, lighting conditions, and the variety of viewing angles achieved. The metrics discussed in this investigation are intended to enable mission designers and project stakeholders to better characterize candidate mapping orbits during preliminary mission formulation activities. The goal of this investigation is to understand the trade space associated with carrying out remote sensing campaigns at small primitive bodies in the context of a robotic space mission. Specifically, this study seeks to understand the surface viewing geometries, ranges, etc. that are available from several commonly proposed mapping orbit architectures.

  7. Quantifying Mapping Orbit Performance in the Vicinity of Primitive Bodies

    NASA Technical Reports Server (NTRS)

    Pavlak, Thomas A.; Broschart, Stephen B.; Lantoine, Gregory

    2015-01-01

    Predicting and quantifying the capability of mapping orbits in the vicinity of primitive bodies is challenging given the complex orbit geometries that exist and the irregular shape of the bodies themselves. This paper employs various quantitative metrics to characterize the performance and relative effectiveness of various types of mapping orbits including terminator, quasi-terminator, hovering, ping pong, and conic-like trajectories. Metrics of interest include surface area coverage, lighting conditions, and the variety of viewing angles achieved. The metrics discussed in this investigation are intended to enable mission designers and project stakeholders to better characterize candidate mapping orbits during preliminary mission formulation activities. The goal of this investigation is to understand the trade space associated with carrying out remote sensing campaigns at small primitive bodies in the context of a robotic space mission. Specifically, this study seeks to understand the surface viewing geometries, ranges, etc. that are available from several commonly proposed mapping orbit architectures.

  8. Data driven models of the performance and repeatability of NIF high foot implosions

    NASA Astrophysics Data System (ADS)

    Gaffney, Jim; Casey, Dan; Callahan, Debbie; Hartouni, Ed; Ma, Tammy; Spears, Brian

    2015-11-01

    Recent high foot (HF) inertial confinement fusion (ICF) experiments performed at the National Ignition Facility (NIF) have consisted of enough laser shots that a data-driven analysis of capsule performance is feasible. In this work we use 20-30 individual implosions of similar design, spanning laser drive energies from 1.2 to 1.8 MJ, to quantify our current understanding of the behavior of HF ICF implosions. We develop a probabilistic model for the projected performance of a given implosion and use it to quantify uncertainties in predicted performance including shot-shot variations and observation uncertainties. We investigate the statistical significance of the observed performance differences between different laser pulse shapes, ablator materials, and capsule designs. Finally, using a cross-validation technique, we demonstrate that 5-10 repeated shots of a similar design are required before real trends in the data can be distinguished from shot-shot variations. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-674957.

  9. Quantifying structural states of soft mudrocks

    NASA Astrophysics Data System (ADS)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to such transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach and the critical intervals with low-strength shale formations are identified.

  10. Quantifying Hydro-biogeochemical Model Sensitivity in Assessment of Climate Change Effect on Hyporheic Zone Processes

    NASA Astrophysics Data System (ADS)

    Song, X.; Chen, X.; Dai, H.; Hammond, G. E.; Song, H. S.; Stegen, J.

    2016-12-01

    The hyporheic zone is an active region for biogeochemical processes such as carbon and nitrogen cycling, where the groundwater and surface water mix and interact with each other with distinct biogeochemical and thermal properties. The biogeochemical dynamics within the hyporheic zone are driven by both river water and groundwater hydraulic dynamics, which are directly affected by climate change scenarios. In addition, the hydraulic and thermal properties of local sediments and microbial and chemical processes also play important roles in biogeochemical dynamics. Thus, for a comprehensive understanding of the biogeochemical processes in the hyporheic zone, a coupled thermo-hydro-biogeochemical model is needed. As multiple uncertainty sources are involved in the integrated model, it is important to identify its key modules/parameters through sensitivity analysis. In this study, we develop a 2D cross-section model of the hyporheic zone at the DOE Hanford site adjacent to the Columbia River and use this model to quantify module and parameter sensitivity in the assessment of climate change. To achieve this purpose, we 1) develop a facies-based groundwater flow and heat transfer model that incorporates facies geometry and heterogeneity characterized from a field data set, 2) derive multiple reaction networks/pathways from batch experiments with in-situ samples and integrate temperature-dependent reactive transport modules into the flow model, 3) assign multiple climate change scenarios to the coupled model by analyzing historical river stage data, and 4) apply a variance-based global sensitivity analysis to quantify scenario, module, and parameter uncertainty at each level of the hierarchy. The objectives of the research include: 1) identifying the key control factors of the coupled thermo-hydro-biogeochemical model in the assessment of climate change, and 2) quantifying the carbon consumption in different climate change scenarios in the hyporheic zone.
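
    A minimal sketch of a variance-based (Sobol-type) first-order sensitivity estimator of the kind referred to in step 4, using the Saltelli pick-and-freeze scheme on a vectorized toy model rather than the coupled thermo-hydro-biogeochemical model:

    ```python
    import numpy as np

    def first_order_sobol(model, bounds, n=20_000, rng=None):
        """First-order Sobol indices via the Saltelli pick-and-freeze estimator.

        model: vectorized function mapping an (n, d) array to (n,) outputs
        bounds: list of (low, high) tuples for each of the d uniform inputs
        """
        rng = rng or np.random.default_rng(1)
        d = len(bounds)
        lo, hi = np.array(bounds).T
        A = rng.uniform(lo, hi, size=(n, d))
        B = rng.uniform(lo, hi, size=(n, d))
        yA, yB = model(A), model(B)
        var_y = np.concatenate([yA, yB]).var()
        S1 = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                    # freeze all inputs except the i-th
            S1[i] = np.mean(yB * (model(ABi) - yA)) / var_y
        return S1

    # sanity check on Y = 2*x1 + x2 with x ~ U(0, 1): expected S1 ~ [0.8, 0.2]
    print(first_order_sobol(lambda x: 2 * x[:, 0] + x[:, 1], [(0, 1), (0, 1)]))
    ```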

  11. Quantifying climate feedbacks in polar regions.

    PubMed

    Goosse, Hugues; Kay, Jennifer E; Armour, Kyle C; Bodas-Salcedo, Alejandro; Chepfer, Helene; Docquier, David; Jonko, Alexandra; Kushner, Paul J; Lecomte, Olivier; Massonnet, François; Park, Hyo-Seok; Pithan, Felix; Svensson, Gunilla; Vancoppenolle, Martin

    2018-05-15

    The concept of feedback is key in assessing whether a perturbation to a system is amplified or damped by mechanisms internal to the system. In polar regions, climate dynamics are controlled by both radiative and non-radiative interactions between the atmosphere, ocean, sea ice, ice sheets and land surfaces. Precisely quantifying polar feedbacks is required for a process-oriented evaluation of climate models, a clear understanding of the processes responsible for polar climate changes, and a reduction in uncertainty associated with model projections. This quantification can be performed using a simple and consistent approach that is valid for a wide range of feedbacks, offering the opportunity for more systematic feedback analyses and a better understanding of polar climate changes.
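
    In practice a feedback is often summarized by a feedback parameter, the regression slope of the radiative perturbation against the temperature change; a minimal sketch with synthetic numbers:

    ```python
    import numpy as np

    def feedback_parameter(delta_T, delta_R):
        """Net feedback parameter (W m^-2 K^-1): regression slope of the change in
        top-of-atmosphere radiative flux against the surface temperature change."""
        slope, _intercept = np.polyfit(delta_T, delta_R, 1)
        return slope

    # toy example: a -1.2 W m^-2 K^-1 response plus noise
    dT = np.linspace(0.0, 4.0, 40)
    dR = -1.2 * dT + np.random.default_rng(3).normal(0, 0.3, dT.size)
    print(feedback_parameter(dT, dR))
    ```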

  12. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated
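
    A minimal sketch in the spirit of the pure Monte Carlo (PMC) approach mentioned above: draw mixing fractions uniformly over the simplex, perturb source compositions within their stated uncertainties, and keep draws that reproduce the measured sample. The tolerance, source values, and tracer values are placeholders, not the study's data.

    ```python
    import numpy as np

    def sample_mixing_fractions(sample, sources, source_sd, n=200_000, tol=0.5, rng=None):
        """Rejection-style Monte Carlo for a two-tracer mixing model.

        sample: (2,) measured tracer values (e.g., d15N and d18O of nitrate)
        sources: (k, 2) mean tracer composition of each candidate source
        source_sd: (k, 2) 1-sigma uncertainty of each source composition
        Returns accepted fraction vectors (rows sum to 1)."""
        rng = rng or np.random.default_rng(0)
        k = sources.shape[0]
        fracs = rng.dirichlet(np.ones(k), size=n)             # uniform over the simplex
        src = rng.normal(sources, source_sd, size=(n, k, 2))  # perturbed source values
        mixed = np.einsum("nk,nkt->nt", fracs, src)           # predicted sample composition
        keep = np.linalg.norm(mixed - np.asarray(sample), axis=1) < tol
        return fracs[keep]

    # hypothetical three-source example
    accepted = sample_mixing_fractions(
        sample=[8.0, 2.0],
        sources=np.array([[2.0, -4.0], [10.0, 1.0], [20.0, 15.0]]),
        source_sd=np.full((3, 2), 1.0))
    print(accepted.mean(axis=0), len(accepted))
    ```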

  13. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    NASA Astrophysics Data System (ADS)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
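
    The geometric convergence mentioned above can be summarized by an observed order of convergence computed from errors at successive resolutions; a minimal sketch using the quoted T21 and T42 overestimates as the error measure (a factor-of-two refinement in grid spacing):

    ```python
    import numpy as np

    def convergence_order(err_coarse, err_fine, refinement):
        """Apparent order of convergence from errors at two resolutions."""
        return np.log(err_coarse / err_fine) / np.log(refinement)

    # T21 -> T42 halves the grid spacing; using the quoted 27% and 13%
    # overestimates of near-source ozone production as the error measure:
    print(convergence_order(27.0, 13.0, 2.0))   # ~1.05, i.e. roughly first order
    ```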

  14. Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach

    NASA Astrophysics Data System (ADS)

    Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.

    2016-11-01

    Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has

  15. Investigation of Fundamental Modeling and Thermal Performance Issues for a Metallic Thermal Protection System Design

    NASA Technical Reports Server (NTRS)

    Blosser, Max L.

    2002-01-01

    A study was performed to develop an understanding of the key factors that govern the performance of metallic thermal protection systems for reusable launch vehicles. A current advanced metallic thermal protection system (TPS) concept was systematically analyzed to discover the most important factors governing the thermal performance of metallic TPS. A large number of relevant factors that influence the thermal analysis and thermal performance of metallic TPS were identified and quantified. Detailed finite element models were developed for predicting the thermal performance of design variations of the advanced metallic TPS concept mounted on a simple, unstiffened structure. The computational models were also used, in an automated iterative procedure, for sizing the metallic TPS to maintain the structure below a specified temperature limit. A statistical sensitivity analysis method, based on orthogonal matrix techniques used in robust design, was used to quantify and rank the relative importance of the various modeling and design factors considered in this study. Results of the study indicate that radiation, even in small gaps between panels, can significantly reduce the thermal performance of metallic TPS, so that gaps should be eliminated by design if possible. Thermal performance was also shown to be sensitive to several analytical assumptions that should be chosen carefully. One of the factors that was found to have the greatest effect on thermal performance is the heat capacity of the underlying structure. Therefore, the structure and TPS should be designed concurrently.

  16. Neural basis for generalized quantifier comprehension.

    PubMed

    McMillan, Corey T; Clark, Robin; Moore, Peachie; Devita, Christian; Grossman, Murray

    2005-01-01

    Generalized quantifiers like "all cars" are semantically well understood, yet we know little about their neural representation. Our model of quantifier processing includes a numerosity device, operations that combine number elements and working memory. Semantic theory posits two types of quantifiers: first-order quantifiers identify a number state (e.g. "at least 3") and higher-order quantifiers additionally require maintaining a number state actively in working memory for comparison with another state (e.g. "less than half"). We used BOLD fMRI to test the hypothesis that all quantifiers recruit inferior parietal cortex associated with numerosity, while only higher-order quantifiers recruit prefrontal cortex associated with executive resources like working memory. Our findings showed that first-order and higher-order quantifiers both recruit right inferior parietal cortex, suggesting that a numerosity component contributes to quantifier comprehension. Moreover, only probes of higher-order quantifiers recruited right dorsolateral prefrontal cortex, suggesting involvement of executive resources like working memory. We also observed activation of thalamus and anterior cingulate that may be associated with selective attention. Our findings are consistent with a large-scale neural network centered in frontal and parietal cortex that supports comprehension of generalized quantifiers.

  17. Strategies to Move From Conceptual Models to Quantifying Resilience in FEW Systems

    NASA Astrophysics Data System (ADS)

    Padowski, J.; Adam, J. C.; Boll, J.; Barber, M. E.; Cosens, B.; Goldsby, M.; Fortenbery, R.; Fowler, A.; Givens, J.; Guzman, C. D.; Hampton, S. E.; Harrison, J.; Huang, M.; Katz, S. L.; Kraucunas, I.; Kruger, C. E.; Liu, M.; Luri, M.; Malek, K.; Mills, A.; McLarty, D.; Pickering, N. B.; Rajagopalan, K.; Stockle, C.; Richey, A.; Voisin, N.; Witinok-Huber, B.; Yoder, J.; Yorgey, G.; Zhao, M.

    2017-12-01

    Understanding interdependencies within Food-Energy-Water (FEW) systems is critical to maintain FEW security. This project examines how coordinated management of physical (e.g., reservoirs, aquifers, and batteries) and non-physical (e.g., water markets, social capital, and insurance markets) storage systems across the three sectors promotes resilience. Coordination increases effective storage within the overall system and enhances buffering against shocks at multiple scales. System-wide resilience can be increased with innovations in technology (e.g., smart systems and energy storage) and institutions (e.g., economic systems and water law). Using the Columbia River Basin as our geographical study region, we use an integrated approach that includes a continuum of science disciplines, moving from theory to practice. In order to understand FEW linkages, we started with detailed, connected conceptual models of the food, energy, water, and social systems to identify where key interdependencies (i.e., overlaps, stocks, and flows) exist within and between systems. These are used to identify stress and opportunity points, develop innovation solutions across FEW sectors, remove barriers to the adoption of solutions, and quantify increases in system-wide resilience to regional and global change. The conceptual models act as a foundation from which we can identify key drivers, parameters, time steps, and variables of importance to build and improve existing systems dynamic and biophysical models. Our process of developing conceptual models and moving to integrated modeling is critical and serves as a foundation for coupling quantitative components with economic and social domain components and analyses of how these interact through time and space. This poster provides a description of this process that pulls together conceptual maps and integrated modeling output to quantify resilience across all three of the FEW sectors (a.k.a. "The Resilience Calculator"). Companion posters

  18. Evaluation of atmospheric nitrogen deposition model performance in the context of U.S. critical load assessments

    NASA Astrophysics Data System (ADS)

    Williams, Jason J.; Chung, Serena H.; Johansen, Anne M.; Lamb, Brian K.; Vaughan, Joseph K.; Beutel, Marc

    2017-02-01

    Air quality models are widely used to estimate pollutant deposition rates and thereby calculate critical loads and critical load exceedances (model deposition > critical load). However, model operational performance is not always quantified specifically to inform these applications. We developed a performance assessment approach designed to inform critical load and exceedance calculations, and applied it to the Pacific Northwest region of the U.S. We quantified the wet inorganic N deposition performance of several widely used air quality models, including five different Community Multiscale Air Quality Model (CMAQ) simulations, the Tdep model, and the 'PRISM x NTN' model. Modeled wet inorganic N deposition estimates were compared to wet inorganic N deposition measurements at 16 National Trends Network (NTN) monitoring sites, and to annual bulk inorganic N deposition measurements at Mount Rainier National Park. Model bias (model - observed) and error (|model - observed|) were expressed as a percentage of regional critical load values for diatoms and lichens. This novel approach demonstrated that wet inorganic N deposition bias in the Pacific Northwest approached or exceeded 100% of regional diatom and lichen critical load values at several individual monitoring sites, and approached or exceeded 50% of critical loads when averaged regionally. Even models that adjusted deposition estimates based on deposition measurements to reduce bias, or that spatially interpolated measurement data, had bias that approached or exceeded critical loads at some locations. While wet inorganic N deposition model bias is only one source of uncertainty that can affect critical load and exceedance calculations, the results demonstrate that expressing bias as a percentage of critical loads, at a spatial scale consistent with the calculations, may be a useful exercise for those performing them. It may help decide if model performance is adequate for a particular calculation, help assess confidence in
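
    The record's key diagnostic is deposition bias expressed as a fraction of a critical load. A minimal worked example, with purely hypothetical deposition and critical-load values (not the study's data):

```python
# Illustrative only: express model bias and error as a percentage of a critical load.
model_wet_N = 2.4     # kg N/ha/yr, hypothetical modeled wet inorganic N deposition
observed_wet_N = 1.6  # kg N/ha/yr, hypothetical NTN measurement at the same site
critical_load = 1.5   # kg N/ha/yr, hypothetical regional diatom critical load

bias = model_wet_N - observed_wet_N   # signed bias (model - observed)
error = abs(bias)                     # absolute error (|model - observed|)
print(f"bias  = {100 * bias / critical_load:.0f}% of the critical load")
print(f"error = {100 * error / critical_load:.0f}% of the critical load")
```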

  19. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    PubMed Central

    Austin, Peter C.; van Klaveren, David; Vergouwe, Yvonne; Nieboer, Daan; Lee, Douglas S.; Steyerberg, Ewout W.

    2017-01-01

    Objective Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting We illustrated different analytic methods for validation using a sample of 14,857 patients hospitalized with heart failure at 90 hospitals in two distinct time periods. Bootstrap resampling was used to assess internal validity. Meta-analytic methods were used to assess geographic transportability. Each hospital was used once as a validation sample, with the remaining hospitals used for model derivation. Hospital-specific estimates of discrimination (c-statistic) and calibration (calibration intercepts and slopes) were pooled using random effects meta-analysis methods. I2 statistics and prediction interval width quantified geographic transportability. Temporal transportability was assessed using patients from the earlier period for model derivation and patients from the later period for model validation. Results Estimates of reproducibility, pooled hospital-specific performance, and temporal transportability were on average very similar, with c-statistics of 0.75. Between-hospital variation was moderate according to I2 statistics and prediction intervals for c-statistics. Conclusion This study illustrates how performance of prediction models can be assessed in settings with multicenter data at different time periods. PMID:27262237
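
    The record pools hospital-specific c-statistics with random-effects meta-analysis and summarizes heterogeneity with I2 statistics and prediction intervals. A minimal DerSimonian-Laird sketch, assuming each hospital supplies a c-statistic and its standard error (the data below are hypothetical and the study's exact estimation details are not reproduced):

```python
import numpy as np

def pool_c_statistics(c_stats, std_errors):
    """Random-effects (DerSimonian-Laird) pooling of per-hospital c-statistics."""
    theta = np.asarray(c_stats, float)
    v = np.asarray(std_errors, float) ** 2                # within-hospital variances
    w = 1.0 / v                                           # fixed-effect weights
    k = len(theta)
    theta_fe = np.sum(w * theta) / np.sum(w)
    Q = np.sum(w * (theta - theta_fe) ** 2)               # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)                    # between-hospital variance
    w_re = 1.0 / (v + tau2)
    theta_re = np.sum(w_re * theta) / np.sum(w_re)        # pooled c-statistic
    se_re = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    half = 1.96 * np.sqrt(tau2 + se_re ** 2)              # normal approx.; a t(k-2) quantile is more usual
    return theta_re, i2, (theta_re - half, theta_re + half)

# hypothetical hospital-specific c-statistics and standard errors
print(pool_c_statistics([0.73, 0.76, 0.78, 0.71, 0.77],
                        [0.020, 0.030, 0.025, 0.030, 0.020]))
```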

  20. Modeling Tool to Quantify Metal Sources in Stormwater Discharges at Naval Facilities (NESDI Project 455)

    DTIC Science & Technology

    2014-06-01

    TECHNICAL REPORT 2077, June 2014. Modeling Tool to Quantify Metal Sources in Stormwater Discharges at Naval Facilities (NESDI Project 455): Final Report and Guidance. C. Katz, K. Sorensen, E. Arias (SSC Pacific); R. Pitt, L. Talebi. ... demonstration/validation project to assess the use of the urban stormwater model Windows Source Loading and Management Model (WinSLAMM) to characterize

  1. Quantifying climate feedbacks in polar regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goosse, Hugues; Kay, Jennifer E.; Armour, Kyle C.

    The concept of feedback is key in assessing whether a perturbation to a system is amplified or damped by mechanisms internal to the system. In polar regions, climate dynamics are controlled by both radiative and non-radiative interactions between the atmosphere, ocean, sea ice, ice sheets and land surfaces. Precisely quantifying polar feedbacks is required for a process-oriented evaluation of climate models, a clear understanding of the processes responsible for polar climate changes, and a reduction in uncertainty associated with model projections. This quantification can be performed using a simple and consistent approach that is valid for a wide range of feedbacks, thus offering the opportunity for more systematic feedback analyses and a better understanding of polar climate changes.

  2. Quantifying climate feedbacks in polar regions

    DOE PAGES

    Goosse, Hugues; Kay, Jennifer E.; Armour, Kyle C.; ...

    2018-05-15

    The concept of feedback is key in assessing whether a perturbation to a system is amplified or damped by mechanisms internal to the system. In polar regions, climate dynamics are controlled by both radiative and non-radiative interactions between the atmosphere, ocean, sea ice, ice sheets and land surfaces. Precisely quantifying polar feedbacks is required for a process-oriented evaluation of climate models, a clear understanding of the processes responsible for polar climate changes, and a reduction in uncertainty associated with model projections. This quantification can be performed using a simple and consistent approach that is valid for a wide range of feedbacks, thus offering the opportunity for more systematic feedback analyses and a better understanding of polar climate changes.

  3. Study on Fault Diagnostics of a Turboprop Engine Using Inverse Performance Model and Artificial Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Kong, Changduk; Lim, Semyeong

    2011-12-01

    Recently, health monitoring systems for the major gas path components of gas turbines have mostly used model-based methods such as Gas Path Analysis (GPA). This method finds changes in component performance characteristic parameters, such as isentropic efficiency and mass flow parameter, by comparing measured engine performance parameters (temperatures, pressures, rotational speeds, fuel consumption, etc.) with the fault-free engine performance parameters calculated by a base engine performance model. Currently, expert engine diagnostic systems using artificial intelligence methods such as Neural Networks (NNs), Fuzzy Logic, and Genetic Algorithms (GAs) are being studied to improve on the model-based method. Among them, NNs are most often used in engine fault diagnostic systems because of their good learning performance, but they suffer from low accuracy and long training times when the learning database is large. In addition, they require a very complex structure to detect single-type or multiple-type faults of gas path components effectively. This work inversely builds a base performance model of a turboprop engine, intended for a high-altitude UAV, from measured performance data, and proposes a fault diagnostic system that combines the base engine performance model with artificial intelligence methods, namely Fuzzy Logic and Neural Networks. The proposed diagnostic system first isolates the faulted components using Fuzzy Logic, then quantifies the faults of the identified components using an NN trained on a fault learning database generated from the developed base performance model. The NN is trained with the Feed Forward Back Propagation (FFBP) method. Finally, several test examples verify that component faults implanted arbitrarily in the engine are well isolated and quantified by the proposed diagnostic system.

  4. Quantifying the Hydrodynamic Performance of an Explosively-Driven Two-Shock Source

    NASA Astrophysics Data System (ADS)

    Furlanetto, Michael; Bauer, Amy; King, Robert; Buttler, William; Olson, Russell; Hagelberg, Carl

    2015-06-01

    An explosively-driven experimental package capable of generating a tunable two-shock drive would enable a host of experiments in shock physics. To make the best use of such a platform, though, its symmetry, reproducibility, and performance must be characterized thoroughly. We report on a series of experiments on a particular two-shock design that used shock reverberation between the sample and a heavy anvil to produce a second shock. Drive package diameters were varied between 50 and 76 mm in order to investigate release wave propagation. We used proton radiography to characterize the detonation and reverberation fronts within the high explosive elements of the packages, as well as surface velocimetry to measure the resulting shock structure in the sample under study. By fielding more than twenty channels of velocimetry per shot, we were able to quantify the symmetry and reproducibility of the drive.

  5. Quantifying arm nonuse in individuals poststroke.

    PubMed

    Han, Cheol E; Kim, Sujin; Chen, Shuya; Lai, Yi-Hsuan; Lee, Jeong-Yoon; Osu, Rieko; Winstein, Carolee J; Schweighofer, Nicolas

    2013-06-01

    Arm nonuse, defined as the difference between what the individual can do when constrained to use the paretic arm and what the individual does when given a free choice to use either arm, has not yet been quantified in individuals poststroke. (1) To quantify nonuse poststroke and (2) to develop and test a novel, simple, objective, reliable, and valid instrument, the Bilateral Arm Reaching Test (BART), to quantify arm use and nonuse poststroke. First, we quantify nonuse with the Quality of Movement (QOM) subscale of the Actual Amount of Use Test (AAUT) by subtracting the AAUT QOM score in the spontaneous use condition from the AAUT QOM score in a subsequent constrained use condition. Second, we quantify arm use and nonuse with BART by comparing reaching performance to visual targets projected over a 2D horizontal hemi-work space in a spontaneous-use condition (in which participants are free to use either arm at each trial) with reaching performance in a constrained-use condition. All participants (N = 24) with chronic stroke and with mild to moderate impairment exhibited nonuse with the AAUT QOM. Nonuse with BART had excellent test-retest reliability and good external validity. BART is the first instrument that can be used repeatedly and practically in the clinic to quantify the effects of neurorehabilitation on arm use and nonuse and in the laboratory for advancing theoretical knowledge about the recovery of arm use and the development of nonuse and "learned nonuse" after stroke.

  6. Performance analysis of successive over relaxation method for solving glioma growth model

    NASA Astrophysics Data System (ADS)

    Hussain, Abida; Faye, Ibrahima; Muthuvalu, Mohana Sundaram

    2016-11-01

    Brain tumors are among the most prevalent fatal cancers in the world. In light of present knowledge of the properties of gliomas, scientists have developed mathematical models to quantify the proliferation and invasion dynamics of glioma. In this study, a one-dimensional glioma growth model is considered, and the finite difference method is used to discretize the problem. Two stationary iterative methods, namely Gauss-Seidel (GS) and Successive Over-Relaxation (SOR), are then used to solve the resulting algebraic system. The performance of the methods is evaluated in terms of number of iterations and computational time. On the basis of this performance analysis, the SOR method is shown to be superior to the GS method.
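
    The record compares Gauss-Seidel and SOR on the algebraic system produced by a finite-difference discretization. A minimal sketch on a generic tridiagonal, implicit-diffusion-type system (stated assumption: this is not the study's glioma discretization); omega = 1 recovers Gauss-Seidel, while a well-chosen omega > 1 cuts the iteration count substantially.

```python
import numpy as np

def sor_tridiagonal(a, b, c, rhs, omega=1.0, tol=1e-8, max_iter=10_000):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b, super-diagonal c) with SOR."""
    n = len(rhs)
    x = np.zeros(n)
    for it in range(1, max_iter + 1):
        max_delta = 0.0
        for i in range(n):
            sigma = rhs[i]
            if i > 0:
                sigma -= a[i] * x[i - 1]          # already-updated neighbour (Gauss-Seidel sweep)
            if i < n - 1:
                sigma -= c[i] * x[i + 1]          # not-yet-updated neighbour
            x_new = (1 - omega) * x[i] + omega * sigma / b[i]
            max_delta = max(max_delta, abs(x_new - x[i]))
            x[i] = x_new
        if max_delta < tol:
            return x, it
    return x, max_iter

# hypothetical implicit diffusion step: (I - r*Laplacian) u_new = u_old
n, r = 100, 50.0
a = np.full(n, -r); c = np.full(n, -r); b = np.full(n, 1.0 + 2.0 * r)
u_old = np.exp(-np.linspace(-3, 3, n) ** 2)              # initial cell-density profile
_, it_gs = sor_tridiagonal(a, b, c, u_old, omega=1.0)    # Gauss-Seidel
_, it_sor = sor_tridiagonal(a, b, c, u_old, omega=1.75)  # empirically chosen over-relaxation
print(it_gs, it_sor)   # SOR reaches the tolerance in far fewer sweeps
```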

  7. Quantifying circular RNA expression from RNA-seq data using model-based framework.

    PubMed

    Li, Musheng; Xie, Xueying; Zhou, Jing; Sheng, Mengying; Yin, Xiaofeng; Ko, Eun-A; Zhou, Tong; Gu, Wanjun

    2017-07-15

    Circular RNAs (circRNAs) are a class of non-coding RNAs that are widely expressed in various cell lines and tissues of many organisms. Although the exact function of many circRNAs is largely unknown, their cell type- and tissue-specific expression implicates crucial functions in many biological processes. Hence, the quantification of circRNA expression from high-throughput RNA-seq data is becoming important. Although many model-based methods have been developed to quantify linear RNA expression from RNA-seq data, these methods are not applicable to circRNA quantification. Here, we propose a novel strategy that transforms circular transcripts to pseudo-linear transcripts and estimates the expression values of both circular and linear transcripts using an existing model-based algorithm, Sailfish. The new strategy can accurately estimate transcript expression of both linear and circular transcripts from RNA-seq data. Several factors, such as gene length, amount of expression and the ratio of circular to linear transcripts, had impacts on the quantification performance for circular transcripts. In comparison to count-based tools, the new computational framework had superior performance in estimating the amount of circRNA expression from both simulated and real ribosomal RNA-depleted (rRNA-depleted) RNA-seq datasets. On the other hand, considering circular transcripts in expression quantification from rRNA-depleted RNA-seq data substantially increased the accuracy of linear transcript expression estimates. Our proposed strategy was implemented in a program named Sailfish-cir. Sailfish-cir is freely available at https://github.com/zerodel/Sailfish-cir . tongz@medicine.nevada.edu or wanjun.gu@gmail.com. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. The likelihood of achieving quantified road safety targets: a binary logistic regression model for possible factors.

    PubMed

    Sze, N N; Wong, S C; Lee, C Y

    2014-12-01

    In the past several decades, many countries have set quantified road safety targets to motivate transport authorities to develop systematic road safety strategies and measures and to facilitate continuous road safety improvement. Studies have evaluated the association between the setting of quantified road safety targets and road fatality reduction, in both the short and long run, by comparing road fatalities before and after the implementation of a quantified road safety target. However, little work has been done to evaluate whether quantified road safety targets are actually achieved. In this study, we used a binary logistic regression model to examine the factors - including vehicle ownership, fatality rate, and national income, in addition to the level of ambition and duration of the target - that contribute to a target's success. We analyzed 55 quantified road safety targets set by 29 countries from 1981 to 2009, and the results indicate that targets that were still in progress and targets with lower levels of ambition had a higher likelihood of eventually being achieved. Moreover, possible interaction effects on the association between level of ambition and the likelihood of success are also revealed. Copyright © 2014 Elsevier Ltd. All rights reserved.
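
    A minimal sketch of the kind of binary logistic regression the record describes; the predictor set and all data below are hypothetical placeholders, not the 55-target dataset analyzed in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# hypothetical rows, one per quantified road safety target:
# [level of ambition (% fatality reduction targeted), duration (years),
#  vehicle ownership (per 1000 people), baseline fatality rate (per 100,000)]
X = np.array([
    [50, 10, 550,  7.2],
    [30,  5, 480,  9.1],
    [40, 10, 600,  5.4],
    [25,  8, 300, 13.0],
    [60, 12, 620,  4.8],
    [35,  6, 410, 10.5],
    [45,  9, 510,  6.9],
    [20,  5, 350, 11.8],
])
y = np.array([0, 1, 1, 1, 0, 1, 0, 1])      # 1 = target achieved, 0 = not achieved

clf = LogisticRegression(max_iter=5000).fit(X, y)   # (L2-regularized) maximum likelihood
print(clf.intercept_, clf.coef_)                    # log-odds coefficients per factor
print(clf.predict_proba(X)[:, 1])                   # fitted probability of achievement
```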

  9. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    NASA Astrophysics Data System (ADS)

    Anderson, T.

    2016-02-01

    Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. The quantification and further understanding of drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely accrediting all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.

  10. Increasing the Reliability of Circulation Model Validation: Quantifying Drifter Slip to See how Currents are Actually Moving

    NASA Astrophysics Data System (ADS)

    Anderson, T.

    2015-12-01

    Ocean circulation forecasts can help answer questions regarding larval dispersal, passive movement of injured sea animals, oil spill mitigation, and search and rescue efforts. Circulation forecasts are often validated with GPS-tracked drifter paths, but how accurately do these drifters actually move with ocean currents? Drifters are not only moved by water, but are also forced by wind and waves acting on the exposed buoy and transmitter; this imperfect movement is referred to as drifter slip. The quantification and further understanding of drifter slip will allow scientists to differentiate between drifter imperfections and actual computer model error when comparing trajectory forecasts with actual drifter tracks. This will avoid falsely accrediting all discrepancies between a trajectory forecast and an actual drifter track to computer model error. During multiple deployments of drifters in Nantucket Sound and using observed wind and wave data, we attempt to quantify the slip of drifters developed by the Northeast Fisheries Science Center's (NEFSC) Student Drifters Program. While similar studies have been conducted previously, very few have directly attached current meters to drifters to quantify drifter slip. Furthermore, none have quantified slip of NEFSC drifters relative to the oceanographic-standard "CODE" drifter. The NEFSC drifter archive has over 1000 drifter tracks primarily off the New England coast. With a better understanding of NEFSC drifter slip, modelers can reliably use these tracks for model validation.

  11. Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution

    NASA Astrophysics Data System (ADS)

    Wild, O.; Prather, M. J.

    2005-12-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.

  12. Deaf Learners' Knowledge of English Universal Quantifiers

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Porter, Jeffrey E.; Fonzi, Judith

    2008-01-01

    Deaf and hearing students' knowledge of English sentences containing universal quantifiers was compared through their performance on a 50-item, multiple-picture task that required students to decide whether each of five pictures represented a possible meaning of a target sentence. The task assessed fundamental knowledge of quantifier sentences,…

  13. Constructing Surrogate Models of Complex Systems with Enhanced Sparsity: Quantifying the Influence of Conformational Uncertainty in Biomolecular Solvation

    DOE PAGES

    Lei, Huan; Yang, Xiu; Zheng, Bin; ...

    2015-11-05

    Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational “active space” random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
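
    The record builds a sparse gPC surrogate with compressive sensing. The sketch below is an illustrative stand-in: probabilists' Hermite features of assumed standard-normal inputs fit with an l1-penalized (LASSO) regression rather than the paper's compressive-sensing solver, and the target function, dimension, and order are invented for the example.

```python
from itertools import product

import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, order, n_samples = 4, 3, 80                  # input dimension, total gPC order, samples
xi = rng.standard_normal((n_samples, d))        # "active space" variables, assumed Gaussian

def target_property(x):
    # hypothetical quantity of interest (stand-in for, e.g., solvent-accessible surface area)
    return 1.0 + 0.8 * x[:, 0] - 0.5 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2]

def gpc_basis(x, order):
    """Total-degree-truncated tensor product of probabilists' Hermite polynomials."""
    V = [hermevander(x[:, j], order) for j in range(x.shape[1])]   # (n, order+1) per dimension
    cols = []
    for multi in product(range(order + 1), repeat=x.shape[1]):
        if sum(multi) <= order:
            col = np.ones(x.shape[0])
            for j, m in enumerate(multi):
                col = col * V[j][:, m]
            cols.append(col)
    return np.column_stack(cols)

A = gpc_basis(xi, order)
y = target_property(xi)
coef = Lasso(alpha=1e-3, max_iter=100_000).fit(A, y).coef_   # sparse gPC coefficients
print(np.count_nonzero(np.abs(coef) > 1e-6), "of", A.shape[1], "basis terms retained")
```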

  14. Quantifying Parkinson's disease progression by simulating gait patterns

    NASA Astrophysics Data System (ADS)

    Cárdenas, Luisa; Martínez, Fabio; Atehortúa, Angélica; Romero, Eduardo

    2015-12-01

    Modern rehabilitation protocols for most neurodegenerative diseases, in particular Parkinson's disease, rely on a clinical analysis of gait patterns. Currently, such analysis is highly dependent on both the examiner's expertise and the type of evaluation. Development of evaluation methods with objective measures is therefore crucial. Physical models arise as a powerful alternative to quantify movement patterns and to emulate disease progression and the performance of specific treatments. This work introduces a novel quantification of Parkinson's disease progression using a physical model that accurately represents the main gait biomarker, the body Center of Gravity (CoG). The model tracks the whole gait cycle by a coupled double inverted pendulum that emulates the leg swing during the single-support phase and by a damper-spring system (SDP) that recreates both legs in contact with the ground during the double-support phase. The patterns generated by the proposed model are compared with actual ones learned from 24 subjects in stages 2, 3, and 4. The evaluation demonstrates better performance of the proposed model compared with a baseline model (SP) composed of a coupled double pendulum and a mass-spring system. The Fréchet distance measured differences between model estimations and real trajectories, giving distances of 0.137, 0.155, and 0.38 for the baseline and 0.07, 0.09, and 0.29 for the proposed method for stages 2, 3, and 4, respectively.
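
    The record scores agreement between simulated and measured CoG trajectories with the Fréchet distance. A minimal sketch of the standard discrete Fréchet distance (dynamic-programming form of Eiter and Mannila), applied to two hypothetical trajectories:

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between two polygonal curves given as point arrays."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n, m = len(P), len(Q)
    d = lambda i, j: np.linalg.norm(P[i] - Q[j])
    ca = np.empty((n, m))
    ca[0, 0] = d(0, 0)
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d(i, 0))
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d(0, j))
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d(i, j))
    return ca[-1, -1]

# hypothetical CoG trajectories over one gait cycle: measured vs. model estimate
t = np.linspace(0.0, 1.0, 100)
measured = np.column_stack([t, 0.020 * np.sin(4 * np.pi * t)])
modeled  = np.column_stack([t, 0.018 * np.sin(4 * np.pi * t + 0.1)])
print(discrete_frechet(measured, modeled))
```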

  15. Quantifying the predictive consequences of model error with linear subspace analysis

    USGS Publications Warehouse

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  16. Qalibra: a general model for food risk-benefit assessment that quantifies variability and uncertainty.

    PubMed

    Hart, Andy; Hoekstra, Jeljer; Owen, Helen; Kennedy, Marc; Zeilmaker, Marco J; de Jong, Nynke; Gunnlaugsdottir, Helga

    2013-04-01

    The EU project BRAFO proposed a framework for risk-benefit assessment of foods, or changes in diet, that present both potential risks and potential benefits to consumers (Hoekstra et al., 2012a). In higher tiers of the BRAFO framework, risks and benefits are integrated quantitatively to estimate net health impact measured in DALYs or QALYs (disability- or quality-adjusted life years). This paper describes a general model that was developed by a second EU project, Qalibra, to assist users in conducting these assessments. Its flexible design makes it applicable to a wide range of dietary questions involving different nutrients, contaminants and health effects. Account can be taken of variation between consumers in their diets and also other characteristics relevant to the estimation of risk and benefit, such as body weight, gender and age. Uncertainty in any input parameter may be quantified probabilistically, using probability distributions, or deterministically by repeating the assessment with alternative assumptions. Uncertainties that are not quantified should be evaluated qualitatively. Outputs produced by the model are illustrated using results from a simple assessment of fish consumption. More detailed case studies on oily fish and phytosterols are presented in companion papers. The model can be accessed as web-based software at www.qalibra.eu. Copyright © 2012. Published by Elsevier Ltd.

  17. Quantifying Faculty Productivity in Japan: Development and Application of the Achievement-Motivated Key Performance Indicator. Research & Occasional Paper Series: CSHE.8.16

    ERIC Educational Resources Information Center

    Aida, Misako; Watanabe, Satoshi P.

    2016-01-01

    Universities throughout the world are trending toward more performance based methods to capture their strengths, weaknesses and productivity. Hiroshima University has developed an integrated objective measure for quantifying multifaceted faculty activities, namely the "Achievement-Motivated Key Performance Indicator" (A-KPI), in order to…

  18. Quantifying the net social benefits of vehicle trip reductions : guidance for customizing the TRIMMS(c) model.

    DOT National Transportation Integrated Search

    2009-04-01

    This study details the development of a series of enhancements to the Trip Reduction Impacts of Mobility Management Strategies (TRIMMS) model. TRIMMS allows quantifying the net social benefits of a wide range of transportation demand management...

  19. New methods to quantify the cracking performance of cementitious systems made with internal curing

    NASA Astrophysics Data System (ADS)

    Schlitter, John L.

    The use of high performance concretes that utilize low water-cement ratios has been promoted for infrastructure based on their potential to increase durability and service life because they are stronger and less porous. Unfortunately, these benefits are not always realized due to the susceptibility of high performance concrete to early age cracking caused by shrinkage. This problem is widespread and affects the federal, state, and local budgets that must maintain or replace infrastructure deteriorated by cracking. As a result, methods to reduce or eliminate early age shrinkage cracking have been investigated. Internal curing is one such method, in which a prewetted lightweight sand is incorporated into the concrete mixture to provide internal water as the concrete cures. This action can significantly reduce or eliminate shrinkage and in some cases causes a beneficial early age expansion. Standard laboratory tests have been developed to quantify the shrinkage cracking potential of concrete. Unfortunately, many of these tests may not be appropriate for use with internally cured mixtures and only provide limited amounts of information. Most standard tests are not designed to capture the expansive behavior of internally cured mixtures. This thesis describes the design and implementation of two new testing devices that overcome the limitations of current standards. The first device discussed in this thesis is called the dual ring. The dual ring is a testing device that quantifies the early age restrained shrinkage performance of cementitious mixtures. The design of the dual ring is based on the current ASTM C 1581-04 standard test, which utilizes one steel ring to restrain a cementitious specimen. The dual ring overcomes two important limitations of the standard test. First, the standard single ring test cannot restrain the expansion that takes place at early ages, which is not representative of field conditions. The dual ring incorporates a second restraining ring

  20. Direct numerical simulations in solid mechanics for quantifying the macroscale effects of microstructure and material model-form error

    DOE PAGES

    Bishop, Joseph E.; Emery, John M.; Battaile, Corbett C.; ...

    2016-03-16

    Two fundamental approximations in macroscale solid-mechanics modeling are (1) the assumption of scale separation in homogenization theory and (2) the use of a macroscopic plasticity material model that represents, in a mean sense, the multitude of inelastic processes occurring at the microscale. With the goal of quantifying the errors induced by these approximations on engineering quantities of interest, we perform a set of direct numerical simulations (DNS) in which polycrystalline microstructures are embedded throughout a macroscale structure. The largest simulations model over 50,000 grains. The microstructure is idealized using a randomly close-packed Voronoi tessellation in which each polyhedral Voronoi cell represents a grain. A face-centered cubic crystal-plasticity model is used to model the mechanical response of each grain. The overall grain structure is equiaxed, and each grain is randomly oriented with no overall texture. The detailed results from the DNS simulations are compared to results obtained from conventional macroscale simulations that use homogeneous isotropic plasticity models. The macroscale plasticity models are calibrated using a representative volume element of the idealized microstructure. Furthermore, we envision that DNS modeling will be used to gain new insights into the mechanics of material deformation and failure.

  1. Simplified model of pinhole imaging for quantifying systematic errors in image shape

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benedetti, Laura Robin; Izumi, N.; Khan, S. F.

    In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.

  2. Simplified model of pinhole imaging for quantifying systematic errors in image shape

    DOE PAGES

    Benedetti, Laura Robin; Izumi, N.; Khan, S. F.; ...

    2017-10-30

    In this paper, we examine systematic errors in x-ray imaging by pinhole optics for quantifying uncertainties in the measurement of convergence and asymmetry in inertial confinement fusion implosions. We present a quantitative model for the total resolution of a pinhole optic with an imaging detector that more effectively describes the effect of diffraction than models that treat geometry and diffraction as independent. This model can be used to predict loss of shape detail due to imaging across the transition from geometric to diffractive optics. We find that fractional error in observable shapes is proportional to the total resolution element we present and inversely proportional to the length scale of the asymmetry being observed. Finally, we have experimentally validated our results by imaging a single object with differently sized pinholes and with different magnifications.

  3. Quantifying Seasonal Skill in Coupled Sea Ice Models using Freeboard Measurements from Spaceborne Laser Altimeters

    NASA Astrophysics Data System (ADS)

    Roberts, A.; Bench, K.; Maslowski, W.; Farrell, S. L.; Richter-Menge, J.

    2016-12-01

    We have developed a method to quantitatively assess the skill of predictive sea ice models using freeboard measurements from spaceborne laser altimeters. The method evaluates freeboard from the Regional Arctic System Model (RASM) against those derived from NASA ICESat and Operation IceBridge (OIB) missions along individual ground tracks, and assesses the variance- and correlation-weighted model skill. This allows quantifying the accuracy of sea ice volume simulations while taking measurement error into account. As part of this work, we inter-compare simulations with two different sea ice rheologies: one using Elastic-Viscous-Plastic (EVP), and the other using Elastic-Anisotropic-Plastic (EAP) ice mechanics. Both are simulated for 2004 and 2007, during which ICESat was in operation. RASM variance skill scores ranged from 0.712 to 0.824 and correlation skill scores were between 0.319 and 0.511, with EAP providing a better estimate of spatial ice volume variance, but with a larger bias in the central Arctic relative to EVP. The skill scores were calculated for monthly periods and require little adaptation to rate short-term operational forecasts of the Arctic. This work will help quantify model limitations and facilitate optimal use of ICESat-2 freeboard measurements after that satellite is launched next year.

  4. Nanoscale Experimental Characterization and 3D Mechanistic Modeling of Shale with Quantified Heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, K. C.; Borja, R. I.

    2014-12-01

    Shale is a fine-grained sedimentary rock consisting primarily of clay and silt, and is of particular interest with respect to hydrocarbon production as both a source and seal rock. The deformation and fracture properties of shale depend on the mechanical properties of its basic constituents, including solid clay particles, inclusions such as silt and organics, and multiscale porosity. This paper presents the results of a combined experimental/numerical investigation into the mechanical behavior of shale at the nanoscale. Large grids of nanoindentation tests, spanning length scales from 200 to 20,000 nanometers deep, were performed on a sample of Woodford shale in both the bedding plane normal (BPN) and bedding plane parallel (BPP) directions. The nanoindentations were performed in order to determine the mechanical properties of the constituent materials in situ as well as those of the highly heterogeneous composite material at this scale. Focused ion beam (FIB) milling and scanning electron microscopy (SEM) were used in conjunction (FIB-SEM) to obtain 2D and 3D images characterizing the heterogeneity of the shale at this scale. The constituent materials were found to be best described as near-micrometer-size clay and silt particles embedded in a mixed organic/clay matrix, with some larger (near 10 micrometers in diameter) pockets of organic material evident. Indented regions were identified through SEM, allowing the 200-1000 nanometer deep indentations to be classified according to the constituent materials that they engaged. We use nonlinear finite element modeling to capture results of low-load (on the order of milliNewtons) and high-load (on the order of a few Newtons) nanoindentation tests. Experimental results are used to develop a 3D mechanistic model that interprets the results of nanoindentation tests on specimens of Woodford shale with quantified heterogeneity.

  5. Leveraging 3D-HST Grism Redshifts to Quantify Photometric Redshift Performance

    NASA Astrophysics Data System (ADS)

    Bezanson, Rachel; Wake, David A.; Brammer, Gabriel B.; van Dokkum, Pieter G.; Franx, Marijn; Labbé, Ivo; Leja, Joel; Momcheva, Ivelina G.; Nelson, Erica J.; Quadri, Ryan F.; Skelton, Rosalind E.; Weiner, Benjamin J.; Whitaker, Katherine E.

    2016-05-01

    We present a study of photometric redshift accuracy in the 3D-HST photometric catalogs, using 3D-HST grism redshifts to quantify and dissect trends in redshift accuracy for galaxies brighter than JH_IR = 24 with an unprecedented and representative high-redshift galaxy sample. We find an average scatter of 0.0197 ± 0.0003(1 + z) in the Skelton et al. photometric redshifts. Photometric redshift accuracy decreases with magnitude and redshift, but does not vary monotonically with color or stellar mass. The 1σ scatter lies between 0.01 and 0.03 (1 + z) for galaxies of all masses and colors below z < 2.5 (for JH_IR < 24), with the exception of a population of very red (U - V > 2), dusty star-forming galaxies for which the scatter increases to ˜0.1 (1 + z). We find that photometric redshifts depend significantly on galaxy size; the largest galaxies at fixed magnitude have photo-zs with up to ˜30% more scatter and ˜5 times the outlier rate. Although the overall photometric redshift accuracy for quiescent galaxies is better than that for star-forming galaxies, scatter depends more strongly on magnitude and redshift than on galaxy type. We verify these trends using the redshift distributions of close pairs and extend the analysis to fainter objects, where photometric redshift errors further increase to ˜0.046 (1 + z) at H_F160W = 26. We demonstrate that photometric redshift accuracy is strongly filter dependent and quantify the contribution of multiple filter combinations. We evaluate the widths of redshift probability distribution functions and find that error estimates are underestimated by a factor of ˜1.1-1.6, but that uniformly broadening the distribution does not adequately account for fitting outliers. Finally, we suggest possible applications of these data in planning for current and future surveys and simulate photometric redshift performance in the Large Synoptic Survey Telescope, Dark Energy Survey (DES), and combined DES and Vista Hemisphere surveys.
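
    A minimal sketch of the standard diagnostics behind numbers like these: the scatter of Δz/(1+z) (here estimated with the normalized median absolute deviation, a common convention that may differ in detail from the paper's definition) and a catastrophic-outlier fraction, computed on a hypothetical sample.

```python
import numpy as np

def photoz_metrics(z_phot, z_ref, outlier_cut=0.1):
    """Scatter and outlier rate of photometric redshifts against grism/spec-z references."""
    dz = (np.asarray(z_phot) - np.asarray(z_ref)) / (1.0 + np.asarray(z_ref))
    sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))   # robust scatter in dz/(1+z)
    outlier_fraction = np.mean(np.abs(dz) > outlier_cut)        # catastrophic outliers
    return sigma_nmad, outlier_fraction

# hypothetical sample with 2% (1+z) Gaussian scatter
rng = np.random.default_rng(1)
z_ref = rng.uniform(0.5, 2.5, 5000)
z_phot = z_ref + 0.02 * (1.0 + z_ref) * rng.standard_normal(z_ref.size)
print(photoz_metrics(z_phot, z_ref))
```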

  6. Instruction-level performance modeling and characterization of multimedia applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Cameron, K.W.

    1999-06-01

    One of the challenges for characterizing and modeling realistic multimedia applications is the lack of access to source codes. On-chip performance counters effectively resolve this problem by monitoring run-time behaviors at the instruction level. This paper presents a novel technique for characterizing and modeling workloads at the instruction level for realistic multimedia applications using hardware performance counters. A variety of instruction counts are collected from multimedia applications such as RealPlayer, GSM Vocoder, MPEG encoder/decoder, and a speech synthesizer. These instruction counts can be used to form a set of abstract characteristic parameters directly related to a processor's architectural features. Based on microprocessor architectural constraints and these calculated abstract parameters, the architectural performance bottleneck for a specific application can be estimated. Meanwhile, the bottleneck estimation can provide suggestions about viable architectural/functional improvements for certain workloads. The biggest advantage of this new characterization technique is a better understanding of processor utilization efficiency and the architectural bottleneck for each application. This technique also provides predictive insight into future architectural enhancements and their effect on current codes. In this paper the authors also attempt to model architectural effects on processor utilization without memory influence. They derive formulas for calculating CPI_0 (CPI without memory effects) and quantify the utilization of architectural parameters. These equations are architecturally diagnostic and predictive in nature. Results provide promise in code characterization and empirical/analytical modeling.

  7. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...

  8. Performance of signal-to-noise ratio estimation for scanning electron microscope using autocorrelation Levinson-Durbin recursion model.

    PubMed

    Sim, K S; Lim, M S; Yeap, Z X

    2016-07-01

    A new technique to quantify the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images is proposed. This technique is known as the autocorrelation Levinson-Durbin recursion (ACLDR) model. To test the performance of this technique, the SEM image is corrupted with noise. The autocorrelation functions of the original image and the noisy image are formed. The signal spectrum based on the autocorrelation function of the image is formed. ACLDR is then used as an SNR estimator to quantify the signal spectrum of the noisy image. The SNR values of the original image and the quantified image are calculated. ACLDR is then compared with three existing techniques: nearest neighbourhood, first-order linear interpolation, and nearest neighbourhood combined with first-order linear interpolation. It is shown that the ACLDR model is able to achieve higher accuracy in SNR estimation. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
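
    The ACLDR formulation itself is not reproduced here; as a hedged illustration of the general idea, the sketch below implements a simple single-image baseline loosely corresponding to the first-order linear interpolation technique mentioned in the record: the noise-free zero-lag autocovariance is estimated by linear extrapolation from lags 1 and 2, and SNR is the ratio of that signal variance to the remaining (noise) variance.

```python
import numpy as np

def snr_linear_extrapolation(img):
    """Single-image SNR estimate from row autocovariances (linear-extrapolation baseline).

    The measured zero-lag autocovariance equals signal variance plus noise variance;
    the signal part is estimated by extrapolating the lag-1 and lag-2 values back to lag 0.
    """
    x = np.asarray(img, float)
    x = x - x.mean()
    r = np.empty(3)
    for lag in range(3):                         # row-wise autocovariance at lags 0, 1, 2
        r[lag] = (x[:, : x.shape[1] - lag] * x[:, lag:]).mean()
    signal_var = 2.0 * r[1] - r[2]               # linear extrapolation to lag 0
    noise_var = max(r[0] - signal_var, 1e-12)
    return signal_var / noise_var

# hypothetical smooth "SEM-like" image plus additive white Gaussian noise
rng = np.random.default_rng(2)
yy, xx = np.mgrid[0:256, 0:256]
clean = np.sin(xx / 10.0) + np.cos(yy / 15.0)
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
print(snr_linear_extrapolation(noisy))           # roughly var(clean) / 0.3**2
```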

  9. A mixing-model approach to quantifying sources of organic matter to salt marsh sediments

    NASA Astrophysics Data System (ADS)

    Bowles, K. M.; Meile, C. D.

    2010-12-01

    Salt marshes are highly productive ecosystems, where autochthonous production controls an intricate exchange of carbon and energy among organisms. The major sources of organic carbon to these systems include 1) autochthonous production by vascular plant matter, 2) import of allochthonous plant material, and 3) phytoplankton biomass. Quantifying the relative contribution of organic matter sources to a salt marsh is important for understanding the fate and transformation of organic carbon in these systems, which also impacts the timing and magnitude of carbon export to the coastal ocean. A common approach to quantify organic matter source contributions to mixtures is the use of linear mixing models. To estimate the relative contributions of endmember materials to total organic matter in the sediment, the problem is formulated as a constrained linear least-squares problem. However, the type of data that is utilized in such mixing models, the uncertainties in endmember compositions, and the temporal dynamics of non-conservative entities can have varying effects on the results. Making use of a comprehensive data set that encompasses several endmember characteristics - including a yearlong degradation experiment - we study the impact of these factors on estimates of the origin of sedimentary organic carbon in a salt marsh located in the SE United States. We first evaluate the sensitivity of linear mixing models to the type of data employed by analyzing a series of mixing models that utilize various combinations of parameters (i.e., endmember characteristics such as δ13COC, C/N ratios or lignin content). Next, we assess the importance of using more than the minimum number of parameters required to estimate endmember contributions to the total organic matter pool. Then, we quantify the impact of data uncertainty on the outcome of the analysis using Monte Carlo simulations and accounting for the uncertainty in endmember characteristics. Finally, as biogeochemical processes
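
    A minimal sketch of the constrained linear least-squares formulation the record describes: endmember characteristics form the columns of a matrix, source fractions are bounded to [0, 1], and a heavily weighted extra equation enforces that fractions sum to one. All endmember and sediment values below are hypothetical.

```python
import numpy as np
from scipy.optimize import lsq_linear

# hypothetical endmember characteristics (rows: delta13C, molar C/N)
# columns: vascular plant matter, allochthonous plant material, phytoplankton
E = np.array([
    [-13.0, -27.0, -21.0],   # delta13C (permil)
    [ 40.0,  25.0,   7.0],   # molar C/N
])
mix = np.array([-19.5, 18.0])        # hypothetical measured sediment values

w = 1e3                              # weight on the mass-balance (sum-to-one) constraint
A = np.vstack([E, w * np.ones((1, E.shape[1]))])
b = np.concatenate([mix, [w]])

res = lsq_linear(A, b, bounds=(0.0, 1.0))    # fractions constrained to [0, 1]
fractions = res.x / res.x.sum()              # tidy up any small departure from sum = 1
print(fractions)                             # estimated source contributions
```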

  10. Quantifying changes in water use and groundwater availability in a megacity using novel integrated systems modeling

    NASA Astrophysics Data System (ADS)

    Hyndman, D. W.; Xu, T.; Deines, J. M.; Cao, G.; Nagelkirk, R.; Viña, A.; McConnell, W.; Basso, B.; Kendall, A. D.; Li, S.; Luo, L.; Lupi, F.; Ma, D.; Winkler, J. A.; Yang, W.; Zheng, C.; Liu, J.

    2017-08-01

    Water sustainability in megacities is a growing challenge with far-reaching effects. Addressing sustainability requires an integrated, multidisciplinary approach able to capture interactions among hydrology, population growth, and socioeconomic factors and to reflect changes due to climate variability and land use. We developed a new systems modeling framework to quantify the influence of changes in land use, crop growth, and urbanization on groundwater storage for Beijing, China. This framework was then used to understand and quantify causes of observed decreases in groundwater storage from 1993 to 2006, revealing that the expansion of Beijing's urban areas at the expense of croplands has enhanced recharge while reducing water lost to evapotranspiration, partially ameliorating groundwater declines. The results demonstrate the efficacy of such a systems approach to quantify the impacts of changes in climate and land use on water sustainability for megacities, while providing a quantitative framework to improve mitigation and adaptation strategies that can help address future water challenges.

  11. Chimpanzees (Pan troglodytes) and bonobos (Pan paniscus) quantify split solid objects.

    PubMed

    Cacchione, Trix; Hrubesch, Christine; Call, Josep

    2013-01-01

    Recent research suggests that gorillas' and orangutans' object representations survive cohesion violations (e.g., a split of a solid object into two halves), but that their processing of quantities may be affected by them. We assessed chimpanzees' (Pan troglodytes) and bonobos' (Pan paniscus) reactions to various fission events in the same series of action tasks modelled after infant studies previously run on gorillas and orangutans (Cacchione and Call in Cognition 116:193-203, 2010b). Results showed that all four non-human great ape species managed to quantify split objects but that their performance varied as a function of the non-cohesiveness produced in the splitting event. Spatial ambiguity and shape invariance had the greatest impact on apes' ability to represent and quantify objects. Further, we observed species differences with gorillas performing lower than other species. Finally, we detected a substantial age effect, with ape infants below 6 years of age being outperformed by both juvenile/adolescent and adult apes.

  12. Energetic arousal and language: predictions from the computational theory of quantifiers processing.

    PubMed

    Zajenkowski, Marcin

    2013-10-01

    The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers. Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence-picture verification task. In the present study, 201 students were presented with a sentence-picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants' EA was measured before and after the verification task. It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship. This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA-cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate. The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.

  13. Improved methods for the measurement and modeling of PV module and system performance for all operating conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    King, D.L.

    1995-11-01

    The objective of this work was to develop improved performance models for modules and systems, valid for all operating conditions, for use in module specifications, system and BOS component design, and system rating or monitoring. The approach taken was to identify and quantify the influence of the dominant factors of solar irradiance, cell temperature, angle-of-incidence, and solar spectrum; to use outdoor test procedures to separate the effects of electrical, thermal, and optical performance; to use fundamental cell characteristics to improve the analysis; and to combine these factors in a simple model using common variables.
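
    As a rough illustration of combining the dominant factors in a simple model, the sketch below scales a reference power linearly with irradiance and applies a temperature coefficient. This is not the author's full model; the reference power, reference irradiance, and temperature coefficient are hypothetical.

```python
def pv_power(irradiance, cell_temp, p_ref=250.0, g_ref=1000.0, gamma=-0.004):
    """Very simplified module power model (illustrative only):
    linear in irradiance, with a temperature coefficient gamma (1/degC)."""
    return p_ref * (irradiance / g_ref) * (1.0 + gamma * (cell_temp - 25.0))

# Example: 800 W/m^2 plane-of-array irradiance at a cell temperature of 45 degC
print(round(pv_power(800.0, 45.0), 1), "W")   # 184.0 W for these assumed coefficients
```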

  14. Comparison of Five Modeling Approaches to Quantify and ...

    EPA Pesticide Factsheets

    A generally accepted value for the Radiation Amplification Factor (RAF), with respect to the erythemal action spectrum for sunburn of human skin, is −1.1, indicating that a 1.0% increase in stratospheric ozone leads to a 1.1% decrease in the biologically damaging UV radiation in the erythemal action spectrum reaching the Earth. The RAF is used to quantify the non-linear change in the biologically damaging UV radiation in the erythemal action spectrum as a function of total column ozone (O3). Spectrophotometer measurements recorded at ten US monitoring sites were used in this analysis; over 71,000 total UVR measurement scans of the sky were collected at those ten sites between 1998 and 2000 to assess the RAF value. This UVR dataset was examined to determine the specific impact of clouds on the RAF, an indicator of the amount of UV radiation reaching the Earth that can affect sunburn of human skin. Five de novo modeling approaches were applied to the dataset, and the calculated RAF values ranged from a low of −0.80 to a high of −1.38.
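
    A minimal sketch of the standard power-law definition behind the RAF (UV proportional to O3 raised to the RAF), estimated as the slope of ln(UV) against ln(O3). The data here are synthetic clear-sky values; this is not one of the five modeling approaches used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
ozone = rng.uniform(250.0, 400.0, 500)          # total column ozone (DU), synthetic
true_raf = -1.1
uv = 10.0 * (ozone / 300.0) ** true_raf * rng.lognormal(0.0, 0.05, ozone.size)

# RAF is the slope of ln(UV) versus ln(O3): UV ~ O3 ** RAF
raf, intercept = np.polyfit(np.log(ozone), np.log(uv), 1)
print(f"estimated RAF = {raf:.2f}")   # close to -1.1 for this synthetic clear-sky case
```

    Cloud effects enter as additional scatter (and possibly bias) in the ln(UV)-ln(O3) relationship, which is why the study's estimates vary around the nominal −1.1.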

  15. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision-making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The setting for this study is a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify high flood risk zones.
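
    A minimal sketch of one way to quantify per-cell agreement across an ensemble of inundation predictions: the fraction of members flooding each grid cell and the spread in predicted depth. The depth array and thresholds are synthetic assumptions, not the study's exact metric.

```python
import numpy as np

# depth[i, y, x]: water depth predicted by ensemble member i on a model grid (synthetic)
rng = np.random.default_rng(2)
depth = np.clip(rng.normal(0.3, 0.5, size=(20, 50, 50)), 0.0, None)

flooded = depth > 0.0                     # per-member inundation extent
agreement = flooded.mean(axis=0)          # fraction of members flooding each cell
depth_spread = depth.std(axis=0)          # depth uncertainty per cell

high_confidence_flood = agreement > 0.9   # cells flooded in >90% of members
print("cells flooded with high confidence:", int(high_confidence_flood.sum()))
```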

  16. Children's interpretations of general quantifiers, specific quantifiers, and generics

    PubMed Central

    Gelman, Susan A.; Leslie, Sarah-Jane; Was, Alexandra M.; Koch, Christina M.

    2014-01-01

    Recently, several scholars have hypothesized that generics are a default mode of generalization, and thus that young children may at first treat quantifiers as if they were generic in meaning. To address this issue, the present experiment provides the first in-depth, controlled examination of the interpretation of generics compared to both general quantifiers ("all Xs", "some Xs") and specific quantifiers ("all of these Xs", "some of these Xs"). We provided children (3 and 5 years) and adults with explicit frequency information regarding properties of novel categories, to chart when "some", "all", and generics are deemed appropriate. The data reveal three main findings. First, even 3-year-olds distinguish generics from quantifiers. Second, when children make errors, they tend to be in the direction of treating quantifiers like generics. Third, children were more accurate when interpreting specific versus general quantifiers. We interpret these data as providing evidence for the position that generics are a default mode of generalization, especially when reasoning about kinds. PMID:25893205

  17. Modelling the multidimensional niche by linking functional traits to competitive performance

    PubMed Central

    Maynard, Daniel S.; Leonard, Kenneth E.; Drake, John M.; Hall, David W.; Crowther, Thomas W.; Bradford, Mark A.

    2015-01-01

    Linking competitive outcomes to environmental conditions is necessary for understanding species' distributions and responses to environmental change. Despite this importance, generalizable approaches for predicting competitive outcomes across abiotic gradients are lacking, driven largely by the highly complex and context-dependent nature of biotic interactions. Here, we present and empirically test a novel niche model that uses functional traits to model the niche space of organisms and predict competitive outcomes of co-occurring populations across multiple resource gradients. The model makes no assumptions about the underlying mode of competition and instead applies to those settings where relative competitive ability across environments correlates with a quantifiable performance metric. To test the model, a series of controlled microcosm experiments were conducted using genetically related strains of a widespread microbe. The model identified trait microevolution and performance differences among strains, with the predicted competitive ability of each organism mapped across a two-dimensional carbon and nitrogen resource space. Areas of coexistence and competitive dominance between strains were identified, and the predicted competitive outcomes were validated in approximately 95% of the pairings. By linking trait variation to competitive ability, our work demonstrates a generalizable approach for predicting and modelling competitive outcomes across changing environmental contexts. PMID:26136444

  18. Correlation between human observer performance and model observer performance in differential phase contrast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Ke; Garrett, John; Chen, Guang-Hong

    2013-11-15

    Purpose: With the recently expanding interest and developments in x-ray differential phase contrast CT (DPC-CT), the evaluation of its task-specific detection performance and comparison with the corresponding absorption CT under a given radiation dose constraint become increasingly important. Mathematical model observers are often used to quantify the performance of imaging systems, but their correlations with actual human observers need to be confirmed for each new imaging method. This work is an investigation of the effects of stochastic DPC-CT noise on the correlation of detection performance between model and human observers with signal-known-exactly (SKE) detection tasks. Methods: The detectabilities of different objects (five disks with different diameters and two breast lesion masses) embedded in an experimental DPC-CT noise background were assessed using both model and human observers. The detectability of the disk and lesion signals was then measured using five types of model observers including the prewhitening ideal observer, the nonprewhitening (NPW) observer, the nonprewhitening observer with eye filter and internal noise (NPWEi), the prewhitening observer with eye filter and internal noise (PWEi), and the channelized Hotelling observer (CHO). The same objects were also evaluated by four human observers using the two-alternative forced choice method. The results from the model observer experiment were quantitatively compared to the human observer results to assess the correlation between the two techniques. Results: The contrast-to-detail (CD) curve generated by the human observers for the disk-detection experiments shows that the required contrast to detect a disk is inversely proportional to the square root of the disk size. Based on the CD curves, the ideal and NPW observers tend to systematically overestimate the performance of the human observers. The NPWEi and PWEi observers did not predict human performance well either, as the slopes of
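
    For reference, the sketch below implements the textbook nonprewhitening (NPW) detectability index for an SKE task, assuming a known expected signal profile and noise covariance. It is a generic illustration with a toy 1-D signal and covariance, not the paper's measurement pipeline.

```python
import numpy as np

def snr_npw(signal, noise_cov):
    """Nonprewhitening observer detectability for a signal-known-exactly task:
    SNR_NPW^2 = (s^T s)^2 / (s^T K s), with s the expected signal and K the
    noise covariance (textbook definition, used here only for illustration)."""
    s = np.asarray(signal, float).ravel()
    num = (s @ s) ** 2
    den = s @ noise_cov @ s
    return np.sqrt(num / den)

# Toy example: a small "disk" profile in correlated (AR(1)-like) noise
n = 64
s = np.zeros(n); s[28:36] = 1.0
K = 0.5 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
print(f"SNR_NPW = {snr_npw(s, K):.2f}")
```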

  19. A mathematical method for quantifying in vivo mechanical behaviour of heel pad under dynamic load.

    PubMed

    Naemi, Roozbeh; Chatzistergos, Panagiotis E; Chockalingam, Nachiappan

    2016-03-01

    Mechanical behaviour of the heel pad, as a shock attenuating interface during a foot strike, determines the loading on the musculoskeletal system during walking. The mathematical models that describe the force deformation relationship of the heel pad structure can determine the mechanical behaviour of heel pad under load. Hence, the purpose of this study was to propose a method of quantifying the heel pad stress-strain relationship using force-deformation data from an indentation test. The energy input and energy returned densities were calculated by numerically integrating the area below the stress-strain curve during loading and unloading, respectively. Elastic energy and energy absorbed densities were calculated as the sum of and the difference between energy input and energy returned densities, respectively. By fitting the energy function, derived from a nonlinear viscoelastic model, to the energy density-strain data, the elastic and viscous model parameters were quantified. The viscous and elastic exponent model parameters were significantly correlated with maximum strain, indicating the need to perform indentation tests at realistic maximum strains relevant to walking. The proposed method was shown to be able to differentiate between the elastic and viscous components of the heel pad response to loading and to allow quantification of the corresponding stress-strain model parameters.
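
    A minimal numerical sketch of the energy bookkeeping described above: integrate the areas under synthetic loading and unloading stress-strain curves and take the hysteresis loss as their difference. The curves and units are illustrative, and the viscoelastic model-fitting step is omitted.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule, written out to avoid NumPy version differences."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Synthetic indentation stress-strain data (loading then unloading); values illustrative.
strain_load = np.linspace(0.0, 0.4, 100)
stress_load = 80.0 * strain_load ** 2            # stiffening response (kPa)
strain_unload = strain_load[::-1]
stress_unload = 60.0 * strain_unload ** 2        # hysteresis on unloading

energy_input = trapezoid(stress_load, strain_load)         # area under loading curve
energy_returned = -trapezoid(stress_unload, strain_unload) # area under unloading curve
energy_absorbed = energy_input - energy_returned           # hysteresis loss

print(f"input {energy_input:.2f}, returned {energy_returned:.2f}, "
      f"absorbed {energy_absorbed:.2f} kJ/m^3")
```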

  20. Flame-Vortex Studies to Quantify Markstein Numbers Needed to Model Flame Extinction Limits

    NASA Technical Reports Server (NTRS)

    Driscoll, James F.; Feikema, Douglas A.

    2003-01-01

    This work has quantified a database of Markstein numbers for unsteady flames; future work will quantify a database of flame extinction limits for unsteady conditions. Unsteady extinction limits have not been documented previously; both a stretch rate and a residence time must be measured, since extinction requires that the stretch rate be sufficiently large for a sufficiently long residence time. Ma was measured for an inwardly-propagating flame (IPF) that is negatively-stretched under microgravity conditions. Computations also were performed using RUN-1DL to explain the measurements. The Markstein number of an inwardly-propagating flame, for both the microgravity experiment and the computations, is significantly larger than that of an outwardly-propagating flame (OPF). The computed profiles of the various species within the flame suggest reasons for this difference. Computed hydrogen concentrations build up ahead of the IPF but not the OPF. Understanding was gained by running the computations for both simplified and full-chemistry conditions. Numerical Simulations: To explain the experimental findings, numerical simulations of both inwardly and outwardly propagating spherical flames (with complex chemistry) were generated using the RUN-1DL code, which includes 16 species and 46 reactions.

  1. Quantifying the uncertainty in heritability.

    PubMed

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-05-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.

  2. Quantifying the uncertainty in heritability

    PubMed Central

    Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph

    2014-01-01

    The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270

  3. Quantifying Tropical Glacier Mass Balance Sensitivity to Climate Change Through Regional-Scale Modeling and The Randolph Glacier Inventory

    NASA Astrophysics Data System (ADS)

    Malone, A.

    2017-12-01

    Quantifying mass balance sensitivity to climate change is essential for forecasting glacier evolution and deciphering climate signals embedded in archives of past glacier changes. Ideally, these quantifications result from decades of field measurement, remote sensing, and a hierarchy modeling approach, but in data-sparse regions, such as the Himalayas and tropical Andes, regional-scale modeling rooted in first principles provides a first-order picture. Previous regional-scaling modeling studies have applied a surface energy and mass balance approach in order to quantify equilibrium line altitude sensitivity to climate change. In this study, an expanded regional-scale surface energy and mass balance model is implemented to quantify glacier-wide mass balance sensitivity to climate change for tropical Andean glaciers. Data from the Randolph Glacier Inventory are incorporated, and additional physical processes are included, such as a dynamic albedo and cloud-dependent atmospheric emissivity. The model output agrees well with the limited mass balance records for tropical Andean glaciers. The dominant climate variables driving interannual mass balance variability differ depending on the climate setting. For wet tropical glaciers (annual precipitation >0.75 m y-1), temperature is the dominant climate variable. Different hypotheses for the processes linking wet tropical glacier mass balance variability to temperature are evaluated. The results support the hypothesis that glacier-wide mass balance on wet tropical glaciers is largely dominated by processes at the lowest elevation where temperature plays a leading role in energy exchanges. This research also highlights the transient nature of wet tropical glaciers - the vast majority of tropical glaciers and a vital regional water resource - in an anthropogenic warming world.

  4. A framework for quantifying net benefits of alternative prognostic models‡

    PubMed Central

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066
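
    For orientation, the sketch below computes the conventional decision-analytic net benefit at a fixed risk threshold and compares two hypothetical risk models on synthetic data. The paper extends this idea to life-year units within a multi-study meta-analysis; the formula and data here are the standard per-decision form, not the paper's implementation.

```python
import numpy as np

def net_benefit(risk, event, threshold):
    """Standard decision-curve net benefit at a risk threshold t:
    NB = TP/n - FP/n * t/(1 - t)."""
    treat = risk >= threshold
    n = risk.size
    tp = np.sum(treat & (event == 1))
    fp = np.sum(treat & (event == 0))
    return tp / n - fp / n * threshold / (1.0 - threshold)

# Compare two hypothetical risk models on synthetic outcomes
rng = np.random.default_rng(3)
event = rng.binomial(1, 0.15, 2000)
risk_basic = np.clip(0.15 + 0.05 * rng.normal(size=2000) + 0.10 * event, 0.01, 0.99)
risk_extended = np.clip(0.15 + 0.05 * rng.normal(size=2000) + 0.20 * event, 0.01, 0.99)

t = 0.20
print("delta NB at 20% threshold:",
      round(net_benefit(risk_extended, event, t) - net_benefit(risk_basic, event, t), 4))
```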

  5. Quantifying time-varying cellular secretions with local linear models.

    PubMed

    Byers, Jeff M; Christodoulides, Joseph A; Delehanty, James B; Raghu, Deepa; Raphael, Marc P

    2017-07-01

    Extracellular protein concentrations and gradients initiate a wide range of cellular responses, such as cell motility, growth, proliferation and death. Understanding inter-cellular communication requires spatio-temporal knowledge of these secreted factors and their causal relationship with cell phenotype. Techniques which can detect cellular secretions in real time are becoming more common but generalizable data analysis methodologies which can quantify concentration from these measurements are still lacking. Here we introduce a probabilistic approach in which local-linear models and the law of mass action are applied to obtain time-varying secreted concentrations from affinity-based biosensor data. We first highlight the general features of this approach using simulated data which contains both static and time-varying concentration profiles. Next we apply the technique to determine concentration of secreted antibodies from 9E10 hybridoma cells as detected using nanoplasmonic biosensors. A broad range of time-dependent concentrations was observed: from steady-state secretions of 230 pM near the cell surface to large transients which reached as high as 56 nM over several minutes and then dissipated.
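
    A simplified sketch of the core idea: estimate the time derivative of the bound fraction with local linear fits, then invert the law-of-mass-action binding equation for concentration. The rate constants, binding capacity, and sensor trace below are synthetic assumptions, not values from the study.

```python
import numpy as np

k_on, k_off, b_max = 1.0e5, 1.0e-3, 1.0   # assumed rate constants (1/(M s), 1/s) and capacity

# Synthetic sensor response to a step in concentration C = 1 nM (fraction of sites bound)
t = np.linspace(0.0, 600.0, 601)
c_true = 1.0e-9
k_obs = k_on * c_true + k_off
b = (k_on * c_true / k_obs) * (1.0 - np.exp(-k_obs * t))

def local_slope(t, y, i, half_window=10):
    """Slope of a local linear fit around index i (the 'local linear model' step)."""
    lo, hi = max(0, i - half_window), min(len(t), i + half_window + 1)
    return np.polyfit(t[lo:hi], y[lo:hi], 1)[0]

# Invert dB/dt = k_on*C*(b_max - B) - k_off*B for C at a few time points
for i in (50, 200, 400):
    dbdt = local_slope(t, b, i)
    c_est = (dbdt + k_off * b[i]) / (k_on * (b_max - b[i]))
    print(f"t = {t[i]:5.0f} s  estimated C = {c_est * 1e9:.2f} nM")
```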

  6. Quantifying the signals contained in heterogeneous neural responses and determining their relationships with task performance

    PubMed Central

    Pagan, Marino

    2014-01-01

    The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′). PMID:24920017
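
    A minimal toy sketch of the projection step described above: trial-averaged responses of one neuron across four conditions are projected onto a predefined orthonormal basis, and squared weights give raw (noise-biased) signal modulation. The basis, response values, and component labels are illustrative; the paper's bias correction is not included.

```python
import numpy as np

# Toy: responses of one neuron to 4 conditions (2 stimuli x 2 task contexts)
r = np.array([5.0, 9.0, 6.0, 12.0])

# Predefined orthonormal basis: mean, stimulus, context, and interaction components
B = np.array([[ 1,  1,  1,  1],
              [ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [ 1, -1, -1,  1]]) / 2.0

w = B @ r                    # weights of each signal component
modulation = w ** 2          # raw (noise-biased) signal modulation per component
print(dict(zip(["mean", "stimulus", "context", "interaction"], np.round(modulation, 2))))
```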

  7. Use of a vision model to quantify the significance of factors effecting target conspicuity

    NASA Astrophysics Data System (ADS)

    Gilmore, M. A.; Jones, C. K.; Haynes, A. W.; Tolhurst, D. J.; To, M.; Troscianko, T.; Lovell, P. G.; Parraga, C. A.; Pickavance, K.

    2006-05-01

    When designing camouflage it is important to understand how the human visual system processes the information to discriminate the target from the background scene. A vision model has been developed to compare two images and detect differences in local contrast in each spatial frequency channel. Observer experiments are being undertaken to validate this vision model so that the model can be used to quantify the relative significance of different factors affecting target conspicuity. Synthetic imagery can be used to design improved camouflage systems. The vision model is being used to compare different synthetic images to understand what features in the image are important to reproduce accurately and to identify the optimum way to render synthetic imagery for camouflage effectiveness assessment. This paper will describe the vision model and summarise the results obtained from the initial validation tests. The paper will also show how the model is being used to compare different synthetic images and discuss future work plans.

  8. Quantifying transfer after perceptual-motor sequence learning: how inflexible is implicit learning?

    PubMed

    Sanchez, Daniel J; Yarnik, Eric N; Reber, Paul J

    2015-03-01

    Studies of implicit perceptual-motor sequence learning have often shown learning to be inflexibly tied to the training conditions during learning. Since sequence learning is seen as a model task of skill acquisition, limits on the ability to transfer knowledge from the training context to a performance context indicate important constraints on skill learning approaches. Lack of transfer across contexts has been demonstrated by showing that when task elements are changed following training, performance is disrupted. These results have typically been taken as suggesting that the sequence knowledge relies on integrated representations across task elements (Abrahamse, Jiménez, Verwey, & Clegg, Psychon Bull Rev 17:603-623, 2010a). Using a relatively new sequence learning task, serial interception sequence learning, three experiments are reported that quantify the magnitude of performance disruption after selectively manipulating individual aspects of motor performance or perceptual information. In Experiment 1, selective disruption of the timing or order of sequential actions was examined using a novel response manipulandum that allowed for separate analysis of these two motor response components. In Experiments 2 and 3, transfer was examined after selective disruption of perceptual information that left the motor response sequence intact. All three experiments provided quantifiable estimates of partial transfer to novel contexts that suggest some level of information integration across task elements. However, the ability to identify quantifiable levels of successful transfer indicates that integration is not all-or-none and that measurement sensitivity is key to understanding sequence knowledge representations.

  9. Quantifying transfer after perceptual-motor sequence learning: how inflexible is implicit learning?

    PubMed Central

    Sanchez, Daniel J.; Yarnik, Eric N.

    2015-01-01

    Studies of implicit perceptual-motor sequence learning have often shown learning to be inflexibly tied to the training conditions during learning. Since sequence learning is seen as a model task of skill acquisition, limits on the ability to transfer knowledge from the training context to a performance context indicate important constraints on skill learning approaches. Lack of transfer across contexts has been demonstrated by showing that when task elements are changed following training, performance is disrupted. These results have typically been taken as suggesting that the sequence knowledge relies on integrated representations across task elements (Abrahamse, Jiménez, Verwey, & Clegg, Psychon Bull Rev 17:603–623, 2010a). Using a relatively new sequence learning task, serial interception sequence learning, three experiments are reported that quantify the magnitude of performance disruption after selectively manipulating individual aspects of motor performance or perceptual information. In Experiment 1, selective disruption of the timing or order of sequential actions was examined using a novel response manipulandum that allowed for separate analysis of these two motor response components. In Experiments 2 and 3, transfer was examined after selective disruption of perceptual information that left the motor response sequence intact. All three experiments provided quantifiable estimates of partial transfer to novel contexts that suggest some level of information integration across task elements. However, the ability to identify quantifiable levels of successful transfer indicates that integration is not all-or-none and that measurement sensitivity is key to understanding sequence knowledge representations. PMID:24668505

  10. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  11. Quantifying loopy network architectures.

    PubMed

    Katifori, Eleni; Magnasco, Marcelo O

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs.

  12. Quantifying parameter uncertainty in stochastic models using the Box Cox transformation

    NASA Astrophysics Data System (ADS)

    Thyer, Mark; Kuczera, George; Wang, Q. J.

    2002-08-01

    The Box-Cox transformation is widely used to transform hydrological data to make it approximately Gaussian. Bayesian evaluation of parameter uncertainty in stochastic models using the Box-Cox transformation is hindered by the fact that there is no analytical solution for the posterior distribution. However, the Markov chain Monte Carlo method known as the Metropolis algorithm can be used to simulate the posterior distribution. This method properly accounts for the nonnegativity constraint implicit in the Box-Cox transformation. Nonetheless, a case study using the AR(1) model uncovered a practical problem with the implementation of the Metropolis algorithm. The use of a multivariate Gaussian jump distribution resulted in unacceptable convergence behaviour. This was rectified by developing suitable parameter transformations for the mean and variance of the AR(1) process to remove the strong nonlinear dependencies with the Box-Cox transformation parameter. Applying this methodology to the Sydney annual rainfall data and the Burdekin River annual runoff data illustrates the efficacy of these parameter transformations and demonstrates the value of quantifying parameter uncertainty.
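
    A bare-bones sketch of the basic machinery involved: a random-walk Metropolis sampler over AR(1) parameters plus a Box-Cox lambda, using the conditional likelihood and the Box-Cox Jacobian with flat priors. It deliberately omits the reparameterizations the paper introduces to fix convergence, and the data, step sizes, and prior choices are synthetic assumptions.

```python
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

def log_post(theta, y):
    """Unnormalised log posterior (flat priors) for an AR(1) model on Box-Cox
    transformed data, using the conditional likelihood for t >= 2."""
    mu, log_sigma, phi, lam = theta
    sigma = np.exp(log_sigma)
    if not (-1.0 < phi < 1.0):
        return -np.inf
    z = boxcox(y, lam)
    resid = (z[1:] - mu) - phi * (z[:-1] - mu)
    loglik = -0.5 * np.sum(resid ** 2) / sigma ** 2 - len(resid) * np.log(sigma)
    jacobian = (lam - 1.0) * np.sum(np.log(y))   # Jacobian of the Box-Cox transform
    return loglik + jacobian

def metropolis(y, n_iter=20000, step=(0.05, 0.05, 0.05, 0.02)):
    rng = np.random.default_rng(4)
    step = np.asarray(step)
    theta = np.array([np.log(y).mean(), np.log(np.log(y).std()), 0.1, 0.0])
    lp = log_post(theta, y)
    samples = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal(size=4)
        lp_prop = log_post(prop, y)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)

# Synthetic positive series (lognormal-like, so a Box-Cox lambda near 0 is expected)
rng = np.random.default_rng(5)
y = np.exp(6.0 + 0.3 * rng.normal(size=200))
post = metropolis(y)[5000:]
print("posterior mean of Box-Cox lambda:", round(post[:, 3].mean(), 2))
```

    The slow mixing of this naive sampler, caused by the strong dependence between the mean, variance, and lambda, is exactly the practical problem the parameter transformations in the paper are designed to remove.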

  13. Quantifying spontaneous metastasis in a syngeneic mouse melanoma model using real time PCR.

    PubMed

    Deng, Wentao; McLaughlin, Sarah L; Klinke, David J

    2017-08-07

    Modeling metastasis in vivo with animals is a priority for both revealing mechanisms of tumor dissemination and developing therapeutic methods. While conventional intravenous injection of tumor cells provides an efficient and consistent system for studying tumor cell extravasation and colonization, studying spontaneous metastasis derived from orthotopic tumor sites has the advantage of modeling more aspects of the metastatic cascade, but is challenging as it is difficult to detect small numbers of metastatic cells. In this work, we developed an approach for quantifying spontaneous metastasis in the syngeneic mouse B16 system using real time PCR. We first transduced B16 cells with lentivirus expressing the firefly luciferase Luc2 gene for bioluminescence imaging. Next, we developed a real time quantitative PCR (qPCR) method for the detection of luciferase-expressing, metastatic tumor cells in mouse lungs and other organs. To illustrate the approach, we quantified lung metastasis in both spontaneous and experimental scenarios using B16F0 and B16F10 cells in C57BL/6Ncrl and NOD-Scid Gamma (NSG) mice. We tracked B16 melanoma metastasis with both bioluminescence imaging and qPCR, which were found to be self-consistent. Using this assay, we can quantitatively detect one Luc2-positive tumor cell out of 10^4 tissue cells, which corresponds to a metastatic burden of 1.8 × 10^4 metastatic cells per whole mouse lung. More importantly, the qPCR method was at least a factor of 10 more sensitive in detecting metastatic cell dissemination and should be combined with bioluminescence imaging as a high-resolution, end-point method for final metastatic cell quantitation. Given the rapid growth of primary tumors in many mouse models, assays with improved sensitivity can provide better insight into biological mechanisms that underpin tumor metastasis.
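
    As a generic illustration of qPCR-based quantitation, the sketch below fits a Ct standard curve against known cell numbers and converts a measured Ct back to a cell count. The standard-curve values and the example Ct are hypothetical, not the study's calibration data.

```python
import numpy as np

# Hypothetical standard curve: Ct measured for known numbers of Luc2-positive cells
cells_std = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
ct_std = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

# Fit Ct = slope * log10(cells) + intercept (slope near -3.3 implies ~100% efficiency)
slope, intercept = np.polyfit(np.log10(cells_std), ct_std, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0

def cells_from_ct(ct):
    return 10 ** ((ct - intercept) / slope)

print(f"slope {slope:.2f}, PCR efficiency {efficiency:.0%}")
print(f"Ct = 27.5 -> about {cells_from_ct(27.5):.0f} Luc2-positive cells in the sample")
```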

  14. Identify and Quantify the Mechanistic Sources of Sensor Performance Variation Between Individual Sensors SN1 and SN2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz, Aaron A.; Baldwin, David L.; Cinson, Anthony D.

    2014-08-06

    This Technical Letter Report satisfies the M3AR-14PN2301022 milestone, and is focused on identifying and quantifying the mechanistic sources of sensor performance variation between individual 22-element, linear phased-array sensor prototypes, SN1 and SN2. This effort constitutes an iterative evolution that supports the longer term goal of producing and demonstrating a pre-manufacturing prototype ultrasonic probe that possesses the fundamental performance characteristics necessary to enable the development of a high-temperature sodium-cooled fast reactor inspection system. The scope of the work for this portion of the PNNL effort conducted in FY14 includes performing a comparative evaluation and assessment of the performance characteristics of the SN1 and SN2 22-element PA-UT probes manufactured at PNNL. Key transducer performance parameters, such as sound field dimensions, resolution capabilities, frequency response, and bandwidth, are used as a metric for the comparative evaluation and assessment of the SN1 and SN2 engineering test units.

  15. A Computational Model of the Fetal Circulation to Quantify Blood Redistribution in Intrauterine Growth Restriction

    PubMed Central

    Garcia-Canadilla, Patricia; Rudenick, Paula A.; Crispi, Fatima; Cruz-Lemini, Monica; Palau, Georgina; Camara, Oscar; Gratacos, Eduard; Bijens, Bart H.

    2014-01-01

    Intrauterine growth restriction (IUGR) due to placental insufficiency is associated with blood flow redistribution in order to maintain delivery of oxygenated blood to the brain. Given that, in the fetus the aortic isthmus (AoI) is a key arterial connection between the cerebral and placental circulations, quantifying AoI blood flow has been proposed to assess this brain sparing effect in clinical practice. While numerous clinical studies have studied this parameter, fundamental understanding of its determinant factors and its quantitative relation with other aspects of haemodynamic remodeling has been limited. Computational models of the cardiovascular circulation have been proposed for exactly this purpose since they allow both for studying the contributions from isolated parameters as well as estimating properties that cannot be directly assessed from clinical measurements. Therefore, a computational model of the fetal circulation was developed, including the key elements related to fetal blood redistribution and using measured cardiac outflow profiles to allow personalization. The model was first calibrated using patient-specific Doppler data from a healthy fetus. Next, in order to understand the contributions of the main parameters determining blood redistribution, AoI and middle cerebral artery (MCA) flow changes were studied by variation of cerebral and peripheral-placental resistances. Finally, to study how this affects an individual fetus, the model was fitted to three IUGR cases with different degrees of severity. In conclusion, the proposed computational model provides a good approximation to assess blood flow changes in the fetal circulation. The results support that while MCA flow is mainly determined by a fall in brain resistance, the AoI is influenced by a balance between increased peripheral-placental and decreased cerebral resistances. Personalizing the model allows for quantifying the balance between cerebral and peripheral-placental remodeling

  16. Analyzing complex networks evolution through Information Theory quantifiers

    NASA Astrophysics Data System (ADS)

    Carpi, Laura C.; Rosso, Osvaldo A.; Saco, Patricia M.; Ravetti, Martín Gómez

    2011-01-01

    A methodology to analyze dynamical changes in complex networks based on Information Theory quantifiers is proposed. The square root of the Jensen-Shannon divergence, a measure of dissimilarity between two probability distributions, and the MPR Statistical Complexity are used to quantify states in the network evolution process. Three cases are analyzed: the Watts-Strogatz model, a gene network during the progression of Alzheimer's disease, and a climate network for the Tropical Pacific region to study the El Niño/Southern Oscillation (ENSO) dynamics. We find that the proposed quantifiers are able not only to capture changes in the dynamics of the processes but also to quantify and compare states in their evolution.
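
    A minimal sketch of the first quantifier mentioned above, the square root of the Jensen-Shannon divergence between two probability distributions (for example, degree distributions of two network states). The example distributions are made up; the MPR statistical complexity is not implemented here.

```python
import numpy as np

def jsd_sqrt(p, q):
    """Square root of the Jensen-Shannon divergence (a metric between two
    probability distributions), computed with base-2 logarithms."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))

    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

# Example: degree distributions of a regular-like vs a more random network state
p = [0.0, 0.1, 0.8, 0.1, 0.0]
q = [0.1, 0.2, 0.4, 0.2, 0.1]
print(round(jsd_sqrt(p, q), 3))
```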

  17. A Unified Approach to Quantifying Feedbacks in Earth System Models

    NASA Astrophysics Data System (ADS)

    Taylor, K. E.

    2008-12-01

    In order to speed progress in reducing uncertainty in climate projections, the processes that most strongly influence those projections must be identified. It is of some importance, therefore, to assess the relative strengths of various climate feedbacks and to determine the degree to which various earth system models (ESMs) agree in their simulations of these processes. Climate feedbacks have been traditionally quantified in terms of their impact on the radiative balance of the planet, whereas carbon cycle responses have been assessed in terms of the size of the perturbations to the surface fluxes of carbon dioxide. In this study we introduce a diagnostic strategy for unifying the two approaches, which allows us to directly compare the strength of carbon-climate feedbacks with other conventional climate feedbacks associated with atmospheric and surface changes. Applying this strategy to a highly simplified model of the carbon-climate system demonstrates the viability of the approach. In the simple model we find that even if the strength of the carbon-climate feedbacks is very large, the uncertainty associated with the overall response of the climate system is likely to be dominated by uncertainties in the much larger feedbacks associated with clouds. This does not imply that the carbon cycle itself is unimportant, only that changes in the carbon cycle that are associated with climate change have a relatively small impact on global temperatures. This new, unified diagnostic approach is suitable for assessing feedbacks in even the most sophisticated earth system models. It will be interesting to see whether our preliminary conclusions are confirmed when output from the more realistic models is analyzed. This work was carried out at the University of California Lawrence Livermore National Laboratory under Contract W-7405-Eng-48.

  18. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model.

    PubMed

    Jurczyk, Jan; Eckrot, Alexander; Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis that peaked in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier bound to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign.
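
    For intuition, the sketch below computes an unconstrained global minimum-variance portfolio (one "ground state" of the mean-variance model) from the covariance of index returns. The returns are synthetic, and the real-world constraints and efficient-frontier sweep used in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
returns = rng.normal(0.0005, 0.01, size=(500, 5))   # synthetic daily index returns
returns[:, 1] += 0.5 * returns[:, 0]                # induce some cross-correlation

cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])

# Closed-form global minimum-variance weights: w = C^-1 1 / (1' C^-1 1)
w = np.linalg.solve(cov, ones)
w /= w.sum()
print("minimum-variance weights:", np.round(w, 3))

# A sudden shift of such weights over time (e.g., concentration into few assets) is the
# kind of change in "average investor" behaviour used as an early-warning signal.
```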

  19. Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models.

    PubMed

    Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Prediction is a fundamental part of the prevention of cardiovascular diseases (CVD). The development of prediction algorithms based on multivariate regression models emerged several decades ago. In parallel with the development of predictive models, biomarker research has grown on an impressively large scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers, or more basically, how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for use when comparing the predictive performance of models with and without novel biomarkers. Lack of user-friendly statistical software has restricted implementation of novel model assessment methods when examining novel biomarkers. We therefore intended to develop user-friendly software that could be used by researchers with few programming skills. We have written a Stata command that is intended to help researchers obtain cut-point-free and cut-point-based net reclassification improvement (NRI) indices and relative and absolute integrated discrimination improvement (IDI) indices for logistic regression analyses. We applied the commands to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on a family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. The command is addpred for logistic regression models. The Stata package provided herein can encourage the use of novel methods in examining the predictive capacity of the ever-emerging plethora of novel biomarkers.
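
    For readers outside Stata, the sketch below shows the standard continuous (cut-point-free) NRI and absolute IDI formulas on synthetic predicted risks. It is a plain illustration of the metrics, not the authors' addpred command.

```python
import numpy as np

def idi_nri(p_old, p_new, event):
    """Continuous (cut-point-free) NRI and absolute IDI for two sets of predicted risks."""
    event = np.asarray(event, bool)
    up = np.sign(p_new - p_old)
    nri = up[event].mean() - up[~event].mean()                 # continuous NRI
    idi = ((p_new[event].mean() - p_new[~event].mean()) -
           (p_old[event].mean() - p_old[~event].mean()))       # absolute IDI
    return nri, idi

# Synthetic example: an added biomarker slightly improves risk separation
rng = np.random.default_rng(7)
event = rng.binomial(1, 0.2, 3000).astype(bool)
p_old = np.clip(0.2 + 0.08 * event + 0.08 * rng.normal(size=3000), 0.01, 0.99)
p_new = np.clip(p_old + 0.04 * event - 0.01, 0.01, 0.99)
nri, idi = idi_nri(p_old, p_new, event)
print(f"continuous NRI = {nri:.3f}, absolute IDI = {idi:.3f}")
```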

  20. A Paleoclimate Modeling Perspective on the Challenges to Quantifying Paleoelevation

    NASA Astrophysics Data System (ADS)

    Poulsen, C. J.; Aron, P.; Feng, R.; Fiorella, R.; Shen, H.; Skinner, C. B.

    2016-12-01

    Surface elevation is a fundamental characteristic of the land surface. Gradients in elevation associated with mountain ranges are a first-order control on local and regional climate; weathering, erosion and nutrient transport; and the evolution and biodiversity of organisms. In addition, surface elevations are a proxy for the geodynamic processes that created them. Efforts to quantify paleoelevation have relied on reconstructions of mineralogical and fossil proxies that preserve environmental signals, such as surface temperature, moist enthalpy, or surface water isotopic composition, that have been observed to systematically vary with elevation. The challenge to estimating paleoelevation from proxies arises because the modern-day elevation dependence of these environmental parameters is not constant and has differed in the past in response to changes in both surface elevation and other climatic forcings, including greenhouse gas and orbital variations. For example, downward mixing of vapor that is isotopically enriched through troposphere warming under greenhouse forcing reduces the isotopic lapse rate. Without considering these factors, paleoelevation estimates for orogenic systems can be in error by hundreds of meters or more. Isotope-enabled climate models provide a tool for separating the climate response to these forcings into elevation and non-elevation components and for identifying the processes that alter the elevation dependence of environmental parameters. Our past and ongoing work has focused on the simulated climate response to surface uplift of the South American Andes, the North American Cordillera, and the Tibetan-Himalayan system during the Cenozoic, and its implications for interpreting proxy records from these regions. This work demonstrates that the climate response to uplift, and the implications for interpreting proxy records, varies tremendously by region. In this presentation, we synthesize climate responses to uplift across orogens, present new

  1. Inconsistent Strategies to Spin up Models in CMIP5: Implications for Ocean Biogeochemical Model Performance Assessment

    NASA Technical Reports Server (NTRS)

    Seferian, Roland; Gehlen, Marion; Bopp, Laurent; Resplandy, Laure; Orr, James C.; Marti, Olivier; Dunne, John P.; Christian, James R.; Doney, Scott C.; Ilyina, Tatiana

    2015-01-01

    During the fifth phase of the Coupled Model Intercomparison Project (CMIP5), substantial efforts were made to systematically assess the skill of Earth system models. One goal was to check how realistically representative marine biogeochemical tracer distributions could be reproduced by models. In routine assessments, model historical hindcasts were compared with available modern biogeochemical observations. However, these assessments considered neither how close modeled biogeochemical reservoirs were to equilibrium nor the sensitivity of model performance to initial conditions or to the spin-up protocols. Here, we explore how the large diversity in spin-up protocols used for marine biogeochemistry in CMIP5 Earth system models (ESMs) contributes to model-to-model differences in the simulated fields. We take advantage of a 500-year spin-up simulation of IPSL-CM5A-LR to quantify the influence of the spin-up protocol on model ability to reproduce relevant data fields. Amplification of biases in selected biogeochemical fields (O2, NO3, Alk-DIC) is assessed as a function of spin-up duration. We demonstrate that a relationship between spin-up duration and assessment metrics emerges from our model results and holds when confronted with a larger ensemble of CMIP5 models. This shows that drift has implications for performance assessment in addition to possibly aliasing estimates of climate change impact. Our study suggests that differences in spin-up protocols could explain a substantial part of model disparities, constituting a source of model-to-model uncertainty. This requires more attention in future model intercomparison exercises in order to provide quantitatively more correct ESM results on marine biogeochemistry and carbon cycle feedbacks.

  2. Extensions to the visual predictive check to facilitate model performance evaluation.

    PubMed

    Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert

    2008-04-01

    The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is taken into consideration by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of and the information residing in missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage, thus regardless of the density of the data, above and below the predicted median at each time point, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
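
    A minimal sketch of the QVPC idea described above: at each time point, report the percentage of observations above and below the model-predicted median, plus the percentage of unavailable (missing) data. The observations, predicted medians, and missingness pattern are synthetic; the bootstrap-based BVPC is not implemented here.

```python
import numpy as np

def qvpc(observed, predicted_median):
    """Percentages of observations above/below the predicted median per time point,
    plus the percentage unavailable (NaN)."""
    obs = np.asarray(observed, float)            # shape: (n_subjects, n_times)
    n = obs.shape[0]
    missing = np.isnan(obs)
    above = np.sum((obs > predicted_median) & ~missing, axis=0) / n * 100.0
    below = np.sum((obs < predicted_median) & ~missing, axis=0) / n * 100.0
    return above, below, missing.mean(axis=0) * 100.0

# Synthetic data: 40 subjects, 6 sampling times, some samples missing
rng = np.random.default_rng(8)
pred_median = np.array([10.0, 8.0, 6.5, 5.0, 4.0, 3.2])
obs = pred_median * rng.lognormal(0.0, 0.3, size=(40, 6))
obs[rng.uniform(size=obs.shape) < 0.1] = np.nan
above, below, miss = qvpc(obs, pred_median)
print("above %:", np.round(above, 1))
print("below %:", np.round(below, 1))
print("missing %:", np.round(miss, 1))
```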

  3. Quantifying Performance Bias in Label Fusion

    DTIC Science & Technology

    2012-08-21

    Fusing legacy classification systems (e.g., via Boolean combination of classifiers in ROC space, as studied in the anomaly detection literature) may provide the end-user with the means to appropriately adjust the performance and optimal thresholds for performance.

  4. Quantifying uncertainties in precipitation measurement

    NASA Astrophysics Data System (ADS)

    Chen, H. Z. D.

    2017-12-01

    The scientific community has a long history of utilizing precipitation data for climate model design. However, the precipitation record and its models contain more uncertainty than their temperature counterparts. Literature research has shown precipitation measurements to be highly influenced by their surrounding environment, and weather stations are traditionally situated in open areas and subject to various limitations. As a result, this restriction limits the ability of the scientific community to fully close the loop on the water cycle. Horizontal redistribution has been shown to be a major factor influencing precipitation measurements. Efforts have been placed on reducing its effect on the monitoring apparatus. However, the number of factors contributing to this uncertainty is large and difficult to fully capture. As a result, the noise factor remains high in precipitation data. This study aims to quantify all uncertainties in precipitation data by factoring out horizontal redistribution, measuring it directly. The horizontal contribution to precipitation will be quantified by measuring precipitation at different heights, with one gauge directly shadowing the other. The upper collection represents traditional precipitation data, whereas the bottom measurement sums up the overall error term at a given location. Measurements will be recorded and correlated with the nearest available wind measurements to quantify their impact on the traditional precipitation record. Collections at different locations will also be compared to see whether this phenomenon is location specific or whether a general trend can be derived. We aim to demonstrate a new way to isolate the noise component in traditional precipitation data via empirical measurements and, by doing so, improve the overall quality of the historic precipitation record and provide more accurate information for the design and calibration of large-scale climate models.
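
    A rough sketch of the analysis implied above: correlate the shadowed (lower) gauge catch, taken as the isolated error term, with wind speed and express it as a fraction of the traditional record. All series below are synthetic stand-ins, not the author's field data.

```python
import numpy as np

# Synthetic daily records: upper (traditional) gauge, lower shadowed gauge, wind speed.
rng = np.random.default_rng(9)
wind = rng.gamma(2.0, 2.0, 365)                                     # m/s
upper_gauge = rng.gamma(0.8, 5.0, 365)                              # mm/day, traditional record
horiz_error = 0.03 * wind * upper_gauge + rng.normal(0.0, 0.2, 365) # wind-driven component
lower_gauge = np.clip(horiz_error, 0.0, None)                       # shadowed gauge ~ error term

# Correlate the isolated error term with wind speed at the nearest station
r = np.corrcoef(wind, lower_gauge)[0, 1]
rel_error = lower_gauge.sum() / upper_gauge.sum() * 100.0
print(f"correlation with wind: r = {r:.2f}; error term = {rel_error:.1f}% of total catch")
```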

  5. AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6

    DOE PAGES

    Collins, William J.; Lamarque, Jean-François; Schulz, Michael; ...

    2017-02-09

    The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically reactive gases. These are specifically near-term climate forcers (NTCFs: methane, tropospheric ozone and aerosols, and their precursors), nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions. 1. How have anthropogenic emissions contributed to global radiative forcing and affected regional climate over the historical period? 2. How might future policies (on climate, air quality and land use) affect the abundances of NTCFs and their climate impacts? 3. How do uncertainties in historical NTCF emissions affect radiative forcing estimates? 4. How important are climate feedbacks to natural NTCF emissions, atmospheric composition, and radiative effects? These questions will be addressed through targeted simulations with CMIP6 climate models that include an interactive representation of tropospheric aerosols and atmospheric chemistry. These simulations build on the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the CMIP6 historical simulations, and future projections performed elsewhere in CMIP6, allowing the contributions from aerosols and/or chemistry to be quantified. As a result, specific diagnostics are requested as part of the CMIP6 data request to highlight the chemical composition of the atmosphere, to evaluate the performance of the models, and to understand differences in behaviour between them.

  6. AerChemMIP: Quantifying the effects of chemistry and aerosols in CMIP6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William J.; Lamarque, Jean-François; Schulz, Michael

    The Aerosol Chemistry Model Intercomparison Project (AerChemMIP) is endorsed by the Coupled-Model Intercomparison Project 6 (CMIP6) and is designed to quantify the climate and air quality impacts of aerosols and chemically reactive gases. These are specifically near-term climate forcers (NTCFs: methane, tropospheric ozone and aerosols, and their precursors), nitrous oxide and ozone-depleting halocarbons. The aim of AerChemMIP is to answer four scientific questions. 1. How have anthropogenic emissions contributed to global radiative forcing and affected regional climate over the historical period? 2. How might future policies (on climate, air quality and land use) affect the abundances of NTCFs and their climate impacts? 3. How do uncertainties in historical NTCF emissions affect radiative forcing estimates? 4. How important are climate feedbacks to natural NTCF emissions, atmospheric composition, and radiative effects? These questions will be addressed through targeted simulations with CMIP6 climate models that include an interactive representation of tropospheric aerosols and atmospheric chemistry. These simulations build on the CMIP6 Diagnostic, Evaluation and Characterization of Klima (DECK) experiments, the CMIP6 historical simulations, and future projections performed elsewhere in CMIP6, allowing the contributions from aerosols and/or chemistry to be quantified. As a result, specific diagnostics are requested as part of the CMIP6 data request to highlight the chemical composition of the atmosphere, to evaluate the performance of the models, and to understand differences in behaviour between them.

  7. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE PAGES

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.; ...

    2016-10-20

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological
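
    For orientation, the sketch below implements one widely used ITQ, the normalized Bandt-Pompe permutation entropy, and applies it to a regular and a noise-like series; the embedding dimension, delay and test signals are illustrative choices, not taken from the study.

```python
# Minimal sketch of one common Information Theory Quantifier: normalized
# Bandt-Pompe permutation entropy (0 = fully regular, 1 = noise-like).
# Embedding dimension, delay and the test series are illustrative only.
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, dim=4, delay=1):
    """Normalized permutation entropy of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    # Count ordinal patterns (rank orderings) of each embedded window.
    patterns = Counter(tuple(np.argsort(x[i:i + dim * delay:delay])) for i in range(n))
    p = np.array(list(patterns.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(dim)))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0 * np.pi, 2000)
print(permutation_entropy(np.sin(t)))              # low entropy: regular dynamics
print(permutation_entropy(rng.normal(size=2000)))  # close to 1: noise-like dynamics
```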

  8. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sippel, Sebastian; Lange, Holger; Mahecha, Miguel D.

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. Here we demonstrate that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological

  9. Diagnosing the Dynamics of Observed and Simulated Ecosystem Gross Primary Productivity with Time Causal Information Theory Quantifiers

    PubMed Central

    Sippel, Sebastian; Mahecha, Miguel D.; Hauhs, Michael; Bodesheim, Paul; Kaminski, Thomas; Gans, Fabian; Rosso, Osvaldo A.

    2016-01-01

    Data analysis and model-data comparisons in the environmental sciences require diagnostic measures that quantify time series dynamics and structure, and are robust to noise in observational data. This paper investigates the temporal dynamics of environmental time series using measures quantifying their information content and complexity. The measures are used to classify natural processes on one hand, and to compare models with observations on the other. The present analysis focuses on the global carbon cycle as an area of research in which model-data integration and comparisons are key to improving our understanding of natural phenomena. We investigate the dynamics of observed and simulated time series of Gross Primary Productivity (GPP), a key variable in terrestrial ecosystems that quantifies ecosystem carbon uptake. However, the dynamics, patterns and magnitudes of GPP time series, both observed and simulated, vary substantially on different temporal and spatial scales. We demonstrate here that information content and complexity, or Information Theory Quantifiers (ITQ) for short, serve as robust and efficient data-analytical and model benchmarking tools for evaluating the temporal structure and dynamical properties of simulated or observed time series at various spatial scales. At continental scale, we compare GPP time series simulated with two models and an observations-based product. This analysis reveals qualitative differences between model evaluation based on ITQ compared to traditional model performance metrics, indicating that good model performance in terms of absolute or relative error does not imply that the dynamics of the observations is captured well. Furthermore, we show, using an ensemble of site-scale measurements obtained from the FLUXNET archive in the Mediterranean, that model-data or model-model mismatches as indicated by ITQ can be attributed to and interpreted as differences in the temporal structure of the respective ecological time

  10. CONFOCAL MICROSCOPY SYSTEM PERFORMANCE: FOUNDATIONS FOR QUANTIFYING CYTOMETRIC APPLICATIONS WITH SPECTROSCOPIC INSTRUMENTS

    EPA Science Inventory

    Confocal laser-scanning microscopy (CLSM) has enormous potential in many biological fields. The goal of a CLSM is to acquire and quantify fluorescence and, in some instruments, acquire spectral characterization of the emitted signal. The accuracy of these measurements demands t...

  11. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiser, I; Lu, Z

    2014-06-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers, or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions-of-interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching observer (TM). Different sample sizes were generated by randomly selecting a subset of image pairs, (N=20,40,60,80). Observer performance was quantified as proportion of correct responses (PC). Bias was quantified as the relative difference of PC for 20 and 80 image pairs. Results: For N=100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for CHO (Gabor), 7% for CHO (LG), and 3% for TM. The relative standard deviation, σ(PC)/PC at N=20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in the clinical practice, a statistically efficient observer model, that can predict performance from few samples, is needed. Our results identified two observer models that may be suited for this task.
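
    As a rough illustration of the observer-model machinery described above, the sketch below builds a channelized Hotelling observer with generic Gaussian channels and scores it by 2-AFC proportion correct on synthetic ROIs; the channel profiles, noise level and signal are placeholders, not the study's data, and training and testing on the same pairs (resubstitution) is itself one source of the sample-size bias discussed in the record.

```python
# Minimal sketch of a channelized Hotelling observer (CHO) scored by 2-AFC
# proportion correct; ROIs, noise and channels are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
roi, n_pairs = 32, 80                       # ROI width in pixels, number of image pairs

def gaussian_channels(size, widths=(2, 4, 8, 16)):
    """Rotationally symmetric Gaussian channels, flattened to column vectors."""
    grid = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = (grid ** 2).sum(axis=0)
    return np.stack([np.exp(-r2 / (2.0 * w * w)).ravel() for w in widths], axis=1)

U = gaussian_channels(roi)                                       # (pixels, channels)
signal = 5.0 * np.exp(-((np.mgrid[:roi, :roi] - roi / 2.0) ** 2).sum(0) / 18.0).ravel()
absent = rng.normal(0.0, 10.0, (n_pairs, roi * roi))             # signal-absent ROIs
present = rng.normal(0.0, 10.0, (n_pairs, roi * roi)) + signal   # signal-present ROIs

v_a, v_p = absent @ U, present @ U                               # channel outputs
S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))                        # pooled channel covariance
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))                # Hotelling template
pc = np.mean(v_p @ w > v_a @ w)                                  # 2-AFC proportion correct
print(f"CHO proportion correct on {n_pairs} pairs (resubstitution): {pc:.2f}")
```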

  12. Micro-CT image-derived metrics quantify arterial wall distensibility reduction in a rat model of pulmonary hypertension

    NASA Astrophysics Data System (ADS)

    Johnson, Roger H.; Karau, Kelly L.; Molthen, Robert C.; Haworth, Steven T.; Dawson, Christopher A.

    2000-04-01

    We developed methods to quantify arterial structural and mechanical properties in excised rat lungs and applied them to investigate the distensibility decrease accompanying chronic hypoxia-induced pulmonary hypertension. Lungs of control and hypertensive (three weeks 11% O2) animals were excised and a contrast agent introduced before micro-CT imaging with a special purpose scanner. For each lung, four 3D image data sets were obtained, each at a different intra-arterial contrast agent pressure. Vessel segment diameters and lengths were measured at all levels in the arterial tree hierarchy, and these data used to generate features sensitive to distensibility changes. Results indicate that measurements obtained from 3D micro-CT images can be used to quantify vessel biomechanical properties in this rat model of pulmonary hypertension and that distensibility is reduced by exposure to chronic hypoxia. Mechanical properties can be assessed in a localized fashion and quantified in a spatially-resolved way or as a single parameter describing the tree as a whole. Micro-CT is a nondestructive way to rapidly assess structural and mechanical properties of arteries in small animal organs maintained in a physiological state. Quantitative features measured by this method may provide valuable insights into the mechanisms causing the elevated pressures in pulmonary hypertension of differing etiologies and should become increasingly valuable tools in the study of complex phenotypes in small-animal models of important diseases such as hypertension.

  13. Quantifying camouflage: how to predict detectability from appearance.

    PubMed

    Troscianko, Jolyon; Skelhorn, John; Stevens, Martin

    2017-01-06

    Quantifying the conspicuousness of objects against particular backgrounds is key to understanding the evolution and adaptive value of animal coloration, and in designing effective camouflage. Quantifying detectability can reveal how colour patterns affect survival, how animals' appearances influence habitat preferences, and how receiver visual systems work. Advances in calibrated digital imaging are enabling the capture of objective visual information, but it remains unclear which methods are best for measuring detectability. Numerous descriptions and models of appearance have been used to infer the detectability of animals, but these models are rarely empirically validated or directly compared to one another. We compared the performance of human 'predators' to a bank of contemporary methods for quantifying the appearance of camouflaged prey. Background matching was assessed using several established methods, including sophisticated feature-based pattern analysis, granularity approaches and a range of luminance and contrast difference measures. Disruptive coloration is a further camouflage strategy where high contrast patterns disrupt the prey's tell-tale outline, making it more difficult to detect. Disruptive camouflage has been studied intensely over the past decade, yet defining and measuring it have proven far more problematic. We assessed how well existing disruptive coloration measures predicted capture times. Additionally, we developed a new method for measuring edge disruption based on an understanding of sensory processing and the way in which false edges are thought to interfere with animal outlines. Our novel measure of disruptive coloration was the best predictor of capture times overall, highlighting the importance of false edges in concealment over and above pattern or luminance matching. The efficacy of our new method for measuring disruptive camouflage together with its biological plausibility and computational efficiency represents a substantial

  14. Stata Modules for Calculating Novel Predictive Performance Indices for Logistic Models

    PubMed Central

    Barkhordari, Mahnaz; Padyab, Mojgan; Hadaegh, Farzad; Azizi, Fereidoun; Bozorgmanesh, Mohammadreza

    2016-01-01

    Background Prediction is a fundamental part of prevention of cardiovascular diseases (CVD). The development of prediction algorithms based on multivariate regression models began several decades ago. In parallel with the development of predictive models, biomarker research has emerged on an impressive scale. The key question is how best to assess and quantify the improvement in risk prediction offered by new biomarkers or, more basically, how to assess the performance of a risk prediction model. Discrimination, calibration, and added predictive value have recently been suggested for comparing the predictive performance of models with and without novel biomarkers. Objectives Lack of user-friendly statistical software has restricted implementation of novel model assessment methods when examining novel biomarkers. We therefore intended to develop user-friendly software that could be used by researchers with few programming skills. Materials and Methods We have written a Stata command intended to help researchers obtain cut-point-free and cut-point-based net reclassification improvement (NRI) indices and relative and absolute integrated discrimination improvement (IDI) indices for logistic regression analyses. We applied the commands to real data on women participating in the Tehran Lipid and Glucose Study (TLGS) to examine whether information on a family history of premature CVD, waist circumference, and fasting plasma glucose can improve the predictive performance of the Framingham "general CVD risk" algorithm. Results The command is addpred for logistic regression models. Conclusions The Stata package provided herein can encourage the use of novel methods in examining the predictive capacity of the ever-emerging plethora of novel biomarkers. PMID:27279830
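
    The cut-point-free NRI and absolute IDI reported by such commands reduce to simple averages of predicted risks; the sketch below (in Python rather than Stata, and not the addpred command itself) shows the underlying arithmetic on placeholder risk estimates.

```python
# Minimal sketch of the cut-point-free (continuous) NRI and absolute IDI for two
# risk models on the same subjects; p_old, p_new and y below are placeholders.
import numpy as np

def nri_idi(p_old, p_new, y):
    """Continuous NRI and absolute IDI for a binary outcome y (1 = event)."""
    p_old, p_new, y = map(np.asarray, (p_old, p_new, y))
    up, down = p_new > p_old, p_new < p_old
    ev, ne = y == 1, y == 0
    nri = (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())
    idi = (p_new[ev].mean() - p_old[ev].mean()) - (p_new[ne].mean() - p_old[ne].mean())
    return nri, idi

p_old = np.array([0.10, 0.20, 0.15, 0.40, 0.30, 0.05])   # risks from the base model
p_new = np.array([0.20, 0.15, 0.25, 0.55, 0.20, 0.04])   # risks with the new biomarker
y = np.array([1, 0, 1, 1, 0, 0])
print(nri_idi(p_old, p_new, y))                           # (2.0, ~0.17) for this toy data
```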

  15. Quantifying Systemic Risk by Solutions of the Mean-Variance Risk Model

    PubMed Central

    Morgenstern, Ingo

    2016-01-01

    The world is still recovering from the financial crisis peaking in September 2008. The triggering event was the bankruptcy of Lehman Brothers. To detect such turmoils, one can investigate the time-dependent behaviour of correlations between assets or indices. These cross-correlations have been connected to the systemic risks within markets by several studies in the aftermath of this crisis. We study 37 different US indices which cover almost all aspects of the US economy and show that monitoring an average investor's behaviour can be used to quantify times of increased risk. In this paper the overall investing strategy is approximated by the ground-states of the mean-variance model along the efficient frontier, subject to real-world constraints. Changes in the behaviour of the average investor are utilized as an early warning sign. PMID:27351482
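
    The "ground states" referred to above are efficient-frontier portfolios of the classical mean-variance model; as a simplified illustration (closed-form Lagrange solution, without the real-world constraints used in the paper, and with synthetic returns standing in for the 37 US indices), such a portfolio can be computed as follows.

```python
# Minimal sketch of a mean-variance efficient-frontier portfolio (minimum variance
# for a target return, unconstrained closed form); returns below are synthetic.
import numpy as np

def frontier_weights(mu, sigma, target):
    """Weights minimizing w'Sw subject to sum(w) = 1 and w'mu = target."""
    inv = np.linalg.inv(sigma)
    ones = np.ones(len(mu))
    a, b, c = ones @ inv @ ones, ones @ inv @ mu, mu @ inv @ mu
    lam = (c - b * target) / (a * c - b * b)
    gam = (a * target - b) / (a * c - b * b)
    return inv @ (lam * ones + gam * mu)

rng = np.random.default_rng(5)
returns = rng.normal(0.0004, 0.01, (500, 8))          # stand-in daily index returns
mu, sigma = returns.mean(axis=0), np.cov(returns.T)
w = frontier_weights(mu, sigma, target=mu.mean())
print(w.round(3), "portfolio risk:", float(np.sqrt(w @ sigma @ w)))
```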

  16. Performance monitoring can boost turboexpander efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIntire, R.

    1982-07-05

    This paper discusses ways of improving the productivity of the turboexpander/refrigeration system's radial expander and radial compressor through systematic review of component performance. It reviews several techniques to determine the performance of an expander and compressor. It suggests that any performance improvement program requires quantifying the performance of separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curve of individual components. The model is used to quantify the economic benefits of any change in the system, either a change in operating procedures or a hardware modification. Topics include proper ways of using antisurge control valves and modifying flow rate/shaft speed (Q/N). It is noted that compressor efficiency depends on the incidence angle of the blade at the rotor leading edge and the angle of the incoming gas stream.

  17. A Bayesian model for quantifying the change in mortality associated with future ozone exposures under climate change.

    PubMed

    Alexeeff, Stacey E; Pfister, Gabriele G; Nychka, Doug

    2016-03-01

    Climate change is expected to have many impacts on the environment, including changes in ozone concentrations at the surface level. A key public health concern is the potential increase in ozone-related summertime mortality if surface ozone concentrations rise in response to climate change. Although ozone formation depends partly on summertime weather, which exhibits considerable inter-annual variability, previous health impact studies have not incorporated the variability of ozone into their prediction models. A major source of uncertainty in the health impacts is the variability of the modeled ozone concentrations. We propose a Bayesian model and Monte Carlo estimation method for quantifying health effects of future ozone. An advantage of this approach is that we include the uncertainty in both the health effect association and the modeled ozone concentrations. Using our proposed approach, we quantify the expected change in ozone-related summertime mortality in the contiguous United States between 2000 and 2050 under a changing climate. The mortality estimates show regional patterns in the expected degree of impact. We also illustrate the results when using a common technique in previous work that averages ozone to reduce the size of the data, and contrast these findings with our own. Our analysis yields more realistic inferences, providing clearer interpretation for decision making regarding the impacts of climate change. © 2015, The International Biometric Society.
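
    The core of such a Monte Carlo estimate is to draw both the concentration-response coefficient and the modeled ozone change from their respective distributions and propagate them through a log-linear risk function; the sketch below uses entirely hypothetical numbers, not the study's coefficients, baselines or ozone fields.

```python
# Minimal Monte Carlo sketch: propagate uncertainty in both the health-effect
# association (beta) and the modeled ozone change into an excess-mortality
# estimate. All numerical values are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000
beta_hat, beta_se = 0.0005, 0.0002       # log relative risk per ppb ozone (hypothetical)
d_o3_mean, d_o3_sd = 3.0, 1.5            # modeled summertime ozone change, ppb (hypothetical)
baseline_deaths = 50_000                 # summertime baseline mortality (hypothetical)

beta = rng.normal(beta_hat, beta_se, n_draws)    # uncertainty in the association
d_o3 = rng.normal(d_o3_mean, d_o3_sd, n_draws)   # variability in the modeled ozone change
excess = baseline_deaths * (1.0 - np.exp(-beta * d_o3))
lo, med, hi = np.percentile(excess, [2.5, 50, 97.5])
print(f"excess deaths: {med:.0f} (95% interval {lo:.0f} to {hi:.0f})")
```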

  18. Using the Enhanced Daily Load Stimulus Model to Quantify the Mechanical Load and Bone Mineral Density Changes Experienced by Crew Members on the International Space Station

    NASA Technical Reports Server (NTRS)

    Genc, K. O.; Gopalakrishnan, R.; Kuklis, M. M.; Maender, C. C.; Rice, A. J.; Cavanagh, P. R.

    2009-01-01

    Despite the use of exercise countermeasures during long-duration space missions, bone mineral density (BMD) and predicted bone strength of astronauts continue to show decreases in the lower extremities and spine. This site-specific bone adaptation is most likely caused by the effects of microgravity on the mechanical loading environment of the crew member. There is, therefore, a need to quantify the mechanical loading experienced on Earth and on-orbit to define the effect of a given "dose" of loading on bone homeostasis. Genc et al. recently proposed an enhanced DLS (EDLS) model that, when used with entire days of in-shoe forces, takes into account recently developed theories on the importance of factors such as saturation, recovery, and standing and their effects on the osteogenic response of bone to daily physical activity. This algorithm can also quantify the timing and type of activity (sit/unload, stand, walk, run or other loaded activity) performed throughout the day. The purpose of the current study was to use in-shoe force measurements from entire typical work days on Earth and on-orbit in order to quantify the type and amount of loading experienced by crew members. The specific aim was to use these measurements as inputs into the EDLS model to determine activity timing/type and the mechanical "dose" imparted on the musculoskeletal system of crew members and relate this dose to changes in bone homeostasis.

  19. High-performance liquid chromatography analysis methods developed for quantifying enzymatic esterification of flavonoids in ionic liquids.

    PubMed

    Lue, Bena-Marie; Guo, Zheng; Xu, Xuebing

    2008-07-11

    Methods using reversed-phase high-performance liquid chromatography (RP-HPLC) with evaporative light scattering detection (ELSD) were investigated to quantify enzymatic reactions of flavonoids with fatty acids in the presence of diverse room temperature ionic liquids (RTILs). A buffered salt (preferably triethylamine-acetate) was found essential for separation of flavonoids from strongly polar RTILs, whereby RTILs were generally visible as two major peaks identified based on an ion-pairing/exchanging hypothesis. C8 and C12 stationary phases were optimal, while mobile phase pH (3-7) had only a minor influence on separation. The method developed was successfully applied for primary screening of RTILs (>20), with in-depth evaluation of substrates in 10 RTILs, to assess their suitability as reaction media.

  20. Quantifying the Incoming Jet Past Heart Valve Prostheses Using Vortex Formation Dynamics

    NASA Astrophysics Data System (ADS)

    Pierrakos, Olga

    2005-11-01

    Heart valve (HV) replacement prostheses are associated with hemodynamic compromises compared to their native counterparts. Traditionally, HV performance and hemodynamics have been quantified using effective orifice size and pressure gradients. However, quality and direction of flow are also important aspects of HV function and relate to HV design, implantation technique, and orientation. The flow past any HV is governed by the generation of shear layers followed by the formation and shedding of organized flow structures in the form of vortex rings (VR). For the first time, vortex formation (VF) in the left ventricle (LV) is quantified. Vortex energy measurements allow for calculation of the critical formation number (FN), which is the time at which the VR reaches its maximum strength. Inefficiencies in HV function result in a decrease of the critical FN. This study uses the concept of FN to compare mitral HV prostheses in an in-vitro model (a silicone LV model housed in a piston-driven heart simulator) using Time-resolved Digital Particle Image Velocimetry. Two HVs were studied: a porcine HV and a bileaflet mechanical HV (MHV), the latter tested in both an anatomic and a non-anatomic orientation. The results suggest that HV orientation and design affect the critical FN. We propose that the critical FN, which is contingent on the HV design, orientation, and physical flow characteristics, serves as a parameter to quantify the incoming jet and the efficiency of the HV.
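
    The formation number above is defined on the dimensionless vortex formation time, i.e. the jet velocity integrated over time and scaled by the orifice diameter; the sketch below computes that quantity for a synthetic transmitral velocity trace (the waveform, duration and orifice size are illustrative, and locating the critical FN from vortex energy is not shown).

```python
# Minimal sketch of the dimensionless vortex formation time T* = (1/D) * int U dt,
# the quantity on which the critical formation number is defined; the velocity
# trace and orifice diameter are synthetic placeholders.
import numpy as np

def formation_time(u, t, d_orifice):
    """Cumulative formation time from a jet velocity trace u(t) and orifice diameter D."""
    cum = np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (u[1:] + u[:-1]))))
    return cum / d_orifice                       # trapezoidal integral of U dt, over D

t = np.linspace(0.0, 0.4, 200)                   # filling period, s (synthetic)
u = 0.6 * np.sin(np.pi * t / 0.4)                # transmitral jet velocity, m/s (synthetic)
tstar = formation_time(u, t, d_orifice=0.025)    # 25 mm effective orifice (assumed)
print(f"formation time at end of filling: {tstar[-1]:.1f}")
```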

  1. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model.

    PubMed

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-08-16

    A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The degree of homogeneity of the luminance distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was further generalized and applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced for demonstrating the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize.
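
    The two ingredients named above, a linear model of the luminance field and an F-test of its spatial trend, can be illustrated with a small sketch; the synthetic luminance image, the plane model and the uniformity criterion below are simplifications of the paper's procedure.

```python
# Minimal sketch: fit a linear (planar) model of luminance over image coordinates,
# report the residual error variance (the proposed index) and the regression
# F-test of spatial non-uniformity. The luminance image is synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
h, w = 64, 64
yy, xx = np.mgrid[:h, :w]
lum = 100.0 + 0.05 * xx + rng.normal(0.0, 2.0, (h, w))   # slight left-right gradient

X = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
coef, rss, _, _ = np.linalg.lstsq(X, lum.ravel(), rcond=None)
rss = float(rss[0])
tss = float(np.sum((lum - lum.mean()) ** 2))
df_model, df_resid = X.shape[1] - 1, h * w - X.shape[1]
error_variance = rss / df_resid                           # estimate of the error variance
F = ((tss - rss) / df_model) / error_variance
p = stats.f.sf(F, df_model, df_resid)
print(f"error variance {error_variance:.2f}, F = {F:.1f}, p = {p:.3g}")
```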

  2. Quantifying the evolution of flow boiling bubbles by statistical testing and image analysis: toward a general model

    PubMed Central

    Xiao, Qingtai; Xu, Jianxin; Wang, Hua

    2016-01-01

    A new index, the estimate of the error variance, was proposed to quantify the evolution of flow patterns when multiphase components or tracers are difficult to distinguish. The degree of homogeneity of the luminance distribution behind the viewing windows in the direct contact boiling heat transfer process was explored. With image analysis and a linear statistical model, the F-test of the statistical analysis was used to test whether the light was uniform, and a non-linear method was used to determine the direction and position of a fixed source light. The experimental results showed that the inflection point of the new index was approximately equal to the mixing time. The new index was further generalized and applied to a multiphase macro-mixing process by top blowing in a stirred tank. Moreover, a general quantifying model was introduced for demonstrating the relationship between the flow patterns of the bubble swarms and heat transfer. The results can be applied to investigate other mixing processes in which the target is very difficult to recognize. PMID:27527065

  3. Quantifying Variation in Gait Features from Wearable Inertial Sensors Using Mixed Effects Models

    PubMed Central

    Cresswell, Kellen Garrison; Shin, Yongyun; Chen, Shanshan

    2017-01-01

    The emerging technology of wearable inertial sensors has shown its advantages in collecting continuous longitudinal gait data outside laboratories. This freedom also presents challenges in collecting high-fidelity gait data. In the free-living environment, without constant supervision from researchers, sensor-based gait features are susceptible to variation from confounding factors such as gait speed and mounting uncertainty, which are challenging to control or estimate. This paper is one of the first attempts in the field to tackle such challenges using statistical modeling. By accepting the uncertainties and variation associated with wearable sensor-based gait data, we shift our efforts from detecting and correcting those variations to modeling them statistically. From gait data collected on one healthy, non-elderly subject during 48 full-factorial trials, we identified four major sources of variation, and quantified their impact on one gait outcome—range per cycle—using a random effects model and a fixed effects model. The methodology developed in this paper lays the groundwork for a statistical framework to account for sources of variation in wearable gait data, thus facilitating informative statistical inference for free-living gait analysis. PMID:28245602
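
    A random-effects decomposition of the kind described can be fit with standard mixed-model tooling; the sketch below uses statsmodels on synthetic data with one fixed effect (walking speed) and one random factor (trial), which stand in for, but are not, the study's 48-trial full-factorial design.

```python
# Minimal sketch of a linear mixed-effects model quantifying between-trial
# variation in a gait outcome; the synthetic data and factor names are
# illustrative, not the study's design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
trials = np.repeat(np.arange(12), 20)                    # 12 trials, 20 gait cycles each
speed = rng.choice([0.8, 1.0, 1.2], size=trials.size)    # walking speed (fixed effect)
trial_effect = rng.normal(0.0, 5.0, 12)[trials]          # between-trial random effect
range_per_cycle = 60.0 + 15.0 * speed + trial_effect + rng.normal(0.0, 2.0, trials.size)

df = pd.DataFrame({"range_per_cycle": range_per_cycle, "speed": speed, "trial": trials})
fit = smf.mixedlm("range_per_cycle ~ speed", df, groups=df["trial"]).fit()
print(fit.summary())    # fixed-effect slope plus the between-trial variance component
```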

  4. Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation

    PubMed Central

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types. PMID:20640044

  5. Modeling the Energy Performance of LoRaWAN

    PubMed Central

    2017-01-01

    LoRaWAN is a flagship Low-Power Wide Area Network (LPWAN) technology that has attracted much attention from the community in recent years. Many LoRaWAN end-devices, such as sensors or actuators, are expected not to be powered by the electricity grid; therefore, it is crucial to investigate the energy consumption of LoRaWAN. However, published works have only focused on this topic to a limited extent. In this paper, we present analytical models that allow the characterization of LoRaWAN end-device current consumption, lifetime and energy cost of data delivery. The models, which have been derived based on measurements on a currently prevalent LoRaWAN hardware platform, allow us to quantify the impact of relevant physical and Medium Access Control (MAC) layer LoRaWAN parameters and mechanisms, as well as Bit Error Rate (BER) and collisions, on energy performance. Among others, evaluation results show that an appropriately configured LoRaWAN end-device platform powered by a battery of 2400 mAh can achieve a 1-year lifetime while sending one message every 5 min, and an asymptotic theoretical lifetime of 6 years for infrequent communication. PMID:29035347
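
    Lifetime figures like the ones quoted above follow from an average-current model: charge spent in each radio state of a transmission cycle plus sleep, divided by the notification period; the sketch below reproduces that bookkeeping with placeholder state currents and durations, not the measured values from the paper.

```python
# Minimal sketch of an average-current lifetime model for a duty-cycled LoRaWAN
# end-device; all state currents, durations and the sleep current are placeholders.
def lifetime_years(active_states, notification_period_s, battery_mAh, sleep_mA):
    """active_states: list of (current_mA, duration_s) for one transmission cycle."""
    active_charge_mAs = sum(i * t for i, t in active_states)
    active_time_s = sum(t for _, t in active_states)
    sleep_charge_mAs = sleep_mA * (notification_period_s - active_time_s)
    avg_current_mA = (active_charge_mAs + sleep_charge_mAs) / notification_period_s
    lifetime_s = battery_mAh * 3600.0 / avg_current_mA
    return lifetime_s / (3600.0 * 24.0 * 365.0)

# Hypothetical cycle: wake-up, transmit, and two receive windows, then sleep.
cycle = [(22.0, 0.05), (128.0, 0.06), (38.0, 0.16), (38.0, 0.03)]
print(lifetime_years(cycle, notification_period_s=300.0, battery_mAh=2400.0, sleep_mA=0.05))
```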

  6. Modeling the Energy Performance of LoRaWAN.

    PubMed

    Casals, Lluís; Mir, Bernat; Vidal, Rafael; Gomez, Carles

    2017-10-16

    LoRaWAN is a flagship Low-Power Wide Area Network (LPWAN) technology that has attracted much attention from the community in recent years. Many LoRaWAN end-devices, such as sensors or actuators, are expected not to be powered by the electricity grid; therefore, it is crucial to investigate the energy consumption of LoRaWAN. However, published works have only focused on this topic to a limited extent. In this paper, we present analytical models that allow the characterization of LoRaWAN end-device current consumption, lifetime and energy cost of data delivery. The models, which have been derived based on measurements on a currently prevalent LoRaWAN hardware platform, allow us to quantify the impact of relevant physical and Medium Access Control (MAC) layer LoRaWAN parameters and mechanisms, as well as Bit Error Rate (BER) and collisions, on energy performance. Among others, evaluation results show that an appropriately configured LoRaWAN end-device platform powered by a battery of 2400 mAh can achieve a 1-year lifetime while sending one message every 5 min, and an asymptotic theoretical lifetime of 6 years for infrequent communication.

  7. APG: an Active Protein-Gene network model to quantify regulatory signals in complex biological systems.

    PubMed

    Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan

    2013-01-01

    Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called the Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks through integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing it with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. The APG model provides a theoretical basis for quantitatively elucidating transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information.

  8. APG: an Active Protein-Gene Network Model to Quantify Regulatory Signals in Complex Biological Systems

    PubMed Central

    Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan

    2013-01-01

    Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called the Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks through integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing it with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. The APG model provides a theoretical basis for quantitatively elucidating transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information. PMID:23346354

  9. Quantifying the Global Nitrous Oxide Emissions Using a Trait-based Biogeochemistry Model

    NASA Astrophysics Data System (ADS)

    Zhuang, Q.; Yu, T.

    2017-12-01

    Nitrogen is an essential element for the global biogeochemical cycle. It is a key nutrient for organisms, and N compounds including nitrous oxide significantly influence the global climate. The activities of bacteria and archaea are responsible for nitrification and denitrification in a wide variety of environments, so microbes play an important role in the nitrogen cycle in soils. To date, most existing process-based models have treated nitrification and denitrification as chemical reactions driven by soil physical variables including soil temperature and moisture. In general, the effect of microbes on N cycling has not been modeled in sufficient detail. Soil organic carbon also affects the N cycle because it supplies energy to microbes. In this study, a trait-based biogeochemistry model quantifying N2O emissions from terrestrial ecosystems is developed based on an extant process-based model, the Terrestrial Ecosystem Model (TEM). Specifically, the improvements to TEM include: 1) incorporating the N fixation process to account for the inflow of N from the atmosphere to the biosphere; 2) implementing the effects of microbial dynamics on the nitrification process; 3) fully considering the effects of carbon cycling on nitrogen cycling, following the principles of carbon and nitrogen stoichiometry in soils, plants, and microbes. The difference between simulations with and without the consideration of bacterial activity lies between 5% and 25%, depending on climate conditions and vegetation types. The trait-based module allows a more detailed estimation of global N2O emissions.

  10. Quantifying Biofilm in Porous Media Using Rock Physics Models

    NASA Astrophysics Data System (ADS)

    Alhadhrami, F. M.; Jaiswal, P.; Atekwana, E. A.

    2012-12-01

    Biofilm formation and growth in porous rocks can change their material properties, such as porosity and permeability, which in turn will impact fluid flow. Finding a non-intrusive method to quantify biofilms and their byproducts in rocks is key to understanding and modeling bioclogging in porous media. Previous geophysical investigations have documented that seismic techniques are sensitive to biofilm growth. These studies pointed to the fact that microbial growth and biofilm formation induces heterogeneity in the seismic properties. Currently there are no rock physics models to explain these observations and to provide quantitative interpretation of the seismic data. Our objectives are to develop a new class of rock physics model that incorporates microbial processes and their effect on seismic properties. Using the assumption that biofilms can grow within pore spaces or as a layer coating the mineral grains, P-wave velocity (Vp) and S-wave velocity (Vs) models were constructed using travel-time and waveform tomography techniques. We used generic rock physics schematics to represent our rock system numerically. We simulated the arrival times as well as waveforms by treating biofilms either as fluid (filling pore spaces) or as part of the matrix (coating sand grains). The preliminary results showed that there is a 1% change in Vp and 3% change in Vs when biofilms are represented as discrete structures in pore spaces. On the other hand, a 30% change in Vp and 100% change in Vs was observed when biofilm was represented as part of the matrix coating sand grains. Therefore, Vp and Vs changes are more rapid when biofilm grows as a grain-coating phase. The significant change in Vs associated with biofilms suggests that shear velocity can be used as a diagnostic tool for imaging zones of bioclogging in the subsurface. The results obtained from this study have significant implications for the study of the rheological properties of biofilms in geological media. Other applications include

  11. Quantifying Forest Ecosystem Services Tradeoff—Coupled Ecological and Economic Models

    NASA Astrophysics Data System (ADS)

    Haff, P. K.; Ling, P. Y.

    2015-12-01

    Quantification of the effect of carbon-related forestland management activities on ecosystem services is difficult, because knowledge about the dynamics of coupled social-ecological systems is lacking. Different forestland management activities, such as various amounts, timings, and methods of harvesting, and natural disturbance events, such as wind and fires, create shocks and uncertainties in the forest carbon dynamics. A spatially explicit model, Landis-ii, was used to model forest succession for different harvest management scenarios in the Grandfather District, North Carolina. In addition to harvest, the model takes into account the impact of natural disturbances, such as fire and insects, and species competition. The result shows the storage of carbon in standing biomass and in wood products for each species under each scenario. In this study, optimization is used to analyze the maximum profit that each forest landowner can gain, and the associated number of tree species, at different prices of carbon and roundwood and different interest rates for each harvest management scenario. Time series of roundwood production of different types were estimated using remote sensing data. Econometric analysis is done to understand the possible interactions and relations between the production of different types of roundwood and roundwood prices, which can indicate the planting decisions that a forest owner may make. This study quantifies the tradeoffs between carbon sequestration, roundwood production, and forest species diversity not only from an economic perspective, but also by taking into account the forest succession mechanism in a species-diverse region. The resulting economic impact on the forest landowners is likely to influence their future planting decisions, which in turn will influence the species composition and future revenue of the landowners.

  12. Quantifying the dilution effect for models in ecological epidemiology.

    PubMed

    Roberts, M G; Heesterbeek, J A P

    2018-03-01

    The dilution effect, where an increase in biodiversity results in a reduction in the prevalence of an infectious disease, has been the subject of speculation and controversy. Conversely, an amplification effect occurs when increased biodiversity is related to an increase in prevalence. We explore the conditions under which these effects arise, using multi-species compartmental models that integrate ecological and epidemiological interactions. We introduce three potential metrics for quantifying dilution and amplification, one based on infection prevalence in a focal host species, one based on the size of the infected subpopulation of that species and one based on the basic reproduction number. We introduce our approach in the simplest epidemiological setting with two species, and show that the existence and strength of a dilution effect is influenced strongly by the choices made to describe the system and the metric used to gauge the effect. We show that our method can be generalized to any number of species and to more complicated ecological and epidemiological dynamics. Our method allows a rigorous analysis of ecological systems where dilution effects have been postulated, and contributes to future progress in understanding the phenomenon of dilution in the context of infectious disease dynamics and infection risk. © 2018 The Author(s).
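
    One of the three metrics mentioned, a community basic reproduction number, can be computed as the spectral radius of a next-generation matrix; the two-host example below uses illustrative transmission, recovery and density values (not the paper's parameterization) and shows a case where adding a poorly competent host, at the cost of focal-host density, lowers R0 (dilution).

```python
# Minimal sketch of a community R0 metric: spectral radius of a next-generation
# matrix for an SI-type multi-host model. All rates and densities are illustrative.
import numpy as np

def community_r0(beta, gamma, N):
    """beta[i][j]: transmission from species j to i; gamma: recovery rates; N: densities."""
    K = np.asarray(beta, float) * np.asarray(N, float)[:, None] / np.asarray(gamma, float)[None, :]
    return float(max(abs(np.linalg.eigvals(K))))

# Focal host alone versus sharing the habitat with a poorly competent second host,
# with the focal host's density reduced by competition.
print(community_r0([[1.6]], [1.0], [1.0]))                             # ~1.6
print(community_r0([[1.6, 0.1], [0.1, 0.2]], [1.0, 1.0], [0.6, 0.4]))  # ~0.96: dilution
```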

  13. Quantifying predictability variations in a low-order ocean-atmosphere model - A dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Nese, Jon M.; Dutton, John A.

    1993-01-01

    The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
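
    The "average error doubling time" used above is ln 2 divided by the largest Liapunov exponent; the sketch below estimates that exponent for a low-order toy system (Lorenz-63, not the paper's coupled ocean-atmosphere model) by repeatedly renormalizing a small perturbation.

```python
# Minimal sketch: largest Liapunov exponent of a low-order system (Lorenz-63 as a
# stand-in) via perturbation renormalization, and the implied error doubling time.
import numpy as np

def lorenz(v, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def rk4_step(v, dt):
    k1 = lorenz(v); k2 = lorenz(v + 0.5 * dt * k1)
    k3 = lorenz(v + 0.5 * dt * k2); k4 = lorenz(v + dt * k3)
    return v + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

dt, n_steps, d0 = 0.01, 20000, 1e-8
v = np.array([1.0, 1.0, 1.0])
w = v + np.array([d0, 0.0, 0.0])
log_growth = 0.0
for _ in range(n_steps):                    # Benettin-style renormalization
    v, w = rk4_step(v, dt), rk4_step(w, dt)
    d = np.linalg.norm(w - v)
    log_growth += np.log(d / d0)
    w = v + (w - v) * (d0 / d)              # rescale the perturbation back to d0
lam = log_growth / (n_steps * dt)           # largest Liapunov exponent (~0.9 here)
print(f"error doubling time ~ {np.log(2) / lam:.2f} model time units")
```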

  14. Performance assessment of geospatial simulation models of land-use change--a landscape metric-based approach.

    PubMed

    Sakieh, Yousef; Salmanmahiny, Abdolrassoul

    2016-03-01

    Performance evaluation is a critical step when developing land-use and cover change (LUCC) models. The present study proposes a spatially explicit model performance evaluation method, adopting a landscape metric-based approach. To quantify GEOMOD model performance, a set of composition- and configuration-based landscape metrics, including number of patches, edge density, mean Euclidean nearest neighbor distance, largest patch index, class area, landscape shape index, and splitting index, was employed. The model takes advantage of three decision rules including neighborhood effect, persistence of change direction, and urbanization suitability values. According to the results, while class area, largest patch index, and splitting indices demonstrated insignificant differences between the spatial pattern of ground truth and simulated layers, there was a considerable inconsistency between simulation results and the real dataset in terms of the remaining metrics. Specifically, simulation outputs were simplistic and the model tended to underestimate the number of developed patches by producing a more compact landscape. Landscape-metric-based performance evaluation produces more detailed information (compared to conventional indices such as the Kappa index and overall accuracy) on the model's behavior in replicating spatial heterogeneity features of a landscape such as frequency, fragmentation, isolation, and density. Finally, as the main characteristic of the proposed method, landscape metrics exploit the maximum potential of observed and simulated layers in a performance evaluation procedure, provide a basis for more robust interpretation of a calibration process, and also deepen modeler insight into the main strengths and pitfalls of a specific land-use change model when simulating a spatiotemporal phenomenon.
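
    Two of the metrics listed above, number of patches and edge density, are simple to compute from a categorical raster; the sketch below does so for a random binary "developed" map with an assumed 30 m cell size, standing in for the ground-truth and simulated layers.

```python
# Minimal sketch: number of patches (4-connected) and edge density (m per hectare)
# for a binary land-use raster; the random map and cell size are illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
developed = rng.random((200, 200)) < 0.3            # stand-in "developed" class raster
cell = 30.0                                         # cell size in metres (assumed)

_, n_patches = ndimage.label(developed)             # default structure = 4-connectivity
edges = (np.count_nonzero(developed[:, 1:] != developed[:, :-1]) +
         np.count_nonzero(developed[1:, :] != developed[:-1, :]))
area_ha = developed.size * cell * cell / 1e4
edge_density = edges * cell / area_ha               # metres of class edge per hectare
print(f"patches: {n_patches}, edge density: {edge_density:.0f} m/ha")
```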

  15. Quantifying uncertainties of permafrost carbon-climate feedbacks

    NASA Astrophysics Data System (ADS)

    Burke, Eleanor J.; Ekici, Altug; Huang, Ye; Chadburn, Sarah E.; Huntingford, Chris; Ciais, Philippe; Friedlingstein, Pierre; Peng, Shushi; Krinner, Gerhard

    2017-06-01

    The land surface models JULES (Joint UK Land Environment Simulator, two versions) and ORCHIDEE-MICT (Organizing Carbon and Hydrology in Dynamic Ecosystems), each with a revised representation of permafrost carbon, were coupled to the Integrated Model Of Global Effects of climatic aNomalies (IMOGEN) intermediate-complexity climate and ocean carbon uptake model. IMOGEN calculates atmospheric carbon dioxide (CO2) and local monthly surface climate for a given emission scenario with the land-atmosphere CO2 flux exchange from either JULES or ORCHIDEE-MICT. These simulations include feedbacks associated with permafrost carbon changes in a warming world. Both IMOGEN-JULES and IMOGEN-ORCHIDEE-MICT were forced by historical and three alternative future-CO2-emission scenarios. Those simulations were performed for different climate sensitivities and regional climate change patterns based on 22 different Earth system models (ESMs) used for CMIP3 (phase 3 of the Coupled Model Intercomparison Project), allowing us to explore climate uncertainties in the context of permafrost carbon-climate feedbacks. Three future emission scenarios consistent with three representative concentration pathways were used: RCP2.6, RCP4.5 and RCP8.5. Paired simulations with and without frozen carbon processes were required to quantify the impact of the permafrost carbon feedback on climate change. The additional warming from the permafrost carbon feedback is between 0.2 and 12 % of the change in the global mean temperature (ΔT) by the year 2100 and 0.5 and 17 % of ΔT by 2300, with these ranges reflecting differences in land surface models, climate models and emissions pathway. As a percentage of ΔT, the permafrost carbon feedback has a greater impact on the low-emissions scenario (RCP2.6) than on the higher-emissions scenarios, suggesting that permafrost carbon should be taken into account when evaluating scenarios of heavy mitigation and stabilization. Structural differences between the land

  16. Quantifying Uncertainty in Model Predictions for the Pliocene (Plio-QUMP): Initial results

    USGS Publications Warehouse

    Pope, J.O.; Collins, M.; Haywood, A.M.; Dowsett, H.J.; Hunter, S.J.; Lunt, D.J.; Pickering, S.J.; Pound, M.J.

    2011-01-01

    Examination of the mid-Pliocene Warm Period (mPWP; ~3.3 to 3.0 Ma BP) provides an excellent opportunity to test the ability of climate models to reproduce warm climate states, thereby assessing our confidence in model predictions. To do this it is necessary to relate the uncertainty in model simulations of mPWP climate to uncertainties in projections of future climate change. The uncertainties introduced by the model can be estimated through the use of a Perturbed Physics Ensemble (PPE). Building on the UK Met Office Quantifying Uncertainty in Model Predictions (QUMP) Project, this paper presents the results from an initial investigation using the end members of a PPE in a fully coupled atmosphere-ocean model (HadCM3) running with appropriate mPWP boundary conditions. Prior work has shown that the unperturbed version of HadCM3 may underestimate mPWP sea surface temperatures at higher latitudes. Initial results indicate that neither the low sensitivity nor the high sensitivity simulations produce unequivocally improved mPWP climatology relative to the standard. Whilst the high sensitivity simulation was able to reconcile up to 6 °C of the data/model mismatch in sea surface temperatures in the high latitudes of the Northern Hemisphere (relative to the standard simulation), it did not produce a better prediction of global vegetation than the standard simulation. Overall the low sensitivity simulation was degraded compared to the standard and high sensitivity simulations in all aspects of the data/model comparison. The results have shown that a PPE has the potential to explore weaknesses in mPWP modelling simulations which have been identified by geological proxies, but that a 'best fit' simulation will more likely come from a full ensemble in which simulations that contain the strengths of the two end member simulations shown here are combined. © 2011 Elsevier B.V.

  17. A mass-balance model to separate and quantify colloidal and solute redistributions in soil

    USGS Publications Warehouse

    Bern, C.R.; Chadwick, O.A.; Hartshorn, A.S.; Khomo, L.M.; Chorover, J.

    2011-01-01

    Studies of weathering and pedogenesis have long used calculations based upon low solubility index elements to determine mass gains and losses in open systems. One of the questions currently unanswered in these settings is the degree to which mass is transferred in solution (solutes) versus suspension (colloids). Here we show that differential mobility of the low solubility, high field strength (HFS) elements Ti and Zr can trace colloidal redistribution, and we present a model for distinguishing between mass transfer in suspension and solution. The model is tested on a well-differentiated granitic catena located in Kruger National Park, South Africa. Ti and Zr ratios from parent material, soil and colloidal material are substituted into a mixing equation to quantify colloidal movement. The results show zones of both colloid removal and augmentation along the catena. Colloidal losses of 110 kg m-2 (-5% relative to parent material) are calculated for one eluviated soil profile. A downslope illuviated profile has gained 169 kg m-2 (10%) colloidal material. Elemental losses by mobilization in true solution are ubiquitous across the catena, even in zones of colloidal accumulation, and range from 1418 kg m-2 (-46%) for an eluviated profile to 195 kg m-2 (-23%) at the bottom of the catena. Quantification of simultaneous mass transfers in solution and suspension provides greater specificity on processes within soils and across hillslopes. Additionally, because colloids include both HFS and other elements, the ability to quantify their redistribution has implications for standard calculations of soil mass balances using such index elements. © 2011.
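
    The kind of calculation involved can be illustrated with the standard open-system mass-transfer coefficient (tau, referenced to an immobile index element) together with a simple two-end-member mixing of Ti/Zr ratios; the sketch below is a simplified stand-in for the authors' model, and all concentrations and ratios are placeholders.

```python
# Minimal sketch, not the authors' exact formulation: the open-system mass-transfer
# coefficient tau and a two-end-member Ti/Zr mixing fraction. Values are placeholders.
def tau(c_j_soil, c_j_parent, c_i_soil, c_i_parent):
    """Fractional gain (+) or loss (-) of mobile element j relative to immobile element i."""
    return (c_j_soil / c_j_parent) * (c_i_parent / c_i_soil) - 1.0

def colloid_fraction(ratio_soil, ratio_parent, ratio_colloid):
    """Mixing fraction placing the soil Ti/Zr ratio between parent and colloid end members."""
    return (ratio_soil - ratio_parent) / (ratio_colloid - ratio_parent)

print(tau(c_j_soil=1.2, c_j_parent=2.0, c_i_soil=450.0, c_i_parent=300.0))        # -0.6
print(colloid_fraction(ratio_soil=21.0, ratio_parent=18.0, ratio_colloid=30.0))   # 0.25
```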

  18. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.

  19. Evaluation of three inverse problem models to quantify skin microcirculation using diffusion-weighted MRI

    NASA Astrophysics Data System (ADS)

    Cordier, G.; Choi, J.; Raguin, L. G.

    2008-11-01

    Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least-squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte-Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.

  20. Quantifying Post- Laser Ablation Prostate Therapy Changes on MRI via a Domain-Specific Biomechanical Model: Preliminary Findings

    PubMed Central

    Toth, Robert; Sperling, Dan; Madabhushi, Anant

    2016-01-01

    Focal laser ablation destroys cancerous cells via thermal destruction of tissue by a laser. Heat is absorbed, causing thermal necrosis of the target region. It combines the aggressive benefits of radiation treatment (destroying cancer cells) without the harmful side effects (due to its precise localization). MRI is typically used pre-treatment to determine the targeted area, and post-treatment to determine efficacy by detecting necrotic tissue, or tumor recurrence. However, no system exists to quantitatively evaluate the post-treatment effects on the morphology and structure via MRI. To quantify these changes, the pre- and post-treatment MR images must first be spatially aligned. The goal is to quantify (a) laser-induced shape-based changes, and (b) changes in MRI parameters post-treatment. The shape-based changes may be correlated with treatment efficacy, and the quantitative effects of laser treatment over time are currently poorly understood. This work attempts to model changes in gland morphology following laser treatment due to (1) patient alignment, (2) changes due to surrounding organs such as the bladder and rectum, and (3) changes due to the treatment itself. To isolate the treatment-induced shape-based changes, the changes from (1) and (2) are first modeled and removed using a finite element model (FEM). A FEM models the physical properties of tissue. The use of a physical biomechanical model is important since a stated goal of this work is to determine the physical shape-based changes to the prostate from the treatment, and therefore only physical, real deformations are to be allowed. A second FEM is then used to isolate the physical, shape-based, treatment-induced changes. We applied and evaluated our model in capturing the laser-induced changes to prostate morphology in eight patients with 3.0 Tesla, T2-weighted MRI, acquired approximately six months following treatment. Our results suggest the laser treatment causes a decrease in prostate volume

  1. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeffrey C. Joe; Ronald L. Boring

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance-based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial nuclear power plants (NPPs) in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating human error probabilities (HEPs). This paper describes research with the objective of modeling and quantifying team dynamics and teamwork within NPP control room crews for risk-informed applications, thereby improving the technical basis of HRA, which in turn improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  2. Quantifying the ice-albedo feedback through decoupling

    NASA Astrophysics Data System (ADS)

    Kravitz, B.; Rasch, P. J.

    2017-12-01

    The ice-albedo feedback involves numerous individual components: warming induces sea ice melt, which reduces surface albedo, which in turn increases surface shortwave absorption and causes further warming. Here we attempt to quantify the sea ice albedo feedback using an analogue of the "partial radiative perturbation" method, but where the governing mechanisms are directly decoupled in a climate model. As an example, we can isolate the insulating effects of sea ice on surface energy and moisture fluxes by allowing sea ice thickness to change but fixing Arctic surface albedo, or vice versa. Here we present results from such idealized simulations using the Community Earth System Model in which individual components are successively fixed, effectively decoupling the ice-albedo feedback loop. We isolate the different components of this feedback, including temperature change, sea ice extent/thickness, and air-sea exchange of heat and moisture. We explore the interactions between these different components, as well as the strengths of the total feedback in the decoupled feedback loop, to quantify contributions from individual pieces. We also quantify the non-additivity of the effects of the components as a means of investigating the dominant sources of nonlinearity in the ice-albedo feedback.

  3. On the use of musculoskeletal models to interpret motor control strategies from performance data

    NASA Astrophysics Data System (ADS)

    Cheng, Ernest J.; Loeb, Gerald E.

    2008-06-01

    The intrinsic viscoelastic properties of muscle are central to many theories of motor control. Much of the debate over these theories hinges on varying interpretations of these muscle properties. In the present study, we describe methods whereby a comprehensive musculoskeletal model can be used to make inferences about motor control strategies that would account for behavioral data. Muscle activity and kinematic data from a monkey were recorded while the animal performed a single degree-of-freedom pointing task in the presence of pseudo-random torque perturbations. The monkey's movements were simulated by a musculoskeletal model with accurate representations of musculotendon morphometry and contractile properties. The model was used to quantify the impedance of the limb while moving rapidly, the differential action of synergistic muscles, the relative contribution of reflexes to task performance and the completeness of recorded EMG signals. Current methods to address these issues in the absence of musculoskeletal models were compared with the methods used in the present study. We conclude that musculoskeletal models and kinetic analysis can improve the interpretation of kinematic and electrophysiological data, in some cases by illuminating shortcomings of the experimental methods or underlying assumptions that may otherwise escape notice.

  4. Quantifying Scheduling Challenges for Exascale System Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R

    2015-01-01

    The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.

  5. Quantifying the net social benefits of vehicle trip reductions : guidance for customizing the TRIMMS model, final draft report, June 2009.

    DOT National Transportation Integrated Search

    2009-04-01

    This study details the development of a series of enhancements to the Trip Reduction Impacts of Mobility Management Strategies (TRIMMS) model. TRIMMS allows quantifying the net social benefits of a wide range of transportation demand management...

  6. Developing Data-driven models for quantifying Cochlodinium polykrikoides in Coastal Waters

    NASA Astrophysics Data System (ADS)

    Kwon, Yongsung; Jang, Eunna; Im, Jungho; Baek, Seungho; Park, Yongeun; Cho, Kyunghwa

    2017-04-01

    Harmful algal blooms are a worldwide problem because they pose serious dangers to human health and aquatic ecosystems. In particular, fish-killing red tide blooms of the dinoflagellate Cochlodinium polykrikoides (C. polykrikoides) have caused critical damage to mariculture in Korean coastal waters. In this work, multiple linear regression (MLR), regression tree (RT), and random forest (RF) models were constructed and applied to estimate C. polykrikoides blooms in coastal waters. Five different types of input datasets were tested to assess the performance of the three models. To train and validate the three models, observed numbers of C. polykrikoides cells from the National Institute of Fisheries Science (NIFS) and remote sensing reflectance data from Geostationary Ocean Color Imager (GOCI) images for the 3 years from 2013 to 2015 were used. The RT model showed the best prediction performance when 4 bands and 3 band ratios were used as input data simultaneously. Results obtained from iterative model development with randomly chosen input data indicated that the recognition of patterns in the training data caused variation in prediction performance. This work provides useful tools for reliably estimating the number of C. polykrikoides cells from a reasonable set of input reflectance data in coastal waters. It is expected that the RT model can be easily accessed and manipulated by administrators and decision-makers working with coastal waters.
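
    As a sketch of the workflow described (three regression models trained on band reflectances and band ratios), the snippet below fits MLR, RT, and RF regressors with scikit-learn; the synthetic data standing in for GOCI reflectances and NIFS cell counts, and the feature layout, are illustrative assumptions.

```python
# Sketch: MLR vs regression tree vs random forest for estimating cell abundance
# from 4 band reflectances + 3 band ratios (synthetic stand-in data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 300
bands = rng.uniform(0.001, 0.02, size=(n, 4))          # Rrs at 4 bands (sr^-1)
ratios = np.column_stack([bands[:, 0] / bands[:, 1],
                          bands[:, 1] / bands[:, 2],
                          bands[:, 2] / bands[:, 3]])
X = np.hstack([bands, ratios])
y = 5000 * ratios[:, 0] + 200 * bands[:, 2] * 1e3 + rng.normal(0, 300, n)  # cells/mL (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {"MLR": LinearRegression(),
          "RT":  DecisionTreeRegressor(max_depth=5, random_state=0),
          "RF":  RandomForestRegressor(n_estimators=200, random_state=0)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "validation R^2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```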

  7. Quantifying the Performances of DFT for Predicting Vibrationally Resolved Optical Spectra: Asymmetric Fluoroborate Dyes as Working Examples.

    PubMed

    Bednarska, Joanna; Zaleśny, Robert; Bartkowiak, Wojciech; Ośmiałowski, Borys; Medved', Miroslav; Jacquemin, Denis

    2017-09-12

    This article aims at a quantitative assessment of the performances of a panel of exchange-correlation functionals, including semilocal (BLYP and PBE), global hybrids (B3LYP, PBE0, M06, BHandHLYP, M06-2X, and M06-HF), and range-separated hybrids (CAM-B3LYP, LC-ωPBE, LC-BLYP, ωB97X, and ωB97X-D), in predicting the vibrationally resolved absorption spectra of BF2-carrying compounds. To this end, for 19 difluoroborates as examples, we use, as a metric, the vibrational reorganization energy (λvib) that can be determined based on the computationally efficient linear coupling model (a.k.a. vertical gradient method). The reference values of λvib were determined by employing the CC2 method combined with the cc-pVTZ basis set for a representative subset of molecules. To validate the performances of CC2, comparisons with experimental data have been carried out as well. This study shows that the vibrational reorganization energy, involving Huang-Rhys factors and normal-mode frequencies, can indeed be used to quantify the reliability of functionals in the calculations of the vibrational fine structure of absorption bands, i.e., an accurate prediction of the vibrational reorganization energy leads to absorption band shapes better fitting the selected reference. The CAM-B3LYP, M06-2X, ωB97X-D, ωB97X, and BHandHLYP functionals all deliver vibrational reorganization energies with absolute relative errors smaller than 20% compared to CC2, whereas 10% accuracy can be achieved with the first three functionals. Indeed, the set of examined exchange-correlation functionals can be divided into three groups: (i) BLYP, B3LYP, PBE, PBE0, and M06 yield inaccurate band shapes (λvib,TDDFT < λvib,CC2), (ii) BHandHLYP, CAM-B3LYP, M06-2X, ωB97X, and ωB97X-D provide accurate band shapes (λvib,TDDFT ≈ λvib,CC2), and (iii) LC-ωPBE, LC-BLYP, and M06-HF deliver rather poor band topologies (λvib,TDDFT > λvib,CC2). This study also demonstrates that λvib can be reliably

  8. Railroad Performance Model

    DOT National Transportation Integrated Search

    1977-10-01

    This report describes an operational, though preliminary, version of the Railroad Performance Model, which is a computer simulation model of the nation's railroad system. The ultimate purpose of this model is to predict the effect of changes in gover...

  9. Quantifying uncertainty in climate change science through empirical information theory.

    PubMed

    Majda, Andrew J; Gershgorin, Boris

    2010-08-24

    Quantifying the uncertainty for the present climate and the predictions of climate change in the suite of imperfect Atmosphere Ocean Science (AOS) computer models is a central issue in climate change science. Here, a systematic approach to these issues with firm mathematical underpinning is developed through empirical information theory. An information metric to quantify AOS model errors in the climate is proposed here which incorporates both coarse-grained mean model errors as well as covariance ratios in a transformation invariant fashion. The subtle behavior of model errors with this information metric is quantified in an instructive statistically exactly solvable test model with direct relevance to climate change science including the prototype behavior of tracer gases such as CO(2). Formulas for identifying the most sensitive climate change directions using statistics of the present climate or an AOS model approximation are developed here; these formulas just involve finding the eigenvector associated with the largest eigenvalue of a quadratic form computed through suitable unperturbed climate statistics. These climate change concepts are illustrated on a statistically exactly solvable one-dimensional stochastic model with relevance for low frequency variability of the atmosphere. Viable algorithms for implementation of these concepts are discussed throughout the paper.
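
    The information metric described (coarse-grained mean model errors plus covariance ratios, invariant under linear changes of variables) has the same structure as the relative entropy between two Gaussian distributions. The function below computes that standard Gaussian form as an illustration of the signal/dispersion decomposition, not the paper's exact operator.

```python
# Sketch: relative entropy (KL divergence) between Gaussian climate statistics,
# split into a "signal" (mean error) part and a "dispersion" (covariance) part.
import numpy as np

def gaussian_relative_entropy(mu_true, cov_true, mu_model, cov_model):
    """Relative entropy of the model Gaussian with respect to the true Gaussian."""
    k = mu_true.size
    cov_model_inv = np.linalg.inv(cov_model)
    diff = mu_model - mu_true
    signal = 0.5 * diff @ cov_model_inv @ diff
    dispersion = 0.5 * (np.trace(cov_model_inv @ cov_true) - k
                        + np.log(np.linalg.det(cov_model) / np.linalg.det(cov_true)))
    return signal, dispersion

mu_t, cov_t = np.array([0.0, 0.0]), np.array([[1.0, 0.3], [0.3, 1.0]])
mu_m, cov_m = np.array([0.2, -0.1]), np.array([[1.2, 0.1], [0.1, 0.9]])
s, d = gaussian_relative_entropy(mu_t, cov_t, mu_m, cov_m)
print("signal =", round(s, 4), "dispersion =", round(d, 4), "total =", round(s + d, 4))
```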

  10. Quantifying falsifiability of scientific theories

    NASA Astrophysics Data System (ADS)

    Nemenman, Ilya

    I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
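
    A minimal illustration of Bayesian model selection as a quantitative Occam's razor: below, two candidate models are compared via the Bayesian information criterion, used here only as a crude approximation to the log marginal likelihood; the data and the candidate models are purely illustrative.

```python
# Sketch: compare a constrained model vs a flexible model with BIC,
# a rough stand-in for the Bayes factor (lower BIC ~ higher evidence).
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)   # data actually generated by a line

def bic(y, y_hat, n_params):
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

# Model A: straight line (2 parameters); Model B: degree-9 polynomial (10 parameters)
for name, deg in [("line", 1), ("poly9", 9)]:
    coeffs = np.polyfit(x, y, deg)
    y_hat = np.polyval(coeffs, x)
    print(name, "BIC =", round(bic(y, y_hat, deg + 1), 2))
```

    The tightly constrained model wins unless the data genuinely demand the extra flexibility, which is one way of seeing how model selection penalizes hard-to-falsify, highly flexible theories.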

  11. Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components

    NASA Astrophysics Data System (ADS)

    Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.

    Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of OEM specification. To test every possible combination of component, level of potential damage, and repair processing option would be an expensive and time-consuming feat, prohibiting broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in the remanufacturing of high-value, high-demand rotorcraft, automotive, and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation efforts, comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability, and cost by predicting (1) new gearbox component performance and the optimal time to remanufacture, (2) the qualification of used gearbox components for the remanufacturing process, and (3) the performance of remanufactured components.

  12. Clinical laboratory as an economic model for business performance analysis.

    PubMed

    Buljanović, Vikica; Patajac, Hrvoje; Petrovecki, Mladen

    2011-08-15

    To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. The impact of possible threats and weaknesses on the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit, as a measure of the profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was performed by varying the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results of the simulation models showed that the operating profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operating profit could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operating profit would decrease by €384 465. The operating profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operating profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor the business operations of any clinical laboratory and improve its financial situation by implementing changes in the next fiscal

  13. The use of modeling and suspended sediment concentration measurements for quantifying net suspended sediment transport through a large tidally dominated inlet

    USGS Publications Warehouse

    Erikson, Li H.; Wright, Scott A.; Elias, Edwin; Hanes, Daniel M.; Schoellhamer, David H.; Largier, John; Barnard, P.L.; Jaffee, B.E.; Schoellhamer, D.H.

    2013-01-01

    Sediment exchange at large energetic inlets is often difficult to quantify due to complex flows, massive amounts of water and sediment exchange, and environmental conditions limiting long-term data collection. In an effort to better quantify such exchange, this study investigated the use of suspended sediment concentrations (SSC) measured at an offsite location as a surrogate for sediment exchange at the tidally dominated Golden Gate inlet in San Francisco, CA. A numerical model was calibrated and validated against water and suspended sediment flux measured during a spring–neap tide cycle across the Golden Gate. The model was then run for five months and net exchange was calculated on a tidal time-scale and compared to SSC measurements at the Alcatraz monitoring site located in Central San Francisco Bay ~ 5 km from the Golden Gate. Numerically modeled tide-averaged flux across the Golden Gate compared well (r2 = 0.86, p-value

  14. Study Quantifies Physical Demands of Yoga in Seniors

    MedlinePlus

    A recent NCCAM-funded study measured the ... performance of seven standing poses commonly taught in senior yoga classes: Chair, Wall Plank, Tree, Warrior II, ...

  15. Gradient approach to quantify the gradation smoothness for output media

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties, and propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method computed from the 95th percentile of the second derivative (the tone-jump estimator) and the 5th percentile of the first derivative (the tone-clipping estimator). The performance of the model and of a previously suggested method was assessed psychophysically, and their prediction accuracies were compared to each other. Our model showed a stronger Pearson correlation to the corresponding visual data, with the magnitude of the Pearson correlation reaching up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of representative memory colors (blue sky, green grass, and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
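
    A sketch of the two estimators described (95th percentile of the second derivative for tone jumps, 5th percentile of the first derivative for tone clipping), computed here on a one-dimensional lightness ramp; how the paper combines the two into a single smoothness score is not specified in the abstract, so no combination is assumed.

```python
# Sketch: gradient-based tone-jump and tone-clipping estimators for a gradation.
import numpy as np

def gradation_estimators(lightness):
    """lightness: 1-D array of measured L* values sampled along the printed gradation."""
    d1 = np.abs(np.gradient(lightness))                # first-derivative magnitude
    d2 = np.abs(np.gradient(np.gradient(lightness)))   # second-derivative magnitude
    tone_jump = np.percentile(d2, 95)   # large local curvature -> visible jumps
    tone_clip = np.percentile(d1, 5)    # near-zero slope -> clipped (flat) regions
    return tone_jump, tone_clip

ramp = np.linspace(20, 90, 256)
ramp[128:140] += 3.0          # inject a tone jump
ramp[200:] = ramp[200]        # inject tone clipping
print(gradation_estimators(ramp))
```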

  16. Biophysical Mechanistic Modelling Quantifies the Effects of Plant Traits on Fire Severity: Species, Not Surface Fuel Loads, Determine Flame Dimensions in Eucalypt Forests

    PubMed Central

    Bedward, Michael; Penman, Trent D.; Doherty, Michael D.; Weber, Rodney O.; Gill, A. Malcolm; Cary, Geoffrey J.

    2016-01-01

    The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this. PMID:27529789

  17. Biophysical Mechanistic Modelling Quantifies the Effects of Plant Traits on Fire Severity: Species, Not Surface Fuel Loads, Determine Flame Dimensions in Eucalypt Forests.

    PubMed

    Zylstra, Philip; Bradstock, Ross A; Bedward, Michael; Penman, Trent D; Doherty, Michael D; Weber, Rodney O; Gill, A Malcolm; Cary, Geoffrey J

    2016-01-01

    The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this.

  18. Heuristic extraction of rules in pruned artificial neural networks models used for quantifying highly overlapping chromatographic peaks.

    PubMed

    Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva

    2004-01-01

    The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs), pruned by a regularization method and with architectures designed by evolutionary computation, for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one- and two-output models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1, with an average standard error of prediction for the generalization test of 2.7% and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to attain quantitative analytical information regarding the effect of both analytes on the characteristics of chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
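
    The ANN inputs are Weibull-curve parameters fitted to each chromatographic band by Levenberg-Marquardt. The sketch below fits one common four-parameter Weibull-type peak shape with SciPy's unconstrained curve_fit (which uses Levenberg-Marquardt); the exact parameterization used in the paper is not given in the abstract, so this form is an assumption.

```python
# Sketch: Levenberg-Marquardt fit of a four-parameter Weibull-type peak
# (amplitude A, shape a, scale b, onset t0) to a simulated chromatographic band.
import numpy as np
from scipy.optimize import curve_fit

def weibull_peak(t, A, a, b, t0):
    z = np.clip((t - t0) / b, 1e-12, None)   # effectively zero signal before the onset t0
    return A * (a / b) * z ** (a - 1) * np.exp(-z ** a)

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 400)
y_true = weibull_peak(t, 50.0, 2.2, 1.8, 2.0)
y_obs = y_true + rng.normal(0, 0.3, t.size)

p0 = (40.0, 2.0, 1.5, 1.5)                        # rough initial guess
popt, pcov = curve_fit(weibull_peak, t, y_obs, p0=p0)  # unconstrained -> Levenberg-Marquardt
print("fitted A, shape, scale, t0:", np.round(popt, 3))
```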

  19. Quantifying the Value of Downscaled Climate Model Information for Adaptation Decisions: When is Downscaling a Smart Decision?

    NASA Astrophysics Data System (ADS)

    Terando, A. J.; Wootten, A.; Eaton, M. J.; Runge, M. C.; Littell, J. S.; Bryan, A. M.; Carter, S. L.

    2015-12-01

    Two types of decisions face society with respect to anthropogenic climate change: (1) whether to enact a global greenhouse gas abatement policy, and (2) how to adapt to the local consequences of current and future climatic changes. The practice of downscaling global climate models (GCMs) is often used to address (2) because GCMs do not resolve key features that will mediate global climate change at the local scale. In response, the development of downscaling techniques and models has accelerated to aid decision makers seeking adaptation guidance. However, quantifiable estimates of the value of information are difficult to obtain, particularly in decision contexts characterized by deep uncertainty and low system-controllability. Here we demonstrate a method to quantify the additional value that decision makers could expect if research investments are directed towards developing new downscaled climate projections. As a proof of concept we focus on a real-world management problem: whether to undertake assisted migration for an endangered tropical avian species. We also take advantage of recently published multivariate methods that account for three vexing issues in climate impacts modeling: maximizing climate model quality information, accounting for model dependence in ensembles of opportunity, and deriving probabilistic projections. We expand on these global methods by including regional (Caribbean Basin) and local (Puerto Rico) domains. In the local domain, we test whether a high resolution (2km) dynamically downscaled GCM reduces the multivariate error estimate compared to the original coarse-scale GCM. Initial tests show little difference between the downscaled and original GCM multivariate error. When propagated through to a species population model, the Value of Information analysis indicates that the expected utility that would accrue to the manager (and species) if this downscaling were completed may not justify the cost compared to alternative actions.

  20. Using High Resolution Regional Climate Models to Quantify the Snow Albedo Feedback in a Region of Complex Terrain

    NASA Astrophysics Data System (ADS)

    Letcher, T.; Minder, J. R.

    2015-12-01

    High resolution regional climate models are used to characterize and quantify the snow albedo feedback (SAF) over the complex terrain of the Colorado Headwaters region. Three pairs of 7-year control and pseudo-global-warming simulations (with horizontal grid spacings of 4, 12, and 36 km) are used to study how the SAF modifies the regional climate response to a large-scale thermodynamic perturbation. The SAF substantially enhances warming within the Headwaters domain, locally by as much as 5 °C in regions of snow loss. The SAF also increases the inter-annual variability of the springtime warming within the Headwaters domain under the perturbed climate. Linear feedback analysis is used to quantify the strength of the SAF. The SAF attains a maximum value of 4 W m-2 K-1 during April, when snow loss coincides with strong incoming solar radiation. On sub-seasonal timescales, simulations at 4 km and 12 km horizontal grid spacing show good agreement in the strength and timing of the SAF, whereas the 36 km simulation shows greater discrepancies that are tied to differences in snow accumulation and ablation caused by smoother terrain. An analysis of the regional energy budget shows that transport by atmospheric motion acts as a negative feedback to regional warming, damping the effects of the SAF. On the mesoscale, this transport causes non-local warming in locations with no snow. The methods presented here can be used generally to quantify the role of the SAF in other regional climate modeling experiments.
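
    A generic version of the linear feedback analysis mentioned: regress the change in absorbed surface shortwave radiation attributable to albedo change against the surface temperature change, and read the SAF strength (W m-2 K-1) from the slope. The toy numbers below are illustrative; the paper's own decomposition may differ in detail.

```python
# Sketch: estimate snow-albedo feedback strength as the regression slope of
# albedo-driven shortwave absorption change vs. surface warming (W m^-2 K^-1).
import numpy as np

rng = np.random.default_rng(4)
n_years = 7
dT = rng.normal(3.0, 0.8, n_years)                    # springtime warming per year (K)
# Assume each K of warming removes snow worth ~4 W m^-2 of extra absorption:
dSW_albedo = 4.0 * dT + rng.normal(0, 1.5, n_years)   # Delta(SW_down * (1 - albedo))

slope, intercept = np.polyfit(dT, dSW_albedo, 1)
print(f"estimated SAF strength: {slope:.2f} W m^-2 K^-1")
```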

  1. Quantifying resilience

    USGS Publications Warehouse

    Allen, Craig R.; Angeler, David G.

    2016-01-01

    Several frameworks to operationalize resilience have been proposed. A decade ago, a special feature focused on quantifying resilience was published in the journal Ecosystems (Carpenter, Westley & Turner 2005). The approach there was towards identifying surrogates of resilience, but few of the papers proposed quantifiable metrics. Consequently, many ecological resilience frameworks remain vague and difficult to quantify, a problem that this special feature aims to address. However, considerable progress has been made during the last decade (e.g. Pope, Allen & Angeler 2014). Although some argue that resilience is best kept as an unquantifiable, vague concept (Quinlan et al. 2016), to be useful for managers, there must be concrete guidance regarding how and what to manage and how to measure success (Garmestani, Allen & Benson 2013; Spears et al. 2015). Ideas such as ‘resilience thinking’ have utility in helping stakeholders conceptualize their systems, but provide little guidance on how to make resilience useful for ecosystem management, other than suggesting an ambiguous, Goldilocks approach of being just right (e.g. diverse, but not too diverse; connected, but not too connected). Here, we clarify some prominent resilience terms and concepts, introduce and synthesize the papers in this special feature on quantifying resilience and identify core unanswered questions related to resilience.

  2. The comprehension and production of quantifiers in isiXhosa-speaking Grade 1 learners

    PubMed Central

    Southwood, Frenette

    2016-01-01

    Background: Quantifiers form part of the discourse-internal linguistic devices that children need to access and produce narratives and other classroom discourse. Little is known about the development, especially the production, of quantifiers in child language, specifically in speakers of an African language. Objectives: The study aimed to ascertain how well Grade 1 isiXhosa first language (L1) learners perform at the beginning and at the end of Grade 1 on quantifier comprehension and production tasks. Method: Two low-socioeconomic groups of L1 isiXhosa learners, with either isiXhosa or English as language of learning and teaching (LOLT), were tested in February and November of their Grade 1 year with tasks targeting several quantifiers. Results: The isiXhosa LOLT group fully comprehended no/none, any and all by either February or November of Grade 1, and they produced all assessed quantifiers in February of Grade 1. For the English LOLT group, neither the comprehension nor the production of quantifiers was mastered by the end of Grade 1, although there was a significant increase in both their comprehension and production scores. Conclusion: The English LOLT group made significant progress in the comprehension and production of quantifiers, but still performed worse than peers who had their L1 as LOLT. Generally, children with no or very little prior knowledge of the LOLT need either (1) more deliberate exposure to quantifier-rich language or (2) longer exposure to general classroom language before quantifiers can be expected to be mastered sufficiently to allow access to quantifier-related curriculum content. PMID:27245132

  3. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  4. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia

    Occupant behavior (OB) is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema); and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and the quantification of their impact on building performance.

  5. Occupant behavior models: A critical review of implementation and representation approaches in building performance simulation programs

    DOE PAGES

    Hong, Tianzhen; Chen, Yixing; Belafi, Zsofia; ...

    2017-07-27

    Occupant behavior (OB) is a leading factor influencing energy use in buildings. Quantifying this influence requires the integration of OB models with building performance simulation (BPS). This study reviews approaches to representing and implementing OB models in today’s popular BPS programs, and discusses weaknesses and strengths of these approaches and key issues in integrating OB models with BPS programs. Two of the key findings are: (1) a common data model is needed to standardize the representation of OB models, enabling their flexibility and exchange among BPS programs and user applications; the data model can be implemented using a standard syntax (e.g., in the form of an XML schema); and (2) a modular software implementation of OB models, such as functional mock-up units for co-simulation, adopting the common data model, has advantages in providing a robust and interoperable integration with multiple BPS programs. Such common OB model representation and implementation approaches help standardize the input structures of OB models, enable collaborative development of a shared library of OB models, and allow for rapid and widespread integration of OB models with BPS programs to improve the simulation of occupant behavior and the quantification of their impact on building performance.

  6. Incorporating both physical and kinetic limitations in quantifying dissolved oxygen flux to aquatic sediments

    USGS Publications Warehouse

    O'Connor, B.L.; Hondzo, Miki; Harvey, J.W.

    2009-01-01

    Traditionally, dissolved oxygen (DO) fluxes have been calculated using the thin-film theory with DO microstructure data in systems characterized by fine sediments and low velocities. However, recent experimental evidence of fluctuating DO concentrations near the sediment-water interface suggests that turbulence and coherent motions control the mass transfer, and the surface renewal theory gives a more mechanistic model for quantifying fluxes. Both models involve quantifying the mass transfer coefficient (k) and the relevant concentration difference (ΔC). This study compared several empirical models for quantifying k based on both the thin-film and surface renewal theories, and presented a new method for quantifying ΔC (dynamic approach) that is consistent with the observed DO concentration fluctuations near the interface. Data were used from a series of flume experiments that included both physical and kinetic uptake limitations of the flux. Results indicated that methods for quantifying k and ΔC using the surface renewal theory better estimated the DO flux across a range of fluid-flow conditions. © 2009 ASCE.
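
    The flux bookkeeping described is F = k · ΔC, with k supplied by either the thin-film or the surface renewal theory. The snippet below contrasts the two textbook forms (k = D/δ for a stagnant film, k = sqrt(D·s) for Danckwerts-type surface renewal); the film thickness δ and renewal rate s are illustrative values, not the study's empirical models.

```python
# Sketch: dissolved-oxygen flux F = k * dC with k from two classical theories.
import math

D = 2.1e-9        # molecular diffusivity of O2 in water (m^2/s)
dC = 0.25         # concentration difference across the interface (mol/m^3)

delta = 500e-6    # assumed diffusive film thickness (m), thin-film theory
s = 0.02          # assumed surface renewal rate (1/s), surface renewal theory

k_film = D / delta            # thin-film mass transfer coefficient (m/s)
k_renewal = math.sqrt(D * s)  # Danckwerts surface renewal coefficient (m/s)

for name, k in [("thin-film", k_film), ("surface renewal", k_renewal)]:
    print(f"{name:16s} k = {k:.2e} m/s, flux = {k * dC:.2e} mol m^-2 s^-1")
```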

  7. Modeling a Mathematical to Quantify the Degree of Emergency Department Crowding

    NASA Astrophysics Data System (ADS)

    Chang, Y.; Pan, C.; Wen, J.

    2012-12-01

    The purpose of this study is to deduce a function from the admission/discharge rate of patient flow to estimate a "critical point" that provides a reference for warning systems regarding crowding in the emergency department (ED) of a hospital or medical clinic. In this study, an "Input-Throughput-Output" model was used in our established mathematical function to evaluate the critical point. The function was defined as ∂ρ/∂t = -K × ∂ρ/∂x, where ρ = number of patients per unit distance (also called density), t = time, x = distance, and K = distance of patient movement per unit time. Using the average K of ED crowding, we could initiate the warning system at the appropriate time and plan the necessary emergency response to facilitate smoother patient flow. It was concluded that ED crowding can be quantified using the average value of K, and that this value can be used as a reference for medical staff to give optimal emergency medical treatment to patients. Therefore, additional practical work should be launched to collect more precise quantitative data.
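
    A minimal numerical illustration of the stated transport equation ∂ρ/∂t = -K ∂ρ/∂x: a first-order upwind scheme advecting a patient-density profile along the ED pathway at speed K. The geometry, initial profile, and value of K are illustrative assumptions.

```python
# Sketch: first-order upwind solution of d(rho)/dt = -K * d(rho)/dx,
# advecting patient density along the ED pathway at speed K (K > 0).
import numpy as np

L, nx = 100.0, 200                 # pathway length and number of grid points
K = 0.5                            # patient movement distance per unit time
dx = L / nx
dt = 0.8 * dx / K                  # satisfies the CFL condition K*dt/dx <= 1

x = np.linspace(0.0, L, nx)
rho = np.exp(-((x - 20.0) / 5.0) ** 2)   # initial cluster of patients near x = 20

for _ in range(100):
    # upwind update; rho[0] (the inflow value, ~0 here) is held fixed
    rho[1:] = rho[1:] - K * dt / dx * (rho[1:] - rho[:-1])

print("density peak has moved to x ≈", round(x[np.argmax(rho)], 1))
```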

  8. Model-based synthesis of aircraft noise to quantify human perception of sound quality and annoyance

    NASA Astrophysics Data System (ADS)

    Berckmans, D.; Janssens, K.; Van der Auweraer, H.; Sas, P.; Desmet, W.

    2008-04-01

    This paper presents a method to synthesize aircraft noise as perceived on the ground. The developed method gives designers the opportunity to make a quick and economical evaluation of the sound quality of different design alternatives or improvements to existing aircraft. By presenting several synthesized sounds to a jury, it is possible to evaluate the quality of different aircraft sounds and to construct a sound that can serve as a target for future aircraft designs. Combining a sound synthesis method that can apply changes to a recorded aircraft sound with jury tests makes it possible to quantify the human perception of aircraft noise.

  9. Appropriate Objective Functions for Quantifying Iris Mechanical Properties Using Inverse Finite Element Modeling.

    PubMed

    Pant, Anup D; Dorairaj, Syril K; Amini, Rouzbeh

    2018-07-01

    Quantifying the mechanical properties of the iris is important, as it provides insight into the pathophysiology of glaucoma. Recent ex vivo studies have shown that the mechanical properties of the iris are different in glaucomatous eyes as compared to normal ones. Notwithstanding the importance of the ex vivo studies, such measurements are severely limited for diagnosis and preclude development of treatment strategies. With the advent of detailed imaging modalities, it is possible to determine the in vivo mechanical properties using inverse finite element (FE) modeling. An inverse modeling approach requires an appropriate objective function for reliable estimation of parameters. In the case of the iris, numerous measurements such as iris chord length (CL) and iris concavity (CV) are made routinely in clinical practice. In this study, we have evaluated five different objective functions chosen based on the iris biometrics (in the presence and absence of clinical measurement errors) to determine the appropriate criterion for inverse modeling. Our results showed that in the absence of experimental measurement error, a combination of iris CL and CV can be used as the objective function. However, with the addition of measurement errors, the objective functions that employ a large number of local displacement values provide more reliable outcomes.

  10. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Shibata, Kiyotaka; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
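
    Reading the abstract literally, aging by mixing is AoA minus RCTT, and the mixing efficiency measures the relative increase of AoA by mixing; the toy calculation below uses that simple reading ((AoA - RCTT)/RCTT). The published diagnostic may be defined more carefully, so treat this as a schematic with made-up model values.

```python
# Sketch: split age of air (AoA) into residual-circulation transit time (RCTT)
# and aging by mixing, and form a simple "mixing efficiency" ratio per model.
models = {           # illustrative AoA / RCTT values in years (not CCMI output)
    "model_A": {"aoa": 4.6, "rctt": 2.6},
    "model_B": {"aoa": 3.8, "rctt": 2.9},
    "model_C": {"aoa": 5.2, "rctt": 2.7},
}

for name, v in models.items():
    aging_by_mixing = v["aoa"] - v["rctt"]
    mixing_efficiency = aging_by_mixing / v["rctt"]   # relative increase of AoA by mixing
    print(f"{name}: aging by mixing = {aging_by_mixing:.1f} yr, "
          f"efficiency = {mixing_efficiency:.2f}")
```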

  11. Quantifying Impacts of Land-use and Land Cover Change in a Changing Climate at the Regional Scale using an Integrated Earth System Modeling Approach

    NASA Astrophysics Data System (ADS)

    Huang, M.

    2016-12-01

    Earth System models (ESMs) are effective tools for investigating water-energy-food system interactions under climate change. In this presentation, I will introduce research efforts at the Pacific Northwest National Laboratory towards quantifying the impacts of land-use and land-cover change (LULCC) on the water-energy-food nexus in a changing climate using an integrated regional Earth system modeling framework: the Platform for Regional Integrated Modeling and Analysis (PRIMA). Two studies will be discussed to showcase the capability of PRIMA: (1) quantifying changes in terrestrial hydrology over the Conterminous US (CONUS) from 2005 to 2095 using the Community Land Model (CLM) driven by high-resolution downscaled climate and land cover products from PRIMA, which was designed for assessing the impacts of and potential responses to climate and anthropogenic changes at regional scales; and (2) applying CLM over the CONUS to provide the first county-scale model validation in simulating crop yields and assessing the associated impacts on the water and energy budgets. The studies demonstrate the benefits of incorporating and coupling human activities into complex ESMs, and the critical need to account for the biogeophysical and biogeochemical effects of LULCC in climate impacts studies and in designing mitigation and adaptation strategies at a scale meaningful for decision-making. Future directions in quantifying LULCC impacts on the water-energy-food nexus under a changing climate, as well as feedbacks among climate, energy production and consumption, and natural/managed ecosystems, using an Integrated Multi-scale, Multi-sector Modeling framework will also be discussed.

  12. Performance assessment of Large Eddy Simulation (LES) for modeling dispersion in an urban street canyon with tree planting

    NASA Astrophysics Data System (ADS)

    Moonen, P.; Gromke, C.; Dorer, V.

    2013-08-01

    The potential of a Large Eddy Simulation (LES) model to reliably predict near-field pollutant dispersion is assessed. To that end, detailed time-resolved numerical simulations of coupled flow and dispersion are conducted for a street canyon with tree planting. Different crown porosities are considered. The model performance is assessed in several steps, ranging from a qualitative comparison with measured concentrations, through statistical data analysis by means of scatter plots and box plots, to the calculation of objective validation metrics. The extensive validation effort highlights and quantifies notable features and shortcomings of the model, which would otherwise remain unnoticed. The model performance is found to be spatially non-uniform. Closer agreement with measurement data is achieved near the canyon ends than for the central part of the canyon, and typical model acceptance criteria are satisfied more easily for the leeward than for the windward canyon wall. This demonstrates the need for rigorous model evaluation. Only quality-assured models can be used with confidence to support assessment, planning and implementation of pollutant mitigation strategies.
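
    The abstract mentions objective validation metrics without naming them; commonly used dispersion-model metrics such as fractional bias (FB), normalized mean square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2) are sketched below as plausible examples, not necessarily the ones used in the paper.

```python
# Sketch: common validation metrics for paired observed/predicted concentrations.
import numpy as np

def validation_metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    fb = 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())   # fractional bias
    nmse = np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())       # normalized MSE
    ratio = pred / obs
    fac2 = np.mean((ratio >= 0.5) & (ratio <= 2.0))                      # factor-of-2 fraction
    return {"FB": fb, "NMSE": nmse, "FAC2": fac2}

obs = [1.2, 0.8, 2.5, 3.1, 0.4, 1.9]    # measured concentrations (arbitrary units)
pred = [1.0, 1.1, 2.0, 3.8, 0.3, 1.6]   # LES-predicted concentrations
print({k: round(v, 3) for k, v in validation_metrics(obs, pred).items()})
```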

  13. Quantifying soil burn severity for hydrologic modeling to assess post-fire effects on sediment delivery

    NASA Astrophysics Data System (ADS)

    Dobre, Mariana; Brooks, Erin; Lew, Roger; Kolden, Crystal; Quinn, Dylan; Elliot, William; Robichaud, Pete

    2017-04-01

    Soil erosion is a secondary fire effect with great implications for many ecosystem resources. Depending on the burn severity, topography, and the weather immediately after the fire, soil erosion can impact municipal water supplies, degrade water quality, and reduce reservoirs' storage capacity. Scientists and managers use field and remotely sensed data to quickly assess post-fire burn severity in ecologically-sensitive areas. From these assessments, mitigation activities are implemented to minimize post-fire flood and soil erosion and to facilitate post-fire vegetation recovery. Alternatively, land managers can use fire behavior and spread models (e.g. FlamMap, FARSITE, FOFEM, or CONSUME) to identify sensitive areas a priori, and apply strategies such as fuel reduction treatments to proactively minimize the risk of wildfire spread and increased burn severity. There is a growing interest in linking fire behavior and spread models with hydrology-based soil erosion models to provide site-specific assessment of mitigation treatments on post-fire runoff and erosion. The challenge remains, however, that many burn severity mapping and modeling products quantify vegetation loss rather than measuring soil burn severity. Wildfire burn severity is spatially heterogeneous and depends on the pre-fire vegetation cover, fuel load, topography, and weather. Severities also differ depending on the variable of interest (e.g. soil, vegetation). In the United States, Burned Area Reflectance Classification (BARC) maps, derived from Landsat satellite images, are used as an initial burn severity assessment. BARC maps are classified from either a Normalized Burn Ratio (NBR) or differenced Normalized Burn Ratio (dNBR) scene into four classes (Unburned, Low, Moderate, and High severity). The development of soil burn severity maps requires further manual field validation efforts to transform the BARC maps into a product more applicable for post-fire soil rehabilitation activities
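
    For reference, NBR and dNBR are computed from near-infrared and shortwave-infrared reflectance, and BARC-style products threshold dNBR into four classes. The snippet below shows that arithmetic with illustrative class breaks; operational BARC thresholds are scene-specific and set by analysts, so the numbers here are assumptions.

```python
# Sketch: NBR, dNBR, and a four-class severity classification (illustrative breaks).
import numpy as np

def nbr(nir, swir):
    return (nir - swir) / (nir + swir)

def classify_dnbr(dnbr, breaks=(0.1, 0.27, 0.66)):
    # breaks are illustrative; operational BARC thresholds are scene-specific
    classes = np.digitize(dnbr, breaks)
    labels = np.array(["unburned", "low", "moderate", "high"])
    return labels[classes]

nir_pre, swir_pre = np.array([0.45, 0.40, 0.42]), np.array([0.20, 0.18, 0.19])
nir_post, swir_post = np.array([0.40, 0.25, 0.15]), np.array([0.21, 0.28, 0.35])

dnbr = nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
print(np.round(dnbr, 2), classify_dnbr(dnbr))
```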

  14. Ion thruster performance model

    NASA Technical Reports Server (NTRS)

    Brophy, J. R.

    1984-01-01

    A model of ion thruster performance is developed for high flux density, cusped magnetic field thruster designs. This model is formulated in terms of the average energy required to produce an ion in the discharge chamber plasma and the fraction of these ions that are extracted to form the beam. The direct loss of high energy (primary) electrons from the plasma to the anode is shown to have a major effect on thruster performance. The model provides simple algebraic equations enabling one to calculate the beam ion energy cost, the average discharge chamber plasma ion energy cost, the primary electron density, the primary-to-Maxwellian electron density ratio and the Maxwellian electron temperature. Experiments indicate that the model correctly predicts the variation in plasma ion energy cost for changes in propellant gas (Ar, Kr and Xe), grid transparency to neutral atoms, beam extraction area, discharge voltage, and discharge chamber wall temperature. The model and experiments indicate that thruster performance may be described in terms of only four thruster configuration dependent parameters and two operating parameters. The model also suggests that improved performance should be exhibited by thruster designs which extract a large fraction of the ions produced in the discharge chamber, which have good primary electron and neutral atom containment and which operate at high propellant flow rates.
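
    The abstract states that the model is built from two quantities: the average energy required to produce a plasma ion and the fraction of those ions extracted into the beam, so the beam ion energy cost follows as the plasma cost divided by the extracted fraction. The snippet below shows only that bookkeeping; the functional dependence of the plasma ion cost on discharge conditions (the heart of the model) is not reproduced here.

```python
# Sketch: beam ion energy cost from plasma ion energy cost and extracted fraction.
def beam_ion_energy_cost(plasma_ion_cost_eV, extracted_fraction):
    """Energy spent per beam ion = energy per plasma ion / fraction of ions extracted."""
    if not 0.0 < extracted_fraction <= 1.0:
        raise ValueError("extracted fraction must lie in (0, 1]")
    return plasma_ion_cost_eV / extracted_fraction

# Illustrative numbers only (not measured thruster data):
for f_b in (0.3, 0.5, 0.7):
    print(f"f_B = {f_b:.1f}: beam ion cost = "
          f"{beam_ion_energy_cost(150.0, f_b):.0f} eV/ion")
```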

  15. A new image-based process for quantifying hemodynamic contributions to long-term morbidity in a rabbit model of aortic coarctation

    NASA Astrophysics Data System (ADS)

    Wendell, David C.; Dholakia, Ronak J.; Larsen, Paul M.; Menon, Arjun; LaDisa, John F., Jr.

    2010-03-01

    Coarctation of the aorta (CoA) is associated with reduced life expectancy despite successful surgical treatment. Interestingly, much of the related long-term morbidity can be explained by abnormal hemodynamics, vascular biomechanics and cardiac function. MRI has played an important role in assessing coarctation severity, but the heterogeneity and small number of patients at each center present an obstacle for determining causality. This work describes optimized imaging parameters to create computational fluid dynamics (CFD) models revealing changes in hemodynamics and vascular biomechanics from a rabbit model. CoA was induced surgically at 10 weeks using silk or dissolvable ligatures to replicate native and end-to-end treatment cases, respectively. Cardiac function was evaluated at 32 weeks using a fastcard SPGR sequence in 6-8 two-chamber short-axis views. Left ventricular (LV) volume, ejection fraction, and mass were quantified and compared to control rabbits. Phase contrast (PC) and angiographic MRI were used to create CFD models. Ascending aortic PCMRI data were mapped to the model inflow, and outflow boundary conditions replicated measured pressure (BP) and flow. CFD simulations were performed using a stabilized finite element method to calculate indices including velocity, BP and wall shear stress (WSS). CoA models displayed higher velocity through the coarctation region and decreased velocity elsewhere, leading to decreased WSS above and below the stenosis. Pronounced wall displacement was associated with CoA-induced changes in BP. CoA caused reversible LV hypertrophy. Cardiac function was maintained, but in a persistent hyperdynamic state. This model may now be used to investigate potential mechanisms of long-term morbidity.

  16. QUANTIFYING AN UNCERTAIN FUTURE: HYDROLOGIC MODEL PERFORMANCE FOR A SERIES OF REALIZED "FUTURE" CONDITIONS

    EPA Science Inventory

    GIS-based hydrologic modeling offers a convenient means of assessing the impacts associated with land-cover/use change for environmental planning efforts. Future scenarios can be developed through a combination of modifications to the land-cover/use maps used to parameterize hydr...

  17. A model based on Rock-Eval thermal analysis to quantify the size of the centennially persistent organic carbon pool in temperate soils

    NASA Astrophysics Data System (ADS)

    Cécillon, Lauric; Baudin, François; Chenu, Claire; Houot, Sabine; Jolivet, Romain; Kätterer, Thomas; Lutfalla, Suzanne; Macdonald, Andy; van Oort, Folkert; Plante, Alain F.; Savignac, Florence; Soucémarianadin, Laure N.; Barré, Pierre

    2018-05-01

    of 0.15). Samples were subjected to thermal analysis by Rock-Eval 6 that generated a series of 30 parameters reflecting their SOC thermal stability and bulk chemistry. We trained a nonparametric machine-learning algorithm (random forests multivariate regression model) to predict the proportion of centennially persistent SOC in new soils using Rock-Eval 6 thermal parameters as predictors. We evaluated the model predictive performance with two different strategies. We first used a calibration set (n = 88) and a validation set (n = 30) with soils from all sites. Second, to test the sensitivity of the model to pedoclimate, we built a calibration set with soil samples from three out of the four sites (n = 84). The multivariate regression model accurately predicted the proportion of centennially persistent SOC in the validation set composed of soils from all sites (R2 = 0.92, RMSEP = 0.07, n = 30). The uncertainty of the model predictions was quantified by a Monte Carlo approach that produced conservative 95 % prediction intervals across the validation set. The predictive performance of the model decreased when predicting the proportion of centennially persistent SOC in soils from one fully independent site with a different pedoclimate, yet the mean error of prediction only slightly increased (R2 = 0.53, RMSEP = 0.10, n = 34). This model based on Rock-Eval 6 thermal analysis can thus be used to predict the proportion of centennially persistent SOC with known uncertainty in new soil samples from different pedoclimates, at least for sites that have similar Rock-Eval 6 thermal characteristics to those included in the calibration set. Our study reinforces the evidence that there is a link between the thermal and biogeochemical stability of soil organic matter and demonstrates that Rock-Eval 6 thermal analysis can be used to quantify the size of the centennially persistent organic carbon pool in temperate soils.
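
    A sketch of the modelling step described: a random-forest regression mapping Rock-Eval thermal parameters to the proportion of centennially persistent SOC, with a calibration/validation split and, as a simple stand-in for the paper's Monte Carlo uncertainty propagation, prediction intervals taken from the spread of individual trees. The data here are synthetic and the 30 predictor columns are placeholders.

```python
# Sketch: random-forest prediction of the persistent-SOC proportion from
# Rock-Eval parameters, with tree-spread prediction intervals (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(5)
n_samples, n_params = 118, 30                      # 30 Rock-Eval 6 parameters
X = rng.normal(size=(n_samples, n_params))
y = np.clip(0.5 + 0.15 * X[:, 0] - 0.10 * X[:, 3] + rng.normal(0, 0.05, n_samples), 0, 1)

X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=30, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_cal, y_cal)

pred = rf.predict(X_val)
print("R2 =", round(r2_score(y_val, pred), 2),
      "RMSEP =", round(mean_squared_error(y_val, pred) ** 0.5, 3))

# Rough 95% intervals from the spread of individual trees (not the paper's method)
tree_preds = np.stack([t.predict(X_val) for t in rf.estimators_])
lo, hi = np.percentile(tree_preds, [2.5, 97.5], axis=0)
print("first validation sample: %.2f [%.2f, %.2f]" % (pred[0], lo[0], hi[0]))
```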

  18. Quantifying geological uncertainty for flow and transport modeling in multi-modal heterogeneous formations

    NASA Astrophysics Data System (ADS)

    Feyen, Luc; Caers, Jef

    2006-06-01

    In this work, we address the problem of characterizing the heterogeneity and uncertainty of hydraulic properties for complex geological settings. Hereby, we distinguish between two scales of heterogeneity, namely the hydrofacies structure and the intrafacies variability of the hydraulic properties. We employ multiple-point geostatistics to characterize the hydrofacies architecture. The multiple-point statistics are borrowed from a training image that is designed to reflect the prior geological conceptualization. The intrafacies variability of the hydraulic properties is represented using conventional two-point correlation methods, more precisely, spatial covariance models under a multi-Gaussian spatial law. We address the different levels and sources of uncertainty in characterizing the subsurface heterogeneity, and explore their effect on groundwater flow and transport predictions. Typically, uncertainty is assessed by way of many images, termed realizations, of a fixed statistical model. However, in many cases, sampling from a fixed stochastic model does not adequately represent the space of uncertainty. It neglects the uncertainty related to the selection of the stochastic model and the estimation of its input parameters. We acknowledge the uncertainty inherent in the definition of the prior conceptual model of aquifer architecture and in the estimation of global statistics, anisotropy, and correlation scales. Spatial bootstrap is used to assess the uncertainty of the unknown statistical parameters. As an illustrative example, we employ a synthetic field that represents a fluvial setting consisting of an interconnected network of channel sands embedded within finer-grained floodplain material. For this highly non-stationary setting we quantify the groundwater flow and transport model prediction uncertainty for various levels of hydrogeological uncertainty. Results indicate the importance of accurately describing the facies geometry, especially for transport

  19. Modeling road-cycling performance.

    PubMed

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
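
    The study's full equation set is not reproduced in the abstract; the sketch below is a generic steady-state power balance for road cycling (aerodynamic drag, rolling resistance, gravity, drivetrain efficiency) in the same spirit, with illustrative parameter values that are assumptions rather than the authors' measured inputs.

```python
# Generic steady-state power balance for road cycling (illustrative parameter
# values, not the authors' measured inputs).
import math

def required_power(v, grade=0.0, m=80.0, cda=0.35, crr=0.004,
                   rho=1.2, v_wind=0.0, efficiency=0.98):
    """Rider power (W) needed to hold ground speed v (m/s); headwind positive."""
    v_air = v + v_wind
    p_aero = 0.5 * rho * cda * v_air ** 2 * v                  # aerodynamic drag
    p_roll = crr * m * 9.81 * math.cos(math.atan(grade)) * v   # rolling resistance
    p_grav = m * 9.81 * math.sin(math.atan(grade)) * v         # gravity on a slope
    return (p_aero + p_roll + p_grav) / efficiency

# Speed sustainable at 250 W on a flat, windless road, found by bisection.
lo, hi = 1.0, 25.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if required_power(mid) < 250.0 else (lo, mid)
print(f"~{3.6 * mid:.1f} km/h at 250 W")
```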

  20. Tobacco-free economy: A SAM-based multiplier model to quantify the impact of changes in tobacco demand in Bangladesh.

    PubMed

    Husain, Muhammad Jami; Khondker, Bazlul Haque

    2016-01-01

    In Bangladesh, where tobacco use is pervasive, reducing tobacco use is economically beneficial. This paper uses the latest Bangladesh social accounting matrix (SAM) multiplier model to quantify the economy-wide impact of demand-driven changes in tobacco cultivation, bidi industries, and cigarette industries. First, we compute various income multiplier values (i.e. backward linkages) for all production activities in the economy to quantify the impact of changes in demand for the corresponding products on gross output for 86 activities, demand for 86 commodities, returns to four factors of production, and income for eight household groups. Next, we rank tobacco production activities by income multiplier values relative to other sectors. Finally, we present three hypothetical 'tobacco-free economy' scenarios by diverting demand from tobacco products into other sectors of the economy and quantifying the economy-wide impact. The simulation exercises with three different tobacco-free scenarios show that, compared to the baseline values, total sectoral output increases by 0.92%, 1.3%, and 0.75%. The corresponding increases in the total factor returns (i.e. GDP) are 1.57%, 1.75%, and 1.75%. Similarly, total household income increases by 1.40%, 1.58%, and 1.55%.
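
    The multiplier logic can be illustrated with a toy example: for endogenous accounts with an average expenditure-share matrix A, the accounting multiplier matrix is M = (I - A)^-1 and the economy-wide effect of a demand shift delta_f is M @ delta_f. The 3-account matrix below is invented for illustration and is not the 86-activity Bangladesh SAM.

```python
# Toy 3-account illustration of SAM multiplier accounting (not the actual
# 86-activity Bangladesh SAM).
import numpy as np

# Hypothetical average expenditure shares among endogenous accounts
A = np.array([[0.20, 0.10, 0.05],
              [0.15, 0.25, 0.10],
              [0.30, 0.20, 0.15]])

M = np.linalg.inv(np.eye(3) - A)         # accounting (income) multiplier matrix

# Divert 10 units of final demand from account 0 ("tobacco") to account 2
delta_f = np.array([-10.0, 0.0, 10.0])
delta_x = M @ delta_f                    # resulting change in account incomes
print(np.round(delta_x, 2), round(delta_x.sum(), 2))
```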

  1. Tobacco-free economy: A SAM-based multiplier model to quantify the impact of changes in tobacco demand in Bangladesh

    PubMed Central

    Husain, Muhammad Jami; Khondker, Bazlul Haque

    2017-01-01

    In Bangladesh, where tobacco use is pervasive, reducing tobacco use is economically beneficial. This paper uses the latest Bangladesh social accounting matrix (SAM) multiplier model to quantify the economy-wide impact of demand-driven changes in tobacco cultivation, bidi industries, and cigarette industries. First, we compute various income multiplier values (i.e. backward linkages) for all production activities in the economy to quantify the impact of changes in demand for the corresponding products on gross output for 86 activities, demand for 86 commodities, returns to four factors of production, and income for eight household groups. Next, we rank tobacco production activities by income multiplier values relative to other sectors. Finally, we present three hypothetical ‘tobacco-free economy’ scenarios by diverting demand from tobacco products into other sectors of the economy and quantifying the economy-wide impact. The simulation exercises with three different tobacco-free scenarios show that, compared to the baseline values, total sectoral output increases by 0.92%, 1.3%, and 0.75%. The corresponding increases in the total factor returns (i.e. GDP) are 1.57%, 1.75%, and 1.75%. Similarly, total household income increases by 1.40%, 1.58%, and 1.55%. PMID:28845091

  2. Clinical laboratory as an economic model for business performance analysis

    PubMed Central

    Buljanović, Vikica; Patajac, Hrvoje; Petrovečki, Mladen

    2011-01-01

    Aim: To perform a SWOT (strengths, weaknesses, opportunities, and threats) analysis of a clinical laboratory as an economic model that may be used to improve the business performance of laboratories by removing weaknesses, minimizing threats, and using external opportunities and internal strengths. Methods: The impact of possible threats to and weaknesses of the business performance of the Clinical Laboratory at Našice General County Hospital, and the use of strengths and opportunities to improve operating profit, were simulated using models created on the basis of SWOT analysis results. The operating profit, as a measure of profitability of the clinical laboratory, was defined as total revenue minus total expenses and presented using a profit and loss account. Changes in the input parameters in the profit and loss account for 2008 were determined using opportunities and potential threats, and an economic sensitivity analysis was performed using changes in the key parameters. The profit and loss account and economic sensitivity analysis were tools for quantifying the impact of changes in revenues and expenses on the business operations of the clinical laboratory. Results: Simulation models showed that the operational profit of €470 723 in 2008 could be reduced to only €21 542 if all possible threats became a reality and current weaknesses remained the same. Also, operational gain could be increased to €535 804 if laboratory strengths and opportunities were utilized. If both the opportunities and threats became a reality, the operational profit would decrease by €384 465. Conclusion: The operational profit of the clinical laboratory could be significantly reduced if all threats became a reality and the current weaknesses remained the same. The operational profit could be increased by utilizing strengths and opportunities as much as possible. This type of modeling may be used to monitor business operations of any clinical laboratory and improve its financial situation by

  3. A Computational Framework for Quantifying and Optimizing the Performance of Observational Networks in 4D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Cioaca, Alexandru

    A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure to fuse information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning redundant data, and identifying the most beneficial additional observations. These questions originate in operational data assimilation practice, and have started to attract considerable interest in the recent past. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying the observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem. Best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as

  4. Root-shoot growth responses during interspecific competition quantified using allometric modelling.

    PubMed

    Robinson, David; Davidson, Hazel; Trinder, Clare; Brooker, Rob

    2010-12-01

    Plant competition studies are restricted by the difficulty of quantifying root systems of competitors. Analyses are usually limited to above-ground traits. Here, a new approach to address this issue is reported. Root system weights of competing plants can be estimated from: shoot weights of competitors; combined root weights of competitors; and slopes (scaling exponents, α) and intercepts (allometric coefficients, β) of ln-regressions of root weight on shoot weight of isolated plants. If competition induces no change in root : shoot growth, α and β values of competing and isolated plants will be equal. Measured combined root weight of competitors will equal that estimated allometrically from measured shoot weights of each competing plant. Combined root weights can be partitioned directly among competitors. If, as will be more usual, competition changes relative root and shoot growth, the competitors' combined root weight will not equal that estimated allometrically and cannot be partitioned directly. However, if the isolated-plant α and β values are adjusted until the estimated combined root weight of competitors matches the measured combined root weight, the latter can be partitioned among competitors using their new α and β values. The approach is illustrated using two herbaceous species, Dactylis glomerata and Plantago lanceolata. Allometric modelling revealed a large and continuous increase in the root : shoot ratio by Dactylis, but not Plantago, during competition. This was associated with a superior whole-plant dry weight increase in Dactylis, which was ultimately 2·5-fold greater than that of Plantago. Whole-plant growth dominance of Dactylis over Plantago, as deduced from allometric modelling, occurred 14-24 d earlier than suggested by shoot data alone. Given reasonable assumptions, allometric modelling can analyse competitive interactions in any species mixture, and overcomes a long-standing problem in studies of competition.
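
    A hedged sketch of the basic allometric step described above: fit ln(root weight) against ln(shoot weight) on isolated plants, then use the fitted slope (α) and intercept (β) to estimate competitors' root weights from their shoot weights; the iterative adjustment of α and β when competition alters root:shoot growth is omitted. All numbers are invented.

```python
# Allometric estimation of competitors' root weights from shoot weights, using
# ln-regression coefficients fitted on isolated plants. All data are invented.
import numpy as np

shoot_iso = np.array([1.2, 2.5, 4.1, 6.3, 9.0])   # isolated-plant shoot dry weight (g)
root_iso = np.array([0.8, 1.6, 2.4, 3.9, 5.2])    # isolated-plant root dry weight (g)
alpha, beta = np.polyfit(np.log(shoot_iso), np.log(root_iso), 1)  # slope, intercept

def estimated_root(shoot_weight):
    """Root weight predicted from shoot weight via the isolated-plant allometry."""
    return np.exp(beta) * shoot_weight ** alpha

shoots_competing = np.array([3.0, 5.5])           # shoot weights of two competitors
est_roots = estimated_root(shoots_competing)
measured_combined_root = 5.0                      # measured combined root weight (g)

# A mismatch between est_roots.sum() and the measured value signals a shift in
# root:shoot growth under competition (handled by adjusting alpha and beta).
print(est_roots, est_roots.sum(), measured_combined_root)
```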

  5. Modeling silica aerogel optical performance by determining its radiative properties

    NASA Astrophysics Data System (ADS)

    Zhao, Lin; Yang, Sungwoo; Bhatia, Bikram; Strobach, Elise; Wang, Evelyn N.

    2016-02-01

    Silica aerogel has been known as a promising candidate for high performance transparent insulation material (TIM). Optical transparency is a crucial metric for silica aerogels in many solar related applications. Both scattering and absorption can reduce the amount of light transmitted through an aerogel slab. Due to multiple scattering, the transmittance deviates from the Beer-Lambert law (exponential attenuation). To better understand its optical performance, we decoupled and quantified the extinction contributions of absorption and scattering separately by identifying two sets of radiative properties. The radiative properties are deduced from the measured total transmittance and reflectance spectra (from 250 nm to 2500 nm) of synthesized aerogel samples by solving the inverse problem of the 1-D Radiative Transfer Equation (RTE). The obtained radiative properties are found to be independent of the sample geometry and can be considered intrinsic material properties, which originate from the aerogel's microstructure. This finding allows for these properties to be directly compared between different samples. We also demonstrate that by using the obtained radiative properties, we can model the photon transport in aerogels of arbitrary shapes, where an analytical solution is difficult to obtain.

  6. Quantifying Anthropogenic Dust Emissions

    NASA Astrophysics Data System (ADS)

    Webb, Nicholas P.; Pierre, Caroline

    2018-02-01

    Anthropogenic land use and land cover change, including local environmental disturbances, moderate rates of wind-driven soil erosion and dust emission. These human-dust cycle interactions impact ecosystems and agricultural production, air quality, human health, biogeochemical cycles, and climate. While the impacts of land use activities and land management on aeolian processes can be profound, the interactions are often complex and assessments of anthropogenic dust loads at all scales remain highly uncertain. Here, we critically review the drivers of anthropogenic dust emission and current evaluation approaches. We then identify and describe opportunities to: (1) develop new conceptual frameworks and interdisciplinary approaches that draw on ecological state-and-transition models to improve the accuracy and relevance of assessments of anthropogenic dust emissions; (2) improve model fidelity and capacity for change detection to quantify anthropogenic impacts on aeolian processes; and (3) enhance field research and monitoring networks to support dust model applications to evaluate the impacts of disturbance processes on local to global-scale wind erosion and dust emissions.

  7. Using nitrate to quantify quick flow in a karst aquifer

    USGS Publications Warehouse

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
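
    The two-endmember mass balance implied above reduces to a one-line calculation; the sketch below uses the endmember concentrations quoted in the abstract and invented spring-water values.

```python
# Two-endmember mixing: quick-flow fraction from nitrate mass balance.
# Endmember concentrations are those quoted in the abstract; spring values are invented.
def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
    """Fraction of spring discharge attributable to quick flow (nitrate in mg/L)."""
    return (c_aquifer - c_spring) / (c_aquifer - c_quick)

for c_spring in (1.5, 1.2, 0.8):
    print(c_spring, round(quick_flow_fraction(c_spring), 2))
```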

  8. Translation from UML to Markov Model: A Performance Modeling Framework

    NASA Astrophysics Data System (ADS)

    Khan, Razib Hayat; Heegaard, Poul E.

    Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phase of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication system applications that proposes a translation process from high-level UML notation to a Continuous Time Markov Chain (CTMC) model and solves the model for relevant performance metrics. The framework utilizes UML collaborations, activity diagrams, and deployment diagrams to generate a performance model for a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram highlights the components of the system. The collaboration and activity diagrams show how reusable building blocks in the form of collaborations can compose the service components through input and output pins by highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for relevant performance metrics through our proposed framework. The applicability of our proposed performance modeling framework to performance evaluation is delineated in the context of modeling a communication system.

  9. Quantifying Drosophila food intake: comparative analysis of current methodology

    PubMed Central

    Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.

    2014-01-01

    Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694

  10. Phenobarbital in intensive care unit pediatric population: predictive performances of population pharmacokinetic model.

    PubMed

    Marsot, Amélie; Michel, Fabrice; Chasseloup, Estelle; Paut, Olivier; Guilhaumou, Romain; Blin, Olivier

    2017-10-01

    An external evaluation of the phenobarbital population pharmacokinetic model described by Marsot et al. was performed in a pediatric intensive care unit. Model evaluation is an important issue for dose adjustment. This external evaluation should confirm the proposed dosage adaptation and extend these recommendations to the entire pediatric intensive care population. The external evaluation of the published population pharmacokinetic model of Marsot et al. was carried out using a new retrospective dataset of 35 patients hospitalized in a pediatric intensive care unit. The published population pharmacokinetic model was implemented in NONMEM 7.3. Predictive performance was assessed by quantifying the bias and inaccuracy of model predictions. Normalized prediction distribution errors (NPDE) and visual predictive checks (VPC) were also evaluated. A total of 35 infants were studied with a mean age of 33.5 weeks (range: 12 days-16 years) and a mean weight of 12.6 kg (range: 2.7-70.0 kg). The model predicted the observed phenobarbital concentrations with reasonable bias and inaccuracy. The median prediction error was 3.03% (95% CI: -8.52 to 58.12%), and the median absolute prediction error was 26.20% (95% CI: 13.07-75.59%). No trends in the NPDE and VPC were observed. The model previously proposed by Marsot et al. in neonates hospitalized in the intensive care unit was externally validated for IV infusion administration. The model-based dosing regimen was extended to the whole pediatric intensive care unit population to optimize treatment. Because of inter- and intraindividual variability in the pharmacokinetic model, this dosing regimen should be combined with therapeutic drug monitoring. © 2017 Société Française de Pharmacologie et de Thérapeutique.
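
    A small sketch, with invented concentration data, of the bias and inaccuracy statistics quoted above (median prediction error and median absolute prediction error, expressed in percent).

```python
# Median prediction error (bias) and median absolute prediction error
# (inaccuracy), in percent, from paired observed and predicted concentrations.
# The concentration values are invented.
import numpy as np

observed = np.array([18.0, 22.5, 30.1, 25.4, 15.2])
predicted = np.array([19.1, 21.0, 33.4, 24.8, 16.0])

pe = 100.0 * (predicted - observed) / observed    # prediction error, %
print(f"MDPE  = {np.median(pe):.2f}%")            # bias
print(f"MDAPE = {np.median(np.abs(pe)):.2f}%")    # inaccuracy
```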

  11. A model of motor performance during surface penetration: from physics to voluntary control.

    PubMed

    Klatzky, Roberta L; Gershon, Pnina; Shivaprabhu, Vikas; Lee, Randy; Wu, Bing; Stetten, George; Swendsen, Robert H

    2013-10-01

    The act of puncturing a surface with a hand-held tool is a ubiquitous but complex motor behavior that requires precise force control to avoid potentially severe consequences. We present a detailed model of puncture over a time course of approximately 1,000 ms, which is fit to kinematic data from individual punctures, obtained via a simulation with high-fidelity force feedback. The model describes puncture as proceeding from purely physically determined interactions between the surface and tool, through decline of force due to biomechanical viscosity, to cortically mediated voluntary control. When fit to the data, it yields parameters for the inertial mass of the tool/person coupling, time characteristic of force decline, onset of active braking, stopping time and distance, and late oscillatory behavior, all of which the analysis relates to physical variables manipulated in the simulation. While the present data characterize distinct phases of motor performance in a group of healthy young adults, the approach could potentially be extended to quantify the performance of individuals from other populations, e.g., with sensory-motor impairments. Applications to surgical force control devices are also considered.

  12. Lamb Wave Dispersion Ultrasound Vibrometry (LDUV) Method for Quantifying Mechanical Properties of Viscoelastic Solids

    PubMed Central

    Nenadic, Ivan Z.; Urban, Matthew W.; Mitchell, Scott A.; Greenleaf, James F.

    2011-01-01

    Diastolic dysfunction is the inability of the left ventricle to supply sufficient stroke volumes under normal physiological conditions and is often accompanied by stiffening of the left-ventricular myocardium. A noninvasive technique capable of quantifying viscoelasticity of the myocardium would be beneficial in clinical settings. Our group has been investigating the use of Shearwave Dispersion Ultrasound Vibrometry (SDUV), a noninvasive ultrasound based method for quantifying viscoelasticity of soft tissues. The primary motive of this study is the design and testing of viscoelastic materials suitable for validation of the Lamb wave Dispersion Ultrasound Vibrometry (LDUV), an SDUV-based technique for measuring viscoelasticity of tissues with plate-like geometry. We report the results of quantifying viscoelasticity of urethane rubber and gelatin samples using LDUV and an embedded sphere method. The LDUV method was used to excite antisymmetric Lamb waves and measure the dispersion in urethane rubber and gelatin plates. An antisymmetric Lamb wave model was fitted to the wave speed dispersion data to estimate elasticity and viscosity of the materials. A finite element model of a viscoelastic plate submerged in water was used to study the appropriateness of the Lamb wave dispersion equations. An embedded sphere method was used as an independent measurement of the viscoelasticity of the urethane rubber and gelatin. The FEM dispersion data were in excellent agreement with the theoretical predictions. Viscoelasticity of the urethane rubber and gelatin obtained using the LDUV and embedded sphere methods agreed within one standard deviation. LDUV studies on excised porcine myocardium sample were performed to investigate the feasibility of the approach in preparation for open-chest in vivo studies. The results suggest that the LDUV technique can be used to quantify mechanical properties of soft tissues with a plate-like geometry. PMID:21403186

  13. Lamb wave dispersion ultrasound vibrometry (LDUV) method for quantifying mechanical properties of viscoelastic solids.

    PubMed

    Nenadic, Ivan Z; Urban, Matthew W; Mitchell, Scott A; Greenleaf, James F

    2011-04-07

    Diastolic dysfunction is the inability of the left ventricle to supply sufficient stroke volumes under normal physiological conditions and is often accompanied by stiffening of the left-ventricular myocardium. A noninvasive technique capable of quantifying viscoelasticity of the myocardium would be beneficial in clinical settings. Our group has been investigating the use of shear wave dispersion ultrasound vibrometry (SDUV), a noninvasive ultrasound-based method for quantifying viscoelasticity of soft tissues. The primary motive of this study is the design and testing of viscoelastic materials suitable for validation of the Lamb wave dispersion ultrasound vibrometry (LDUV), an SDUV-based technique for measuring viscoelasticity of tissues with plate-like geometry. We report the results of quantifying viscoelasticity of urethane rubber and gelatin samples using LDUV and an embedded sphere method. The LDUV method was used to excite antisymmetric Lamb waves and measure the dispersion in urethane rubber and gelatin plates. An antisymmetric Lamb wave model was fitted to the wave speed dispersion data to estimate elasticity and viscosity of the materials. A finite element model of a viscoelastic plate submerged in water was used to study the appropriateness of the Lamb wave dispersion equations. An embedded sphere method was used as an independent measurement of the viscoelasticity of the urethane rubber and gelatin. The FEM dispersion data were in excellent agreement with the theoretical predictions. Viscoelasticity of the urethane rubber and gelatin obtained using the LDUV and embedded sphere methods agreed within one standard deviation. LDUV studies on excised porcine myocardium sample were performed to investigate the feasibility of the approach in preparation for open-chest in vivo studies. The results suggest that the LDUV technique can be used to quantify the mechanical properties of soft tissues with a plate-like geometry.

  14. Photovoltaic performance models - A report card

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. R.

    1985-01-01

    Models for the analysis of photovoltaic (PV) systems' designs, implementation policies, and economic performance have proliferated while keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled for such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of both their subsystem elements and the systems as a whole. The models fall into three categories in light of their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.

  15. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    EPA Science Inventory

    BackgroundExposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of a...

  16. Quantifying geomorphic change at ephemeral stream restoration sites using a coupled-model approach

    USGS Publications Warehouse

    Norman, Laura M.; Sankey, Joel B.; Dean, David; Caster, Joshua J.; DeLong, Stephen B.; Henderson-DeLong, Whitney; Pelletier, Jon D.

    2017-01-01

    Rock-detention structures are used as restoration treatments to engineer ephemeral stream channels of southeast Arizona, USA, to reduce streamflow velocity, limit erosion, retain sediment, and promote surface-water infiltration. Structures are intended to aggrade incised stream channels, yet little quantified evidence of efficacy is available. The goal of this 3-year study was to characterize the geomorphic impacts of rock-detention structures used as a restoration strategy and develop a methodology to predict the associated changes. We studied reaches of two ephemeral streams with different watershed management histories: one where thousands of loose-rock check dams were installed 30 years prior to our study, and one with structures constructed at the beginning of our study. The methods used included runoff, sediment transport, and geomorphic modelling and repeat terrestrial laser scanner (TLS) surveys to map landscape change. Where discharge data were not available, event-based runoff was estimated using KINEROS2, a one-dimensional kinematic-wave runoff and erosion model. Discharge measurements and estimates were used as input to a two-dimensional unsteady flow-and-sedimentation model (Nays2DH) that combined a gridded flow, transport, and bed and bank simulation with geomorphic change. Through comparison of consecutive DEMs, the potential to substitute uncalibrated models to analyze stream restoration is introduced. We demonstrate a new approach to assess hydraulics and associated patterns of aggradation and degradation resulting from the construction of check-dams and other transverse structures. Notably, we find that stream restoration using rock-detention structures is effective across vastly different timescales.

  17. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

    Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: Mass density of live and dead roots found in the upper inch
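
    An illustrative sketch, not the authors' exact design, of the GSA-plus-random-forest idea: sample the RUSLE factors over wide (here arbitrary) ranges, compute soil loss A = R*K*LS*C*P, and rank factor importance with random-forest feature importances, which account for interactions.

```python
# Illustrative GSA-style ranking of RUSLE factors: sample factors over arbitrary
# wide ranges, compute A = R*K*LS*C*P, and rank with random-forest importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(500, 12000, n),   # R: rainfall-runoff erosivity
    rng.uniform(0.01, 0.06, n),   # K: soil erodibility
    rng.uniform(0.1, 20.0, n),    # LS: topographic factor
    rng.uniform(0.001, 0.5, n),   # C: cover management
    rng.uniform(0.1, 1.0, n),     # P: support practices
])
A = X.prod(axis=1)                # RUSLE soil loss for each sample

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, A)
for name, imp in zip(["R", "K", "LS", "C", "P"], rf.feature_importances_):
    print(f"{name}: {imp:.2f}")
```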

  18. Human Performance Models of Pilot Behavior

    NASA Technical Reports Server (NTRS)

    Foyle, David C.; Hooey, Becky L.; Byrne, Michael D.; Deutsch, Stephen; Lebiere, Christian; Leiden, Ken; Wickens, Christopher D.; Corker, Kevin M.

    2005-01-01

    Five modeling teams from industry and academia were chosen by the NASA Aviation Safety and Security Program to develop human performance models (HPM) of pilots performing taxi operations and runway instrument approaches with and without advanced displays. One representative from each team will serve as a panelist to discuss their team's model architecture, augmentations and advancements to HPMs, and aviation-safety-related lessons learned. Panelists will discuss how modeling results are influenced by a model's architecture and structure, the role of the external environment, specific modeling advances, and future directions and challenges for human performance modeling in aviation.

  19. Summary of photovoltaic system performance models

    NASA Technical Reports Server (NTRS)

    Smith, J. H.; Reiter, L. J.

    1984-01-01

    A detailed overview of photovoltaics (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models are selected which span a representative range of capabilities from generalized first order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.

  20. Quantifying torso deformity in scoliosis

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter O.; Kumar, Anish; Durdle, Nelson G.; Raso, V. James

    2006-03-01

    Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.

  1. [Identification of the authentic quality of Longdanxiegan pill by systematic quantified fingerprint method based on three wavelength fusion chromatogram].

    PubMed

    Sun, Guoxiang; Zhang, Jingxian

    2009-05-01

    The three-wavelength fusion high performance liquid chromatographic fingerprint (TWFFP) of Longdanxiegan pill (LDXGP) was established to identify the quality of LDXGP by the systematic quantified fingerprint method. The chromatographic fingerprints (CFPs) of the 12 batches of LDXGP were determined by reversed-phase high performance liquid chromatography. The multi-wavelength fusion fingerprint technique was applied in processing the fingerprints. The TWFFPs containing 63 co-possessing peaks were obtained when the baicalin peak was chosen as the reference peak. The 12 batches of LDXGP were identified with hierarchical clustering analysis using macro qualitative similarity (S(m)) as the variable. According to the results of the classification, the referential fingerprint (RFP) was synthesized from 10 batches of LDXGP. Taking the RFP as the qualified model, all 12 batches of LDXGP were evaluated by the systematic quantified fingerprint method. Among the 12 batches of LDXGP, 9 batches were completely qualified, the contents of 1 batch were markedly higher, and the quantities and distribution proportions of chemical constituents in 2 batches were not qualified. The systematic quantified fingerprint method based on multi-wavelength fusion fingerprints can effectively identify the authentic quality of traditional Chinese medicine.

  2. Model Performance Evaluation and Scenario Analysis ...

    EPA Pesticide Factsheets

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics only provide useful information about the overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components and the reconstruction back to time series provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is due to the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool
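
    Two of the goodness-of-fit measures named above are straightforward to compute; the sketch below shows Nash-Sutcliffe efficiency and the coefficient of determination for a pair of invented observed/simulated series, and is not the MPESA implementation itself.

```python
# Nash-Sutcliffe efficiency and coefficient of determination for paired
# observed/simulated series (invented example data; not the MPESA code).
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def coefficient_of_determination(obs, sim):
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = [3.1, 4.5, 2.2, 6.0, 5.1]
sim = [2.8, 4.9, 2.5, 5.4, 5.3]
print(round(nash_sutcliffe(obs, sim), 3), round(coefficient_of_determination(obs, sim), 3))
```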

  3. The use of QLF to quantify in vitro whitening in a product testing model.

    PubMed

    Pretty, I A; Edgar, W M; Higham, S M

    2001-11-24

    Professional and consumer interest in whitening products continues to increase against a background of both increased oral health awareness and demand for cosmetic procedures. In the current legal climate, few dentists are providing 'in-office' whitening treatments, and thus many patients turn to home-use products. The most common of these are the whitening toothpastes. Researchers are keen to quantify the effectiveness of such products through clinically relevant trials. Previous studies examining whitening products have employed a variety of stained substrates to monitor stain removal. This study aimed to quantify the removal of stain from human enamel using a new device, quantitative light-induced fluorescence (QLF). The experimental design follows that of a product-testing model. A total of 11 previously extracted molar teeth were coated with transparent nail varnish leaving an exposed window of enamel. The sound, exposed enamel was subject to a staining regime of human saliva, chlorhexidine and tea. Each of the eleven teeth was subjected to serial exposures of a positive control (Bocasan), a negative control (water) and a test product (Yotuel toothpaste). Following each two-minute exposure QLF images of the teeth were taken (a total of 5 applications). Following completion of one test solution, the teeth were cleaned, re-stained and the procedure repeated with the next solution. QLF images were stored on a PC and analysed by a blinded single examiner. The deltaQ value at 5% threshold was reported. ANOVA and paired t-tests were used to analyse the data. The study confirmed the ability of QLF to longitudinally quantify stain reduction from human enamel. The reliability of the technique in relation to positive and negative test controls was proven. The positive control had a significantly (alpha = 0.05) higher stain removal efficacy than water (p = 0.023) and Yotuel (p = 0.046). Yotuel was more effective than water (p = 0.023). The research community, the

  4. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  5. Quantifying Pilot Visual Attention in Low Visibility Terminal Operations

    NASA Technical Reports Server (NTRS)

    Ellis, Kyle K.; Arthur, J. J.; Latorella, Kara A.; Kramer, Lynda J.; Shelton, Kevin J.; Norman, Robert M.; Prinzel, Lawrence J.

    2012-01-01

    Quantifying pilot visual behavior allows researchers to determine not only where a pilot is looking and when, but also to track specific behaviors when these data are coupled with flight technical performance. Remote eye tracking systems have been integrated into simulators at NASA Langley with effectively no impact on the pilot environment. This paper discusses the installation and use of a remote eye tracking system. The data collection techniques from a complex human-in-the-loop (HITL) research experiment are discussed, especially the data reduction algorithms and logic to transform raw eye tracking data into quantified visual behavior metrics, and the analysis methods to interpret visual behavior. The findings suggest superior performance for Head-Up Display (HUD) and improved attentional behavior for Head-Down Display (HDD) implementations of Synthetic Vision System (SVS) technologies for low visibility terminal area operations. Keywords: eye tracking, flight deck, NextGen, human machine interface, aviation
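
    One simple data-reduction step of the kind described above is converting per-sample gaze labels into dwell-time share per area of interest (AOI); the column name, AOI labels, and fixed 60 Hz sample rate below are assumptions, not the study's actual reduction algorithms.

```python
# Dwell-time share per area of interest (AOI) from per-sample gaze labels.
# The column name, AOI labels, and 60 Hz sample rate are assumptions.
import pandas as pd

gaze = pd.DataFrame({
    "aoi": ["HUD", "HUD", "OTW", "HDD", "HUD", "OTW", "HDD", "HDD"],
})
dt = 1.0 / 60.0                              # seconds per gaze sample

dwell_seconds = gaze["aoi"].value_counts() * dt
print((100 * dwell_seconds / dwell_seconds.sum()).round(1))  # % dwell per AOI
```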

  6. REVIEW OF MECHANISTIC UNDERSTANDING AND MODELING AND UNCERTAINTY ANALYSIS METHODS FOR PREDICTING CEMENTITIOUS BARRIER PERFORMANCE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langton, C.; Kosson, D.

    2009-11-30

    Cementitious barriers for nuclear applications are one of the primary controls for preventing or limiting radionuclide release into the environment. At the present time, performance and risk assessments do not fully incorporate the effectiveness of engineered barriers because the processes that influence performance are coupled and complicated. A better understanding of the behavior of cementitious barriers is necessary to evaluate and improve the design of materials and structures used for radioactive waste containment, life extension of current nuclear facilities, and design of future nuclear facilities, including those needed for nuclear fuel storage and processing, nuclear power production, and waste management. The focus of the Cementitious Barriers Partnership (CBP) literature review is to document the current level of knowledge with respect to: (1) mechanisms and processes that directly influence the performance of cementitious materials; (2) methodologies for modeling the performance of these mechanisms and processes; and (3) approaches to addressing and quantifying uncertainties associated with performance predictions. This will serve as an important reference document for the professional community responsible for the design and performance assessment of cementitious materials in nuclear applications. This review also provides a multi-disciplinary foundation for the identification, research, development, and demonstration of improvements in conceptual understanding, measurements, and performance modeling that would lead to significant reductions in the uncertainties and improved confidence in estimating the long-term performance of cementitious materials in nuclear applications. This report identifies: (1) technology gaps that may be filled by the CBP project and also (2) information and computational methods that are currently being applied in related fields but have not yet been incorporated into performance assessments of cementitious barriers. The

  7. Three-Dimensional, Transgenic Cell Models to Quantify Space Genotoxic Effects

    NASA Technical Reports Server (NTRS)

    Gonda, S. R.; Sognier, M. A.; Wu, H.; Pingerelli, P. L.; Glickman, B. W.; Dawson, David L. (Technical Monitor)

    1999-01-01

    The space environment contains radiation and chemical agents known to be mutagenic and carcinogenic to humans. Additionally, microgravity is a complicating factor that may modify or synergize induced genotoxic effects. Most in vitro models fail to use human cells (making risk extrapolation to humans more difficult), overlook the dynamic effect of tissue intercellular interactions on genotoxic damage, and lack the sensitivity required to measure low-dose effects. Currently, a need exists for a model test system that simulates cellular interactions present in tissue, and can be used to quantify genotoxic damage induced by low levels of radiation and chemicals and to extrapolate assessed risk to humans. A state-of-the-art, three-dimensional, multicellular tissue equivalent cell culture model will be presented. It consists of mammalian cells genetically engineered to contain multiple copies of defined target genes for genotoxic assessment. NASA-designed bioreactors were used to coculture mammalian cells into spheroids. The cells used were human mammary epithelial cells (H184135) and Stratagene's (Austin, Texas) Big Blue(TM) Rat 2 lambda fibroblasts. The fibroblasts were genetically engineered to contain a high-density target gene for mutagenesis (60 copies of lacI/LacZ per cell). Tissue equivalent spheroids were routinely produced by inoculation of 2 to 7 X 10(exp 5) fibroblasts with Cytodex 3 beads (150 micrometers in diameter), at a 20:1 cell:bead ratio, into 50-ml HARV bioreactors (Synthecon, Inc.). Fibroblasts were cultured for 5 days, an equivalent number of epithelial cells added, and the fibroblast/epithelial cell coculture continued for 21 days. Three-dimensional spheroids with diameters ranging from 400 to 600 micrometers were obtained. Histological and immunohistochemical characterization revealed i) both cell types present in the spheroids, with fibroblasts located primarily in the center, surrounded by epithelial cells; ii) synthesis of extracellular matrix

  8. A novel approach to quantify cybersecurity for electric power systems

    NASA Astrophysics Data System (ADS)

    Kaster, Paul R., Jr.

    Electric Power grid cybersecurity is a topic gaining increased attention in academia, industry, and government circles, yet a method of quantifying and evaluating a system's security is not yet commonly accepted. In order to be useful, a quantification scheme must be able to accurately reflect the degree to which a system is secure, simply determine the level of security in a system using real-world values, model a wide variety of attacker capabilities, be useful for planning and evaluation, allow a system owner to publish information without compromising the security of the system, and compare relative levels of security between systems. Published attempts at quantifying cybersecurity fail at one or more of these criteria. This document proposes a new method of quantifying cybersecurity that meets those objectives. This dissertation evaluates the current state of cybersecurity research, discusses the criteria mentioned previously, proposes a new quantification scheme, presents an innovative method of modeling cyber attacks, demonstrates that the proposed quantification methodology meets the evaluation criteria, and proposes a line of research for future efforts.

  9. Quantifying the Shape of Aging

    PubMed Central

    Wrycza, Tomasz F.; Missov, Trifon I.; Baudisch, Annette

    2015-01-01

    In Biodemography, aging is typically measured and compared based on aging rates. We argue that this approach may be misleading, because it confounds the time aspect with the mere change aspect of aging. To disentangle these aspects, here we utilize a time-standardized framework and, instead of aging rates, suggest the shape of aging as a novel and valuable alternative concept for comparative aging research. The concept of shape captures the direction and degree of change in the force of mortality over age, which—on a demographic level—reflects aging. We 1) provide a list of shape properties that are desirable from a theoretical perspective, 2) suggest several demographically meaningful and non-parametric candidate measures to quantify shape, and 3) evaluate performance of these measures based on the list of properties as well as based on an illustrative analysis of a simple dataset. The shape measures suggested here aim to provide a general means to classify aging patterns independent of any particular mortality model and independent of any species-specific time-scale. Thereby they support systematic comparative aging research across different species or between populations of the same species under different conditions and constitute an extension of the toolbox available to comparative research in Biodemography. PMID:25803427

  10. Experimental modeling of the effect of hurricane wind forces on driving behavior and vehicle performance.

    PubMed

    Rodriguez, Jose M; Codjoe, Julius; Osman, Osama; Ishak, Sherif; Wolshon, Brian

    2015-01-01

    While traffic planning is important for developing a hurricane evacuation plan, vehicle performance on the roads during extreme weather conditions is critical to the success of the planning process. This novel study investigates the effect of gusty hurricane wind forces on the driving behavior and vehicle performance. The study explores how the parameters of a driving simulator could be modified to reproduce wind loadings experienced by three vehicle types (passenger car, ambulance, and bus) during gusty hurricane winds, through manipulation of appropriate software. Thirty participants were then tested on the modified driving simulator under five wind conditions (ranging from normal to hurricane category 4). The driving performance measures used were heading error and lateral displacement. The results showed that higher wind forces resulted in more varied and greater heading error and lateral displacement. The ambulance had the greatest heading errors and lateral displacements, which were attributed to its large lateral surface area and light weight. Two mathematical models were developed to estimate the heading error and lateral displacements for each of the vehicle types for a given change in lateral wind force. Through a questionnaire, participants felt the different characteristics while driving each vehicle type. The findings of this study demonstrate the valuable use of a driving simulator to model the behavior of different vehicle types and to develop mathematical models to estimate and quantify driving behavior and vehicle performance under hurricane wind conditions.

  11. Review and evaluation of performance measures for survival prediction models in external validation settings.

    PubMed

    Rahman, M Shafiqur; Ambler, Gareth; Choodari-Oskooei, Babak; Omar, Rumana Z

    2017-04-18

    When developing a prediction model for survival data, it is essential to validate its performance in external validation settings using appropriate performance measures. Although a number of such measures have been proposed, there is only limited guidance regarding their use in the context of model validation. This paper reviewed and evaluated a wide range of performance measures to provide some guidelines for their use in practice. An extensive simulation study based on two clinical datasets was conducted to investigate the performance of the measures in external validation settings. Measures were selected from categories that assess the overall performance, discrimination, and calibration of a survival prediction model. Some of these have been modified to allow their use with validation data, and a case study is provided to describe how these measures can be estimated in practice. The measures were evaluated with respect to their robustness to censoring and ease of interpretation. All measures are implemented, or are straightforward to implement, in statistical software. Most of the performance measures were reasonably robust to moderate levels of censoring. One exception was Harrell's concordance measure, which tended to increase as censoring increased. We recommend that Uno's concordance measure be used to quantify concordance when there are moderate levels of censoring. Alternatively, Gönen and Heller's measure could be considered, especially if censoring is very high, but we suggest that the prediction model is re-calibrated first. We also recommend that Royston's D is routinely reported to assess discrimination since it has an appealing interpretation. The calibration slope is useful for both internal and external validation settings, and we recommend reporting it routinely. Our recommendation would be to use any of the predictive accuracy measures and provide the corresponding predictive accuracy curves. In addition, we recommend investigating the characteristics
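
    A small sketch, assuming the lifelines package and invented validation data, of estimating Harrell's concordance index; censoring-robust alternatives such as Uno's concordance are available in other packages (e.g., scikit-survival) and, per the recommendation above, may be preferable under moderate censoring.

```python
# Harrell's concordance index on invented validation data, via lifelines.
from lifelines.utils import concordance_index

durations = [5.0, 8.2, 3.1, 12.4, 7.7]            # follow-up times
events = [1, 0, 1, 1, 0]                          # 1 = event observed, 0 = censored
predicted_survival = [6.0, 9.5, 2.0, 10.0, 8.8]   # higher = longer predicted survival

print(concordance_index(durations, predicted_survival, events))
```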

  12. Quantifying Bell nonlocality with the trace distance

    NASA Astrophysics Data System (ADS)

    Brito, S. G. A.; Amaral, B.; Chaves, R.

    2018-02-01

    Measurements performed on distant parts of an entangled quantum state can generate correlations incompatible with classical theories respecting the assumption of local causality. This is the phenomenon known as quantum nonlocality that, apart from its fundamental role, can also be put to practical use in applications such as cryptography and distributed computing. Clearly, developing ways of quantifying nonlocality is an important primitive in this scenario. Here, we propose to quantify the nonlocality of a given probability distribution via its trace distance to the set of classical correlations. We show that this measure is a monotone under the free operations of a resource theory and, furthermore, that it can be computed efficiently with a linear program. We put our framework to use in a variety of relevant Bell scenarios also comparing the trace distance to other standard measures in the literature.
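
    A sketch under assumptions (CHSH scenario, trace distance taken as half the L1 distance) of the linear-programming idea described above: the distance from a behavior p(ab|xy) to the local set, represented as the convex hull of the 16 local deterministic strategies. This illustrates the approach and is not the authors' exact formulation.

```python
# L1 distance from a CHSH behavior p(ab|xy) to the convex hull of the 16 local
# deterministic strategies, solved as a linear program (illustration only).
import itertools
import numpy as np
from scipy.optimize import linprog

# Deterministic strategies: a = fa(x), b = fb(y) with fa, fb in {0, 1, x, 1-x}.
funcs = [lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x]
rows = list(itertools.product([0, 1], repeat=4))          # (a, b, x, y) ordering
index = {abxy: i for i, abxy in enumerate(rows)}
D = np.zeros((16, 16))
for k, (fa, fb) in enumerate(itertools.product(funcs, repeat=2)):
    for x, y in itertools.product([0, 1], repeat=2):
        D[index[(fa(x), fb(y), x, y)], k] = 1.0

# Quantum behavior saturating the Tsirelson bound: p = (1 + (-1)^(a+b+xy)/sqrt(2)) / 4
p = np.array([(1 + (-1) ** (a + b + x * y) / np.sqrt(2)) / 4 for a, b, x, y in rows])

# Minimise sum(t) over weights q (on the simplex) and slacks t with |p - D q| <= t.
c = np.concatenate([np.zeros(16), np.ones(16)])
A_ub = np.block([[D, -np.eye(16)], [-D, -np.eye(16)]])
b_ub = np.concatenate([p, -p])
A_eq = np.concatenate([np.ones(16), np.zeros(16)])[None, :]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * 32)
print("distance to the local set (L1/2):", res.fun / 2)
```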

  13. Diesel Emissions Quantifier (DEQ)

    EPA Pesticide Factsheets

    The Diesel Emissions Quantifier (Quantifier) is an interactive tool to estimate emission reductions and cost effectiveness. Publications: EPA-420-F-13-008a (420f13008a), EPA-420-B-10-035 (420b10023), EPA-420-B-10-034 (420b10034)

  14. Quantifying effects of humans and climate on groundwater resources of Hawaii through sharp-interface modeling

    NASA Astrophysics Data System (ADS)

    Rotzoll, K.; Izuka, S. K.; Nishikawa, T.; Fienen, M. N.; El-Kadi, A. I.

    2016-12-01

    Some of the volcanic-rock aquifers of the islands of Hawaii are substantially developed, leading to concerns related to the effects of groundwater withdrawals on saltwater intrusion and stream base-flow reduction. A numerical modeling analysis using recent available information (e.g., recharge, withdrawals, hydrogeologic framework, and conceptual models of groundwater flow) advances current understanding of groundwater flow and provides insight into the effects of human activity and climate change on Hawaii's water resources. Three island-wide groundwater-flow models (Kauai, Oahu, and Maui) were constructed using MODFLOW 2005 coupled with the Seawater-Intrusion Package (SWI2), which simulates the transition between saltwater and freshwater in the aquifer as a sharp interface. This approach allowed coarse vertical discretization (maximum of two layers) without ignoring the freshwater-saltwater system at the regional scale. Model construction (FloPy3), parameter estimation (PEST), and analysis of results were streamlined using Python scripts. Model simulations included pre-development (1870) and recent (average of 2001-10) scenarios for each island. Additionally, scenarios for future withdrawals and climate change were simulated for Oahu. We present our streamlined approach and results showing estimated effects of human activity on the groundwater resource by quantifying decline in water levels, rise of the freshwater-saltwater interface, and reduction in stream base flow. Water-resource managers can use this information to evaluate consequences of groundwater development that can constrain future groundwater availability.

  15. Fourier transform infrared spectroscopy to quantify collagen and elastin in an in vitro model of extracellular matrix degradation in aorta.

    PubMed

    Cheheltani, Rabee; McGoverin, Cushla M; Rao, Jayashree; Vorp, David A; Kiani, Mohammad F; Pleshko, Nancy

    2014-06-21

    Extracellular matrix (ECM) is a key component and regulator of many biological tissues including aorta. Several aortic pathologies are associated with significant changes in the composition of the matrix, especially in the content, quality and type of aortic structural proteins, collagen and elastin. The purpose of this study was to develop an infrared spectroscopic methodology that is comparable to biochemical assays to quantify collagen and elastin in aorta. Enzymatically degraded porcine aorta samples were used as a model of ECM degradation in abdominal aortic aneurysm (AAA). After enzymatic treatment, Fourier transform infrared (FTIR) spectra of the aortic tissue were acquired by an infrared fiber optic probe (IFOP) and FTIR imaging spectroscopy (FT-IRIS). Collagen and elastin content were quantified biochemically and partial least squares (PLS) models were developed to predict collagen and elastin content in aorta based on FTIR spectra. PLS models developed from FT-IRIS spectra were able to predict elastin and collagen content of the samples with strong correlations (RMSE of validation = 8.4% and 11.1% of the range respectively), and IFOP spectra were successfully used to predict elastin content (RMSE = 11.3% of the range). The PLS regression coefficients from the FT-IRIS models were used to map collagen and elastin in tissue sections of degraded porcine aortic tissue as well as a human AAA biopsy tissue, creating a similar map of each component compared to histology. These results support further application of FTIR spectroscopic techniques for evaluation of AAA tissues.
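
    A minimal sketch of the PLS step, assuming scikit-learn and entirely synthetic spectra and collagen values; the authors' preprocessing, component selection, and cross-validation design are not reproduced here.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_wavenumbers = 60, 400
    spectra = rng.normal(size=(n_samples, n_wavenumbers))      # stand-in FTIR spectra
    collagen = spectra[:, 50:60].sum(axis=1) + rng.normal(scale=0.5, size=n_samples)

    X_train, X_test, y_train, y_test = train_test_split(spectra, collagen, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_train, y_train)
    pred = pls.predict(X_test).ravel()

    rmse = mean_squared_error(y_test, pred) ** 0.5
    print(f"RMSE = {rmse:.2f} ({100 * rmse / (y_test.max() - y_test.min()):.1f}% of range)")
    # pls.coef_ holds the regression coefficients, analogous to those used for mapping
    ```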

  16. Fourier Transform Infrared Spectroscopy to Quantify Collagen and Elastin in an In Vitro Model of Extracellular Matrix Degradation in Aorta

    PubMed Central

    Cheheltani, Rabee; McGoverin, Cushla M.; Rao, Jayashree; Vorp, David A.; Kiani, Mohammad F.; Pleshko, N.

    2014-01-01

    Extracellular matrix (ECM) is a key component and regulator of many biological tissues including aorta. Several aortic pathologies are associated with significant changes in the composition of the matrix, especially in the content, quality and type of aortic structural proteins, collagen and elastin. The purpose of this study was to develop an infrared spectroscopic methodology that is comparable to biochemical assays to quantify collagen and elastin in aorta. Enzymatically degraded porcine aorta samples were used as a model of ECM degradation in abdominal aortic aneurysm (AAA). After enzymatic treatment, Fourier transform infrared (FTIR) spectra of the aortic tissue were acquired by an infrared fiber optic probe (IFOP) and FTIR imaging spectroscopy (FT-IRIS). Collagen and elastin content were quantified biochemically and partial least squares (PLS) models were developed to predict collagen and elastin content in aorta based on FTIR spectra. PLS models developed from FT-IRIS spectra were able to predict elastin and collagen content of the samples with strong correlations (RMSE of validation = 8.4% and 11.1% of the range respectively), and IFOP spectra were successfully used to predict elastin content (RMSE = 11.3% of the range). The PLS regression coefficients from the FT-IRIS models were used to map collagen and elastin in tissue sections of degraded porcine aortic tissue as well as a human AAA biopsy tissue, creating a similar map of each component compared to histology. These results support further application of FTIR spectroscopic techniques for evaluation of AAA tissues. PMID:24761431

  17. Quantifying unpredictability: A multiple-model approach based on satellite imagery data from Mediterranean ponds

    PubMed Central

    García-Roger, Eduardo Moisés; Franch, Belen; Carmona, María José; Serra, Manuel

    2017-01-01

    Fluctuations in environmental parameters are increasingly being recognized as essential features of any habitat. The quantification of whether environmental fluctuations are prevalently predictable or unpredictable is remarkably relevant to understanding the evolutionary responses of organisms. However, when characterizing the relevant features of natural habitats, ecologists typically face two problems: (1) gathering long-term data and (2) handling the hard-won data. This paper takes advantage of the free access to long-term recordings of remote sensing data (27 years, Landsat TM/ETM+) to assess a set of environmental models for estimating environmental predictability. The case study included 20 Mediterranean saline ponds and lakes, and the focal variable was the water-surface area. This study first aimed to produce a method for accurately estimating the water-surface area from satellite images. Saline ponds can develop salt-crusted areas that make it difficult to distinguish between soil and water. This challenge was addressed using a novel pipeline that combines band ratio water indices and the short near-infrared band as a salt filter. The study then extracted the predictable and unpredictable components of variation in the water-surface area. Two different approaches, each showing variations in the parameters, were used to obtain the stochastic variation around a regular pattern with the objective of dissecting the effect of assumptions on predictability estimations. The first approach, which is based on Colwell’s predictability metrics, transforms the focal variable into a nominal one. The resulting discrete categories define the relevant variations in the water-surface area. In the second approach, we introduced generalized additive model (GAM) fitting as a new metric for quantifying predictability. Both approaches produced a wide range of predictability for the studied ponds. Some model assumptions, which are considered very different a priori, had minor
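
    The first approach can be illustrated with Colwell's (1974) decomposition of predictability into constancy and contingency, computed from a contingency table of discretized states by within-year time step; the table below is synthetic and the binning is an assumption.

    ```python
    import numpy as np

    def colwell_metrics(counts):
        """Colwell's (1974) predictability metrics from a contingency table.

        counts[i, j] = number of observations in state i (e.g., a binned
        water-surface-area class) during within-year time step j (e.g., month).
        Returns (P, C, M) with predictability P = constancy C + contingency M.
        """
        N = np.asarray(counts, dtype=float)
        Z = N.sum()
        col_totals, row_totals = N.sum(axis=0), N.sum(axis=1)
        s = N.shape[0]                       # number of states

        def H(v):                            # Shannon entropy of a vector of counts
            p = v[v > 0] / Z
            return -(p * np.log(p)).sum()

        HX, HY, HXY = H(col_totals), H(row_totals), H(N.ravel())
        C = 1 - HY / np.log(s)
        M = (HX + HY - HXY) / np.log(s)
        return C + M, C, M

    table = np.random.default_rng(1).integers(0, 10, size=(3, 12))  # 3 classes x 12 months
    print(colwell_metrics(table))
    ```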

  18. Performability modeling with continuous accomplishment sets

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.

    1979-01-01

    A general modeling framework that permits the definition, formulation, and evaluation of performability is described. It is shown that performability relates directly to system effectiveness, and is a proper generalization of both performance and reliability. A hierarchical modeling scheme is used to formulate the capability function used to evaluate performability. The case in which performance variables take values in a continuous accomplishment set is treated explicitly.

  19. Rapid performance modeling and parameter regression of geodynamic models

    NASA Astrophysics Data System (ADS)

    Brown, J.; Duplyakin, D.

    2016-12-01

    Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and scientifically-relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
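
    A hedged sketch of the proposed idea, variance-driven active learning with a Gaussian-process surrogate, using scikit-learn; the one-dimensional runtime function and parameter range are stand-ins, not the authors' setup.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def runtime(x):                                  # stand-in for an expensive model run
        return np.sin(3 * x) + 0.5 * x

    candidates = np.linspace(0, 3, 200)[:, None]     # e.g., a normalized machine parameter
    X = np.array([[0.2], [1.5], [2.8]])              # initial experiments
    y = runtime(X).ravel()

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(5):
        gp.fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std)]          # most uncertain configuration
        X = np.vstack([X, [x_next]])                 # run it and add the observation
        y = np.append(y, runtime(x_next))
    print("sampled configurations:", X.ravel().round(2))
    ```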

  20. Performance and Architecture Lab Modeling Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-06-19

    Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program

  1. Quantifying temporal trends in fisheries abundance using Bayesian dynamic linear models: A case study of riverine Smallmouth Bass populations

    USGS Publications Warehouse

    Schall, Megan K.; Blazer, Vicki S.; Lorantas, Robert M.; Smith, Geoffrey; Mullican, John E.; Keplinger, Brandon J.; Wagner, Tyler

    2018-01-01

    Detecting temporal changes in fish abundance is an essential component of fisheries management. Because of the need to understand short‐term and nonlinear changes in fish abundance, traditional linear models may not provide adequate information for management decisions. This study highlights the utility of Bayesian dynamic linear models (DLMs) as a tool for quantifying temporal dynamics in fish abundance. To achieve this goal, we quantified temporal trends of Smallmouth Bass Micropterus dolomieu catch per effort (CPE) from rivers in the mid‐Atlantic states, and we calculated annual probabilities of decline from the posterior distributions of annual rates of change in CPE. We were interested in annual declines because of recent concerns about fish health in portions of the study area. In general, periods of decline were greatest within the Susquehanna River basin, Pennsylvania. The declines in CPE began in the late 1990s—prior to observations of fish health problems—and began to stabilize toward the end of the time series (2011). In contrast, many of the other rivers investigated did not have the same magnitude or duration of decline in CPE. Bayesian DLMs provide information about annual changes in abundance that can inform management and are easily communicated with managers and stakeholders.
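
    A minimal local-level dynamic linear model, filtered with a Kalman recursion, illustrates how a smoothed trend (and hence annual changes and probabilities of decline) can be extracted from a CPE series; the authors' Bayesian DLM is richer, and the variances and toy series below are assumptions.

    ```python
    import numpy as np

    def local_level_filter(y, obs_var=1.0, level_var=0.1):
        """Kalman filter for a local-level DLM: y_t = mu_t + v_t, mu_t = mu_{t-1} + w_t."""
        level, var = np.zeros(len(y)), np.zeros(len(y))
        m, P = y[0], obs_var                 # crude diffuse initialization
        for t in range(len(y)):
            P = P + level_var                # predict the level forward one step
            K = P / (P + obs_var)            # Kalman gain
            m = m + K * (y[t] - m)           # update with the observed CPE
            P = (1 - K) * P
            level[t], var[t] = m, P
        return level, var

    rng = np.random.default_rng(2)
    cpe = np.concatenate([np.full(10, 20.0), np.linspace(20, 12, 10), np.full(10, 12.0)])
    obs = cpe + rng.normal(scale=1.0, size=cpe.size)     # toy annual CPE record
    level, _ = local_level_filter(obs)
    annual_change = np.diff(level)
    print("years with estimated declines:", np.where(annual_change < 0)[0] + 1)
    ```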

  2. Biomechanical study using fuzzy systems to quantify collagen fiber recruitment and predict creep of the rabbit medial collateral ligament.

    PubMed

    Ali, A F; Taha, M M Reda; Thornton, G M; Shrive, N G; Frank, C B

    2005-06-01

    In normal daily activities, ligaments are subjected to repeated loads, and respond to this environment with creep and fatigue. While progressive recruitment of the collagen fibers is responsible for the toe region of the ligament stress-strain curve, recruitment also represents an elegant feature to help ligaments resist creep. The use of artificial intelligence techniques in computational modeling allows a large number of parameters and their interactions to be incorporated beyond the capacity of classical mathematical models. The objective of the work described here is to demonstrate a tool for modeling creep of the rabbit medial collateral ligament that can incorporate the different parameters while quantifying the effect of collagen fiber recruitment during creep. An intelligent algorithm was developed to predict ligament creep. The modeling is performed in two steps: first, the ill-defined fiber recruitment is quantified using fuzzy logic. Second, this fiber recruitment is incorporated along with creep stress and creep time to model creep using an adaptive neurofuzzy inference system. The model was trained and tested using an experimental database including creep tests and crimp image analysis. The model confirms that quantification of fiber recruitment is important for accurate prediction of ligament creep behavior at physiological loads.

  3. Quantifying and visualizing site performance in clinical trials.

    PubMed

    Yang, Eric; O'Donovan, Christopher; Phillips, JodiLyn; Atkinson, Leone; Ghosh, Krishnendu; Agrafiotis, Dimitris K

    2018-03-01

    One of the keys to running a successful clinical trial is the selection of high quality clinical sites, i.e., sites that are able to enroll patients quickly, engage them on an ongoing basis to prevent drop-out, and execute the trial in strict accordance to the clinical protocol. Intuitively, the historical track record of a site is one of the strongest predictors of its future performance; however, issues such as data availability and wide differences in protocol complexity can complicate interpretation. Here, we demonstrate how operational data derived from central laboratory services can provide key insights into the performance of clinical sites and help guide operational planning and site selection for new clinical trials. Our methodology uses the metadata associated with laboratory kit shipments to clinical sites (such as trial and anonymized patient identifiers, investigator names and addresses, sample collection and shipment dates, etc.) to reconstruct the complete schedule of patient visits and derive insights about the operational performance of those sites, including screening, enrollment, and drop-out rates and other quality indicators. This information can be displayed in its raw form or normalized to enable direct comparison of site performance across studies of varied design and complexity. Leveraging Covance's market leadership in central laboratory services, we have assembled a database of operational metrics that spans more than 14,000 protocols, 1400 indications, 230,000 unique investigators, and 23 million patient visits and represents a significant fraction of all clinical trials run globally in the last few years. By analyzing this historical data, we are able to assess and compare the performance of clinical investigators across a wide range of therapeutic areas and study designs. This information can be aggregated across trials and geographies to gain further insights into country and regional trends, sometimes with surprising results. The

  4. Quantifying the physical demands of a musical performance and their effects on performance quality.

    PubMed

    Drinkwater, Eric J; Klopper, Christopher J

    2010-06-01

    This study investigated the effects of fatigue on performance quality induced by a prolonged musical performance. Ten participants prepared 10 min of repertoire for their chosen wind instrument that they played three times consecutively. Prior to the performance and within short breaks between performances, researchers collected heart rate, respiratory rate, blood pressure, blood lactate concentration, rating of perceived exertion (RPE), and rating of anxiety. All performances were audio recorded and later analysed for performance errors. Reliability in assessing performance errors was assessed by typical error of measure (TEM) of 15 repeat performances. Results indicate all markers of physical stress significantly increased by a moderate to large amount (4.6 to 62.2%; d = 0.50 to 1.54) once the performance began, while heart rate, respirations, and RPE continued to rise by a small to large amount (4.9 to 23.5%; d = 0.28 to 0.93) with each performance. Observed changes in performance between performances were well in excess of the TEM of 7.4%. There was a significant small (21%, d = 0.43) decrease in errors after the first performance; after the second performance, there was a significant large increase (70.4%, d = 1.14). The initial increase in physiological stress with corresponding decrease in errors after the first performance likely indicates "warming up," while the continued increase in markers of physical stress with dramatic decrement in performance quality likely indicates fatigue. Musicians may consider the relevance of physical fitness to maintaining performance quality over the duration of a performance.

  5. Summary of the key features of seven biomathematical models of human fatigue and performance.

    PubMed

    Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F

    2004-03-01

    Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers

  6. Summary of the key features of seven biomathematical models of human fatigue and performance

    NASA Technical Reports Server (NTRS)

    Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.

    2004-01-01

    BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers

  7. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  8. The Systematics of Strong Lens Modeling Quantified: The Effects of Constraint Selection and Redshift Information on Magnification, Mass, and Multiple Image Predictability

    NASA Astrophysics Data System (ADS)

    Johnson, Traci L.; Sharon, Keren

    2016-11-01

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  9. Quantifying agricultural drought impacts using soil moisture model and drought indices in South Korea

    NASA Astrophysics Data System (ADS)

    Nam, W. H.; Bang, N.; Hong, E. M.; Pachepsky, Y. A.; Han, K. H.; Cho, H.; Ok, J.; Hong, S. Y.

    2017-12-01

    Agricultural drought is defined as a combination of abnormal deficiency of precipitation, increased crop evapotranspiration demands from high-temperature anomalies, and soil moisture deficits during the crop growth period. Soil moisture variability and its spatio-temporal trends are key components of the hydrological balance, which determines crop production and drought stress in an agricultural context. In 2017, South Korea experienced an extreme drought event, the worst in one hundred years according to the South Korean government. The objective of this study is to quantify agricultural drought impacts using observed and simulated soil moisture, and various drought indices. A soil water balance model is used to simulate the soil water content in the crop root zone under rain-fed (no irrigation) conditions. The model represents the physical processes of effective rainfall, infiltration, redistribution in the soil water zone, and plant water uptake in the form of actual crop evapotranspiration. Three widely used drought indices, including the Standardized Precipitation Index (SPI), the Standardized Precipitation Evapotranspiration Index (SPEI), and the Self-Calibrated Palmer Drought Severity Index (SC-PDSI), are compared with the observed and simulated soil moisture in the context of agricultural drought impacts. These results demonstrated that the soil moisture model could be an effective tool to provide improved spatial and temporal drought monitoring for drought policy.
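
    As an illustration of one of the indices, the sketch below computes a simplified SPI by fitting a gamma distribution to aggregated precipitation and mapping its CDF to standard-normal quantiles; the accumulation window, zero-precipitation handling, and synthetic data are simplifications.

    ```python
    import numpy as np
    from scipy import stats

    def spi(precip):
        """Simplified Standardized Precipitation Index for an aggregated series."""
        precip = np.asarray(precip, dtype=float)
        shape, loc, scale = stats.gamma.fit(precip[precip > 0], floc=0)
        p_zero = np.mean(precip == 0)                       # probability mass at zero precipitation
        cdf = p_zero + (1 - p_zero) * stats.gamma.cdf(precip, shape, loc=loc, scale=scale)
        return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

    rng = np.random.default_rng(3)
    monthly_precip = rng.gamma(shape=2.0, scale=40.0, size=120)   # 10 years, mm/month
    print(spi(monthly_precip)[:12].round(2))   # SPI < -1 indicates moderate drought or worse
    ```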

  10. Nonparametric Stochastic Model for Uncertainty Quantification of Short-term Wind Speed Forecasts

    NASA Astrophysics Data System (ADS)

    AL-Shehhi, A. M.; Chaouch, M.; Ouarda, T.

    2014-12-01

    Wind energy is increasing in importance as a renewable energy source due to its potential role in reducing carbon emissions. It is a safe, clean, and inexhaustible source of energy. The amount of wind energy generated by wind turbines is closely related to the wind speed. Wind speed forecasting plays a vital role in the wind energy sector in terms of wind turbine optimal operation, wind energy dispatch and scheduling, efficient energy harvesting, etc. It is also considered during planning, design, and assessment of any proposed wind project. Therefore, accurate prediction of wind speed is particularly important and plays a significant role in the wind industry. Many methods have been proposed in the literature for short-term wind speed forecasting. These methods are usually based on modeling historical fixed time intervals of the wind speed data and using it for future prediction. The methods mainly include statistical models such as ARMA and ARIMA, physical models such as numerical weather prediction, and artificial intelligence techniques such as support vector machines and neural networks. In this paper, we are interested in estimating hourly wind speed measures in the United Arab Emirates (UAE). More precisely, we predict hourly wind speed using a nonparametric kernel estimation of the regression and volatility functions pertaining to a nonlinear autoregressive model with an ARCH component, which includes the unknown nonlinear regression and volatility functions already discussed in the literature. The unknown nonlinear regression function describes the dependence between the value of the wind speed at time t and its historical values at times t-1, t-2, …, t-d. This function plays a key role in predicting the hourly wind speed process. The volatility function, i.e., the conditional variance given the past, measures the risk associated with this prediction. Since the regression and the volatility functions are supposed to be unknown, they are estimated using
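
    A minimal sketch of the nonparametric step for a single lag: Nadaraya-Watson kernel estimates of the regression function and of the conditional variance (volatility) from squared residuals. The bandwidths and the toy wind-speed series are assumptions; the paper's estimator uses d lags and data-driven smoothing.

    ```python
    import numpy as np

    def nw_estimate(x_train, y_train, x_query, bandwidth):
        """Nadaraya-Watson estimator with a Gaussian kernel."""
        w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
        return (w * y_train).sum(axis=1) / w.sum(axis=1)

    rng = np.random.default_rng(4)
    speed = np.clip(8 + np.cumsum(rng.normal(scale=0.8, size=500)), 0, None)  # toy hourly data

    x, y = speed[:-1], speed[1:]                     # one-lag autoregression
    grid = np.linspace(x.min(), x.max(), 50)
    m_hat = nw_estimate(x, y, grid, bandwidth=1.0)                 # regression function
    resid2 = (y - nw_estimate(x, y, x, bandwidth=1.0)) ** 2
    vol_hat = nw_estimate(x, resid2, grid, bandwidth=1.0)          # volatility function
    print(m_hat[:5].round(2), vol_hat[:5].round(3))
    ```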

  11. Quantifying the Uncertainty in Estimates of Surface Atmosphere Fluxes by Evaluation of SEBS and SCOPE Models

    NASA Astrophysics Data System (ADS)

    Timmermans, J.; van der Tol, C.; Verhoef, A.; Wang, L.; van Helvoirt, M.; Verhoef, W.; Su, Z.

    2009-11-01

    An earth observation based evapotranspiration (ET) product is essential to achieving the GEWEX CEOP science objectives and the GEOSS water resources societal benefit areas. Conventional techniques that employ point measurements to estimate the components of the energy balance are only representative of local scales and cannot be extended to large areas because of the heterogeneity of the land surface and the dynamic nature of heat transfer processes. The objective of this research is to quantify the uncertainties of evapotranspiration estimates by the Surface Energy Balance System (SEBS) algorithm through validation against the detailed Soil Canopy Observation, Photochemistry and Energy fluxes process (SCOPE) model with site optimized parameters. This SCOPE model takes both radiative processes and biochemical processes into account; it combines the SAIL radiative transfer model with the energy balance at leaf level to simulate the interaction between surface and atmosphere. In this paper the validation results are presented for a semi-long-term dataset collected in Reading in 2002. The comparison between the two models showed a high correlation over the complete growth of maize, capturing the daily variation to a good extent. The absolute values from the SEBS model are, however, much lower than those from the SCOPE model. This is because the SEBS model uses a surface resistance parameterization that cannot account for tall vegetation. An update of the SEBS model will resolve this problem.

  12. Performance improvement: the organization's quest.

    PubMed

    McKinley, C O; Parmer, D E; Saint-Amand, R A; Harbin, C B; Roulston, J C; Ellis, R A; Buchanan, J R; Leonard, R B

    1999-01-01

    In today's health care marketplace, quality has become an expectation. Stakeholders are demanding quality clinical outcomes, and accrediting bodies are requiring clinical performance data. The Roosevelt Institute's quest was to define and quantify quality outcomes, develop an organizational culture of performance improvement, and ensure customer satisfaction. Several of the organization's leaders volunteered to work as a team to develop a specific performance improvement approach tailored to the organization. To date, over 200 employees have received an orientation to the model and its philosophy and nine problem action and process improvement teams have been formed.

  13. The Fallacy of Quantifying Risk

    DTIC Science & Technology

    2012-09-01

    Defense AT&L, September–October 2012. The Fallacy of Quantifying Risk. David E. Frick, Ph.D. Frick is a 35-year veteran of the Department of... a key to risk analysis was “choosing the right technique” of quantifying risk. The weakness in this argument stems not from the assertion that one... of information about the enemy), yet achieving great outcomes. Attempts at quantifying risk are not, in and of themselves, objectionable. Prudence

  14. Quantifying hypoxia in human cancers using static PET imaging.

    PubMed

    Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G; Milosevic, Michael; Hedley, David W; Jaffray, David A

    2016-11-21

    Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties (well-perfused without substantial necrosis or partitioning) for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in 'inter-corporal' transport properties (blood volume and clearance rate) as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.

  15. Quantifying hypoxia in human cancers using static PET imaging

    NASA Astrophysics Data System (ADS)

    Taylor, Edward; Yeung, Ivan; Keller, Harald; Wouters, Bradley G.; Milosevic, Michael; Hedley, David W.; Jaffray, David A.

    2016-11-01

    Compared to FDG, the signal of 18F-labelled hypoxia-sensitive tracers in tumours is low. This means that in addition to the presence of hypoxic cells, transport properties contribute significantly to the uptake signal in static PET images. This sensitivity to transport must be minimized in order for static PET to provide a reliable standard for hypoxia quantification. A dynamic compartmental model based on a reaction-diffusion formalism was developed to interpret tracer pharmacokinetics and applied to static images of FAZA in twenty patients with pancreatic cancer. We use our model to identify tumour properties—well-perfused without substantial necrosis or partitioning—for which static PET images can reliably quantify hypoxia. Normalizing the measured activity in a tumour voxel by the value in blood leads to a reduction in the sensitivity to variations in ‘inter-corporal’ transport properties—blood volume and clearance rate—as well as imaging study protocols. Normalization thus enhances the correlation between static PET images and the FAZA binding rate K3, a quantity which quantifies hypoxia in a biologically significant way. The ratio of FAZA uptake in spinal muscle and blood can vary substantially across patients due to long muscle equilibration times. Normalized static PET images of hypoxia-sensitive tracers can reliably quantify hypoxia for homogeneously well-perfused tumours with minimal tissue partitioning. The ideal normalizing reference tissue is blood, either drawn from the patient before PET scanning or imaged using PET. If blood is not available, uniform, homogeneously well-perfused muscle can be used. For tumours that are not homogeneously well-perfused or for which partitioning is significant, only an analysis of dynamic PET scans can reliably quantify hypoxia.

  16. Development of X-ray micro-focus computed tomography to image and quantify biofilms in central venous catheter models in vitro.

    PubMed

    Niehaus, Wilmari L; Howlin, Robert P; Johnston, David A; Bull, Daniel J; Jones, Gareth L; Calton, Elizabeth; Mavrogordato, Mark N; Clarke, Stuart C; Thurner, Philipp J; Faust, Saul N; Stoodley, Paul

    2016-09-01

    Bacterial infections of central venous catheters (CVCs) cause much morbidity and mortality, and are usually diagnosed by concordant culture of blood and catheter tip. However, studies suggest that culture often fails to detect biofilm bacteria. This study optimizes X-ray micro-focus computed tomography (X-ray µCT) for the quantification and determination of distribution and heterogeneity of biofilms in in vitro CVC model systems. Bacterial culture and scanning electron microscopy (SEM) were used to detect Staphylococcus epidermidis ATCC 35984 biofilms grown on catheters in vitro in both flow and static biofilm models. Alongside this, X-ray µCT techniques were developed in order to detect biofilms inside CVCs. Various contrast agent stains were evaluated using energy-dispersive X-ray spectroscopy (EDS) to further optimize these methods. Catheter material and biofilm were segmented using a semi-automated MATLAB script and quantified using the Avizo Fire software package. X-ray µCT was capable of distinguishing between the degree of biofilm formation across different segments of a CVC flow model. EDS screening of single- and dual-compound contrast stains identified 10 nm gold and silver nitrate as the optimum contrast agent for X-ray µCT. This optimized method was then demonstrated to be capable of quantifying biofilms in an in vitro static biofilm formation model, with a strong correlation between biofilm detection via SEM and culture. X-ray µCT has good potential as a direct, non-invasive, non-destructive technology to image biofilms in CVCs, as well as other in vivo medical components in which biofilms accumulate in concealed areas.

  17. Assessment and Optimization of the Accuracy of an Aircraft-Based Technique Used to Quantify Greenhouse Gas Emission Rates from Point Sources

    NASA Astrophysics Data System (ADS)

    Shepson, P. B.; Lavoie, T. N.; Kerlo, A. E.; Stirm, B. H.

    2016-12-01

    Understanding the contribution of anthropogenic activities to atmospheric greenhouse gas concentrations requires an accurate characterization of emission sources. Previously, we have reported the use of a novel aircraft-based mass balance measurement technique to quantify greenhouse gas emission rates from point and area sources; however, the accuracy of this approach has not been evaluated to date. Here, an assessment of method accuracy and precision was performed by conducting a series of six aircraft-based mass balance experiments at a power plant in southern Indiana and comparing the calculated CO2 emission rates to the reported hourly emission measurements made by continuous emissions monitoring systems (CEMS) installed directly in the exhaust stacks at the facility. For all flights, CO2 emissions were quantified before CEMS data were released online to ensure unbiased analysis. Additionally, we assess the uncertainties introduced into the final emission rate by our analysis method, which employs a statistical kriging model to interpolate and extrapolate the CO2 fluxes across the flight transects from the ground to the top of the boundary layer. Subsequently, using the results from these flights combined with the known emissions reported by the CEMS, we perform an inter-model comparison of alternative kriging methods to evaluate the performance of the kriging approach.
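
    A hedged stand-in for the interpolation step: rather than a kriging library, the sketch below uses Gaussian-process regression (closely related to kriging) to interpolate synthetic fluxes over the crosswind-distance/altitude plane and integrate them; the coordinates, kernel, and flux field are all invented.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(5)
    distance = rng.uniform(-2000, 2000, 80)          # m across the plume (flight transects)
    altitude = rng.uniform(100, 900, 80)             # m above ground, below the boundary layer
    flux = (np.exp(-(distance / 800) ** 2) * np.exp(-((altitude - 400) / 250) ** 2)
            + rng.normal(scale=0.05, size=80))       # toy CO2 flux observations

    X = np.column_stack([distance, altitude])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[500.0, 200.0]) + WhiteKernel(0.01),
                                  normalize_y=True).fit(X, flux)

    # interpolate/extrapolate onto a regular grid from the ground to the boundary-layer top
    gd, ga = np.meshgrid(np.linspace(-2000, 2000, 60), np.linspace(0, 1000, 40))
    grid_flux = gp.predict(np.column_stack([gd.ravel(), ga.ravel()])).reshape(gd.shape)
    cell_area = (4000 / 59) * (1000 / 39)
    print("integrated flux (arbitrary units):", round(grid_flux.clip(min=0).sum() * cell_area, 1))
    ```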

  18. Challenges in leveraging existing human performance data for quantifying the IDHEAS HRA method

    DOE PAGES

    Liao, Huafei N.; Groth, Katrina; Stevens-Adams, Susan

    2015-07-29

    Our article documents an exploratory study for collecting and using human performance data to inform human error probability (HEP) estimates for a new human reliability analysis (HRA) method, the IntegrateD Human Event Analysis System (IDHEAS). The method was based on cognitive models and mechanisms underlying human behaviour and employs a framework of 14 crew failure modes (CFMs) to represent human failures typical for human performance in nuclear power plant (NPP) internal, at-power events [1]. A decision tree (DT) was constructed for each CFM to assess the probability of the CFM occurring in different contexts. Data needs for IDHEAS quantification are discussed. Then, the data collection framework and process are described, and how the collected data were used to inform HEP estimation is illustrated with two examples. Next, five major technical challenges are identified for leveraging human performance data for IDHEAS quantification. Furthermore, these challenges reflect the data needs specific to IDHEAS. More importantly, they also represent the general issues with current human performance data and can provide insight for a path forward to support HRA data collection, use, and exchange for HRA method development, implementation, and validation.

  19. [Computer-assisted image processing for quantifying histopathologic variables in the healing of colonic anastomosis in dogs].

    PubMed

    Novelli, M D; Barreto, E; Matos, D; Saad, S S; Borra, R C

    1997-01-01

    The authors present the experimental results of computerized quantification of tissue structures involved in the reparative process of colonic anastomoses performed by manual suture and biofragmentable ring. The quantified variables in this study were: oedema fluid, myofiber tissue, blood vessels, and cellular nuclei. Image processing software developed at the Laboratório de Informática Dedicado à Odontologia (LIDO) was used to quantify the pathognomonic alterations in the inflammatory process in colonic anastomoses performed in 14 dogs. The results were compared with those obtained through traditional diagnosis by two pathologists as a counterproof measure. The criteria for these diagnoses were defined in levels (absent, light, moderate, and intense), which were compared with the analysis performed by the computer. There was a statistically significant difference between the two techniques: the biofragmentable ring technique exhibited less oedema fluid, more organized myofiber tissue, and a higher number of elongated cellular nuclei than the manual suture technique. The analysis of histometric variables through computational image processing was considered efficient and powerful for quantifying the main inflammatory and reparative tissue changes.

  20. Basinsoft, a computer program to quantify drainage basin characteristics

    USGS Publications Warehouse

    Harvey, Craig A.; Eash, David A.

    2001-01-01

    In 1988, the USGS began developing a program called Basinsoft. The initial program quantified 16 selected drainage basin characteristics from three source-data layers that were manually digitized from topographic maps using the versions of ARC/INFO, Fortran programs, and Prime system Command Programming Language (CPL) programs available in 1988 (Majure and Soenksen, 1991). By 1991, Basinsoft was enhanced to quantify 27 selected drainage-basin characteristics from three source-data layers automatically generated from digital elevation model (DEM) data using a set of Fortran programs (Majure and Eash, 1991; Jenson and Dominique, 1988). Due to edge-matching problems encountered in 1991 with the preprocessing

  1. Quantifying kinematics of purposeful movements to real, imagined, or absent functional objects: implications for modelling trajectories for robot-assisted ADL tasks.

    PubMed

    Wisneski, Kimberly J; Johnson, Michelle J

    2007-03-23

    Robotic therapy is at the forefront of stroke rehabilitation. The Activities of Daily Living Exercise Robot (ADLER) was developed to improve carryover of gains after training by combining the benefits of Activities of Daily Living (ADL) training (motivation and functional task practice with real objects), with the benefits of robot mediated therapy (repeatability and reliability). In combining these two therapy techniques, we seek to develop a new model for trajectory generation that will support functional movements to real objects during robot training. We studied natural movements to real objects and report on how initial reaching movements are affected by real objects and how these movements deviate from the straight line paths predicted by the minimum jerk model, typically used to generate trajectories in robot training environments. We highlight key issues that need to be considered in modelling natural trajectories. Movement data were collected as eight normal subjects completed ADLs such as drinking and eating. Three conditions were considered: object absent, imagined, and present. These data were compared to predicted trajectories generated from implementing the minimum jerk model. The deviations in both the plane of the table (XY) and the sagittal plane of the torso (XZ) were examined for both reaches to a cup and to a spoon. Velocity profiles and curvature were also quantified for all trajectories. We hypothesized that movements performed with functional task constraints and objects would deviate from the minimum jerk trajectory model more than those performed under imaginary or object absent conditions. Trajectory deviations from the predicted minimum jerk model for these reaches were shown to depend on three variables: object presence, object orientation, and plane of movement. When subjects completed the cup reach, their movements were more curved than for the spoon reach. The object present condition for the cup reach showed more curvature than in the object
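
    For reference, the minimum jerk model mentioned above has a simple closed form for a point-to-point reach, sketched below; the reach endpoints and duration are illustrative.

    ```python
    import numpy as np

    def minimum_jerk(x0, xf, duration=1.0, n_points=100):
        """Straight-line minimum jerk path: x(tau) = x0 + (xf - x0)(10 tau^3 - 15 tau^4 + 6 tau^5)."""
        tau = np.linspace(0.0, duration, n_points) / duration
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        x0, xf = np.asarray(x0, dtype=float), np.asarray(xf, dtype=float)
        return x0 + (xf - x0) * s[:, None]

    # reach from the hand's start position to a cup 0.3 m forward and 0.1 m up (made-up targets)
    traj = minimum_jerk(x0=[0.0, 0.0, 0.0], xf=[0.3, 0.0, 0.1], duration=1.0)
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    print("peak speed occurs near sample", speed.argmax(), "of", len(speed))
    ```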

  2. Quantifying natural delta variability using a multiple-point geostatistics prior uncertainty model

    NASA Astrophysics Data System (ADS)

    Scheidt, Céline; Fernandes, Anjali M.; Paola, Chris; Caers, Jef

    2016-10-01

    We address the question of quantifying uncertainty associated with autogenic pattern variability in a channelized transport system by means of a modern geostatistical method. This question has considerable relevance for practical subsurface applications as well, particularly those related to uncertainty quantification relying on Bayesian approaches. Specifically, we show how the autogenic variability in a laboratory experiment can be represented and reproduced by a multiple-point geostatistical prior uncertainty model. The latter geostatistical method requires selection of a limited set of training images from which a possibly infinite set of geostatistical model realizations, mimicking the training image patterns, can be generated. To that end, we investigate two methods to determine how many training images and what training images should be provided to reproduce natural autogenic variability. The first method relies on distance-based clustering of overhead snapshots of the experiment; the second method relies on a rate of change quantification by means of a computer vision algorithm termed the demon algorithm. We show quantitatively that with either training image selection method, we can statistically reproduce the natural variability of the delta formed in the experiment. In addition, we study the nature of the patterns represented in the set of training images as a representation of the "eigenpatterns" of the natural system. The eigenpatterns in the training image sets display patterns consistent with previous physical interpretations of the fundamental modes of this type of delta system: a highly channelized, incisional mode; a poorly channelized, depositional mode; and an intermediate mode between the two.
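
    A hedged sketch of the first, distance-based selection idea: cluster flattened overhead snapshots with k-means and keep the snapshot nearest each cluster centre as a candidate training image; the random stand-in images and the pixel-space distance are simplifications of the paper's procedure.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(6)
    snapshots = rng.random((200, 64, 64))            # stand-in overhead images of the delta
    X = snapshots.reshape(len(snapshots), -1)        # one flattened feature vector per snapshot

    k = 5                                            # number of training images to retain
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))   # medoid-like representative
    print("selected snapshot indices:", selected)
    ```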

  3. Predictive performance models and multiple task performance

    NASA Technical Reports Server (NTRS)

    Wickens, Christopher D.; Larish, Inge; Contorer, Aaron

    1989-01-01

    Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.

  4. Quantifying Wrinkle Features of Thin Membrane Structures

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.

    2004-01-01

    For future micro-systems utilizing membrane-based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include the effects of gravity, the assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

  5. Investigating dye performance and crosstalk in fluorescence enabled bioimaging using a model system

    PubMed Central

    Arppe, Riikka; Carro-Temboury, Miguel R.; Hempel, Casper; Vosch, Tom

    2017-01-01

    Detailed imaging of biological structures, often smaller than the diffraction limit, is possible in fluorescence microscopy due to the molecular size and photophysical properties of fluorescent probes. Advances in hardware and multiple providers of high-end bioimaging make comparing images between studies and between research groups very difficult. Therefore, we suggest a model system to benchmark instrumentation, methods and staining procedures. The system we introduce is based on doped zeolites in stained polyvinyl alcohol (PVA) films: a highly accessible model system which has the properties needed to act as a benchmark in bioimaging experiments. Rather than comparing molecular probes and imaging methods in complicated biological systems, we demonstrate that the model system can emulate this complexity and can be used to probe the effect of concentration, brightness, and cross-talk of fluorophores on the detected fluorescence signal. The described model system comprises lanthanide(III)-ion-doped Linde Type A zeolites dispersed in a PVA film stained with fluorophores. We tested F18, MitoTracker Red, and ATTO647N. This model system allowed the performance of the fluorophores to be compared under experimental conditions. Importantly, we here report considerable cross-talk of the dyes when exchanging excitation and emission settings. Additionally, bleaching was quantified. The proposed model makes it possible to test and benchmark staining procedures before these dyes are applied to more complex biological systems. PMID:29176775

  6. Performance modeling for large database systems

    NASA Astrophysics Data System (ADS)

    Schaar, Stephen; Hum, Frank; Romano, Joe

    1997-02-01

    One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
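
    As a minimal illustration of the queueing calculation such a model performs for each transaction type, the sketch below evaluates a single-server M/M/1 queue; the arrival and service rates are invented, and the III/FBI model itself is far more detailed.

    ```python
    def mm1_metrics(arrival_rate, service_rate):
        """Utilisation, mean queue length, and mean response time for an M/M/1 queue."""
        if arrival_rate >= service_rate:
            raise ValueError("utilisation must be below 1 for a stable queue")
        rho = arrival_rate / service_rate                   # utilisation
        queue_length = rho ** 2 / (1 - rho)                 # mean number waiting (Lq)
        response_time = 1 / (service_rate - arrival_rate)   # mean time in system (W)
        return rho, queue_length, response_time

    rho, lq, w = mm1_metrics(arrival_rate=40.0, service_rate=50.0)   # transactions per second
    print(f"utilisation {rho:.0%}, mean queue length {lq:.2f}, response time {w * 1000:.0f} ms")
    ```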

  7. Quantifying T Lymphocyte Turnover

    PubMed Central

    De Boer, Rob J.; Perelson, Alan S.

    2013-01-01

    Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2′-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4+ and CD8+ T cell pools in mice and men. PMID:23313150

  8. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, decision-support tools, and models to be applied to media-specific or receptor-specific problem areas. CED uses modeling-based approaches to characterize exposures, evaluate fate and transport, and support environmental diagnostics/forensics with input from multiple data sources. It also develops media- and receptor-specific models, process models, and decision support tools for use both within and outside of EPA.

  9. Quantifying effects of humans and climate on groundwater resources through modeling of volcanic-rock aquifers of Hawaii

    NASA Astrophysics Data System (ADS)

    Rotzoll, K.; Izuka, S. K.; Nishikawa, T.; Fienen, M. N.; El-Kadi, A. I.

    2015-12-01

    The volcanic-rock aquifers of Kauai, Oahu, and Maui are heavily developed, leading to concerns related to the effects of groundwater withdrawals on saltwater intrusion and streamflow. A numerical modeling analysis using the most recently available data (e.g., information on recharge, withdrawals, hydrogeologic framework, and conceptual models of groundwater flow) will substantially advance current understanding of groundwater flow and provide insight into the effects of human activity and climate change on Hawaii's water resources. Three island-wide groundwater-flow models were constructed using MODFLOW 2005 coupled with the Seawater-Intrusion Package (SWI2), which simulates the transition between saltwater and freshwater in the aquifer as a sharp interface. This approach allowed relatively fast model run times without ignoring the freshwater-saltwater system at the regional scale. Model construction (FloPy3), automated-parameter estimation (PEST), and analysis of results were streamlined using Python scripts. Model simulations included pre-development (1870) and current (average of 2001-10) scenarios for each island. Additionally, scenarios for future withdrawals and climate change were simulated for Oahu. We present our streamlined approach and preliminary results showing estimated effects of human activity on the groundwater resource by quantifying decline in water levels, reduction in stream base flow, and rise of the freshwater-saltwater interface.

  10. Engineering Students Designing a Statistical Procedure for Quantifying Variability

    ERIC Educational Resources Information Center

    Hjalmarson, Margret A.

    2007-01-01

    The study examined first-year engineering students' responses to a statistics task that asked them to generate a procedure for quantifying variability in a data set from an engineering context. Teams used technological tools to perform computations, and their final product was a ranking procedure. The students could use any statistical measures,…

  11. Contactless ultrasonic energy transfer for wireless systems: acoustic-piezoelectric structure interaction modeling and performance enhancement

    NASA Astrophysics Data System (ADS)

    Shahab, S.; Erturk, A.

    2014-12-01

    There are several applications of wireless electronic components with little or no ambient energy available to harvest, yet wireless battery charging for such systems is still of great interest. Example applications range from biomedical implants to sensors located in hazardous environments. Energy transfer based on the propagation of acoustic waves at ultrasonic frequencies is a recently explored alternative that offers increased transmitter-receiver distance, reduced loss and the elimination of electromagnetic fields. As this research area receives growing attention, there is an increased need for fully coupled model development to quantify the energy transfer characteristics, with a focus on the transmitter, receiver, medium, geometric and material parameters. We present multiphysics modeling and case studies of the contactless ultrasonic energy transfer for wireless electronic components submerged in fluid. The source is a pulsating sphere, and the receiver is a piezoelectric bar operating in the 33-mode of piezoelectricity with a fundamental resonance frequency above the audible frequency range. The goal is to quantify the electrical power delivered to the load (connected to the receiver) in terms of the source strength. Both the analytical and finite element models have been developed for the resulting acoustic-piezoelectric structure interaction problem. Resistive and resistive-inductive electrical loading cases are presented, and optimality conditions are discussed. Broadband power transfer is achieved by optimal resistive-reactive load tuning for performance enhancement and frequency-wise robustness. Significant enhancement of the power output is reported due to the use of a hard piezoelectric receiver (PZT-8) instead of a soft counterpart (PZT-5H) as a result of reduced material damping. The analytical multiphysics modeling approach given in this work can be used to predict and optimize the coupled system dynamics with very good accuracy and dramatically

  12. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald L.; Goering, Timothy James; Miller, Mark Laverne

    2005-11-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses. At least one-hundred realizations were simulated for each scenario defined in the performance assessment. Conservative values and assumptions were used to define values and distributions of uncertain input parameters when site data were not available. Results showed that exposure to tritium via the air pathway exceeded the regulatory metric of 10 mrem/year in about 2% of the simulated realizations when the receptor was located at the MWL (continuously exposed to the air directly above the MWL). Simulations showed that peak radon gas fluxes exceeded the design standard of 20 pCi/m²/s in about 3% of the realizations if up to 1% of the containers of sealed radium-226 sources were assumed to completely degrade in the future. If up to 100% of the containers of radium-226 sources were assumed to completely degrade, 30% of the realizations yielded radon surface fluxes that exceeded the design standard. For the groundwater pathway, simulations showed that none of the radionuclides or heavy metals (lead and cadmium) reached the groundwater

  13. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Science Inventory

    This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit m...

  14. Quantifying Stock Return Distributions in Financial Markets.

    PubMed

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales.
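
    For readers who want to reproduce this kind of tail analysis on their own data, the sketch below is a minimal Python illustration (on synthetic prices, not the Dow Jones dataset used in the paper) of estimating a power-law tail exponent of absolute log returns at several horizons with a Hill estimator; the lag values and the number of order statistics k are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic heavy-tailed price series standing in for second-resolution data.
      prices = 100.0 * np.exp(np.cumsum(0.001 * rng.standard_t(df=3, size=10_000)))

      def hill_alpha(returns, k=200):
          """Hill estimator of the power-law tail exponent of |returns|."""
          x = np.sort(np.abs(returns))
          top, ref = x[-k:], x[-(k + 1)]                  # k largest values and reference
          return k / np.sum(np.log(top / ref))

      for lag in (300, 900, 3600):                        # return horizons in samples
          r = np.log(prices[lag:] / prices[:-lag])
          print(f"lag {lag:>4}: tail exponent ~ {hill_alpha(r):.2f}")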

  15. Harnessing the theoretical foundations of the exponential and beta-Poisson dose-response models to quantify parameter uncertainty using Markov Chain Monte Carlo.

    PubMed

    Schmidt, Philip J; Pintar, Katarina D M; Fazil, Aamir M; Topp, Edward

    2013-09-01

    Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility. © Her Majesty the Queen in Right of Canada 2013. Reproduced with the permission of the Minister of the Public Health Agency of Canada.
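
    To make the two dose-response forms concrete, the following minimal Python sketch evaluates the exponential model and the conventional approximation to the beta-Poisson model, and draws posterior samples for the exponential parameter r with a simple random-walk Metropolis sampler under a flat prior on log r. This is an illustration only, with made-up dose-infection data; it is not the OpenBUGS procedure or the decomposed model developed in the paper.

      import numpy as np

      def p_exponential(dose, r):
          """Exponential dose-response: P(infection | dose)."""
          return 1.0 - np.exp(-r * dose)

      def p_beta_poisson_approx(dose, alpha, beta):
          """Conventional approximation to the beta-Poisson dose-response model."""
          return 1.0 - (1.0 + dose / beta) ** (-alpha)

      # Hypothetical feeding-trial data: dose, number exposed, number infected.
      dose = np.array([1e1, 1e2, 1e3, 1e4])
      n = np.array([10, 10, 10, 10])
      y = np.array([1, 3, 7, 10])

      def log_post(log_r):
          """Log posterior for log(r): flat prior plus binomial log-likelihood."""
          p = np.clip(p_exponential(dose, np.exp(log_r)), 1e-12, 1 - 1e-12)
          return np.sum(y * np.log(p) + (n - y) * np.log(1.0 - p))

      rng = np.random.default_rng(1)
      samples, cur = [], np.log(1e-3)
      for _ in range(20_000):                             # random-walk Metropolis
          prop = cur + 0.2 * rng.standard_normal()
          if np.log(rng.random()) < log_post(prop) - log_post(cur):
              cur = prop
          samples.append(cur)
      r_samples = np.exp(samples[5000:])                  # discard burn-in
      print(f"posterior median r: {np.median(r_samples):.2e}")
      print(f"approx beta-Poisson P(infection) at dose 100: {p_beta_poisson_approx(100.0, 0.3, 50.0):.2f}")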

  16. Reference Manual for the System Advisor Model's Wind Power Performance Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freeman, J.; Jorgenson, J.; Gilman, P.

    2014-08-01

    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.

  17. Performance monitoring can boost turboexpander efficiency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McIntire, R.

    1982-07-05

    Focuses on the turboexpander/refrigeration system's radial expander and radial compressor. Explains that radial expander efficiency depends on mass flow rate, inlet pressure, inlet temperature, discharge pressure, gas composition, and shaft speed. Discusses quantifying the performance of the separate components over a range of operating conditions; estimating the increase in performance associated with any hardware change; and developing an analytical (computer) model of the entire system by using the performance curve of individual components. Emphasizes antisurge control and modifying Q/N (flow rate/shaft speed).

  18. Quantifying Improved Visual Performance Through Vision Training

    DTIC Science & Technology

    1991-02-22

    Eibschitz, N., Friedman, Z. and Neuman, E. (1978) Comparative results of amblyopia treatment. Metab Opthalmol, 2, 111-112. Evans, D.W. and Ginsburg, A... treatment. Am Orthopt J, 5, 61-64. Garzia, R.P. (1987) The efficacy of visual training in amblyopia: A literature review. Am J Optom Physiol Opt, 64, 393... predicts pilots' performance in aircraft simulators. Am. J. Opt. Physiol. Opt., 59(1), 105-109. Gortz, H. (1960) The corrective treatment of amblyopia

  19. Work domain constraints for modelling surgical performance.

    PubMed

    Morineau, Thierry; Riffaud, Laurent; Morandi, Xavier; Villain, Jonathan; Jannin, Pierre

    2015-10-01

    Three main approaches can be identified for modelling surgical performance: a competency-based approach, a task-based approach, both largely explored in the literature, and a less well-known work domain-based approach. The work domain-based approach first describes the work domain properties that constrain the agent's actions and shape the performance. This paper presents a work domain-based approach for modelling performance during cervical spine surgery, based on the idea that anatomical structures delineate the surgical performance. This model was evaluated through an analysis of junior and senior surgeons' actions. Twenty-four cervical spine surgeries performed by two junior and two senior surgeons were recorded in real time by an expert surgeon. According to a work domain-based model describing an optimal progression through anatomical structures, the degree of adjustment of each surgical procedure to a statistical polynomial function was assessed. Each surgical procedure showed significant agreement with the model, with regression coefficient values around 0.9. However, the surgeries performed by senior surgeons fitted this model significantly better than those performed by junior surgeons. Analysis of the relative frequencies of actions on anatomical structures showed that some specific anatomical structures discriminate senior from junior performances. The work domain-based modelling approach can provide an overall statistical indicator of surgical performance, but in particular, it can highlight specific points of interest among anatomical structures that the surgeons dwelled on according to their level of expertise.

  20. U1108 performance model

    NASA Technical Reports Server (NTRS)

    Trachta, G.

    1976-01-01

    A model of Univac 1108 work flow has been developed to assist in performance evaluation studies and configuration planning. Workload profiles and system configurations are parameterized for ease of experimental modification. Outputs include capacity estimates and performance evaluation functions. The U1108 system is conceptualized as a service network; classical queueing theory is used to evaluate network dynamics.

  1. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    PubMed

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.

  2. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, the Species Distribution Model (SDM) is a popular method for projecting the impact of climate change on ecosystems. An SDM is based on the niche of a given species, so presence point data are essential for characterizing that biological niche. Running SDMs for plants requires particular attention to the characteristics of vegetation. Vegetation data over large areas are normally produced with remote sensing techniques, which means that the exact locations of presence data carry high uncertainties when presence points are selected from polygon and raster datasets. Thus, sampling methods for generating vegetation presence data should be carefully selected. In this study, we used three different sampling methods to select vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from modeling and included BioCLIM variables and other environmental variables as input data. Despite differences among the 10 SDMs, the sampling methods showed clear differences in ROC values: random sampling yielded the lowest ROC value, while site-index-based sampling yielded the highest. The results show that the uncertainties arising from presence data sampling methods and the SDM itself can be quantified.

  3. Model Performance Evaluation and Scenario Analysis (MPESA)

    EPA Pesticide Factsheets

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses how well models predict time series data. The tool was developed for use with the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  4. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  5. Quantifying the uncertainty introduced by discretization and time-averaging in two-fluid model predictions

    DOE PAGES

    Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane

    2017-07-12

    The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section–averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
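
    As a concrete illustration of the grid-refinement analysis described above, the sketch below applies Richardson extrapolation and the grid convergence index (GCI) to three hypothetical solutions on successively refined grids; the numbers are placeholders, not TFM results.

      import math

      def richardson_gci(f_fine, f_med, f_coarse, r=2.0, safety=1.25):
          """Richardson extrapolation and fine-grid GCI from three grid levels.

          f_fine, f_med, f_coarse : solution values on successively coarser grids
          r                       : grid refinement ratio
          safety                  : GCI safety factor (1.25 is typical for 3 grids)
          """
          p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
          f_exact = f_fine + (f_fine - f_med) / (r**p - 1.0)
          e21 = abs((f_med - f_fine) / f_fine)            # relative fine-to-medium difference
          gci = safety * e21 / (r**p - 1.0)               # fractional uncertainty estimate
          return p, f_exact, gci

      # Illustrative time-averaged quantity (e.g., a void fraction) on three grids.
      p, f_exact, gci = richardson_gci(0.452, 0.448, 0.437)
      print(f"observed order of accuracy: {p:.2f}")
      print(f"extrapolated value:         {f_exact:.4f}")
      print(f"GCI (fine grid):            {100 * gci:.2f} %")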

  6. Virginia Higher Education Performance Funding Model.

    ERIC Educational Resources Information Center

    Virginia State Council of Higher Education, Richmond.

    This report reviews the proposed Virginia Higher Education Performance Funding Model. It includes an overview of the proposed funding model, examples of likely funding scenarios (including determination of block grants, assumptions underlying performance funding for four-year and two-year institutions); information on deregulation/decentralization…

  7. Evaluating simulated functional trait patterns and quantifying modelled trait diversity effects on simulated ecosystem fluxes

    NASA Astrophysics Data System (ADS)

    Pavlick, R.; Schimel, D.

    2014-12-01

    Dynamic Global Vegetation Models (DGVMs) typically employ only a small set of Plant Functional Types (PFTs) to represent the vast diversity of observed vegetation forms and functioning. There is growing evidence, however, that this abstraction may not adequately represent the observed variation in plant functional traits, which is thought to play an important role for many ecosystem functions and for ecosystem resilience to environmental change. The geographic distribution of PFTs in these models is also often based on empirical relationships between present-day climate and vegetation patterns. Projections of future climate change, however, point toward the possibility of novel regional climates, which could lead to no-analog vegetation compositions incompatible with the PFT paradigm. Here, we present results from the Jena Diversity-DGVM (JeDi-DGVM), a novel traits-based vegetation model, which simulates a large number of hypothetical plant growth strategies constrained by functional tradeoffs, thereby allowing for a more flexible temporal and spatial representation of the terrestrial biosphere. First, we compare simulated present-day geographical patterns of functional traits with empirical trait observations (in-situ and from airborne imaging spectroscopy). The observed trait patterns are then used to improve the tradeoff parameterizations of JeDi-DGVM. Finally, focusing primarily on the simulated leaf traits, we run the model with various amounts of trait diversity. We quantify the effects of these modeled biodiversity manipulations on simulated ecosystem fluxes and stocks for both present-day conditions and transient climate change scenarios. The simulation results reveal that the coarse treatment of plant functional traits by current PFT-based vegetation models may contribute substantial uncertainty regarding carbon-climate feedbacks. Further development of trait-based models and further investment in global in-situ and spectroscopic plant trait observations

  8. Quantifying gait patterns in Parkinson's disease

    NASA Astrophysics Data System (ADS)

    Romero, Mónica; Atehortúa, Angélica; Romero, Eduardo

    2017-11-01

    Parkinson's disease (PD) is characterized by a set of motor symptoms, namely tremor, rigidity, and bradykinesia, which are usually described but not quantified. This work proposes an objective characterization of PD gait patterns by approximating the single stance phase as a single grounded pendulum. This model estimates the force generated by the gait during single support from gait data, and this force describes the motion pattern at different stages of the disease. The model was validated using recorded videos of 8 young control subjects, 10 old control subjects, and 10 subjects with Parkinson's disease at different stages. The estimated force differed among stages of Parkinson's disease, decreasing in the advanced stages of the illness.
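
    A minimal illustration of a grounded (inverted) pendulum force estimate is sketched below in Python; it assumes the standard radial force balance for a point mass rotating about the stance foot and uses placeholder mass, leg length, and angle data, so it should be read as a generic sketch rather than the authors' exact formulation.

      import numpy as np

      def stance_leg_force(mass, leg_length, theta, omega, g=9.81):
          """Axial leg force for an inverted pendulum rotating about the stance foot.

          theta : leg angle from vertical (rad); omega : angular velocity (rad/s).
          Radial Newton balance: F - m*g*cos(theta) = -m*L*omega**2.
          """
          return mass * (g * np.cos(theta) - leg_length * omega**2)

      # Illustrative single-stance trajectory extracted from video (placeholder values).
      theta = np.deg2rad(np.linspace(-15.0, 15.0, 50))     # backward to forward lean
      omega = np.gradient(theta, 0.01)                     # sampled at 100 Hz
      force = stance_leg_force(mass=70.0, leg_length=0.9, theta=theta, omega=omega)
      print(f"peak axial leg force: {force.max():.0f} N")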

  9. Quantifying the risk of extreme aviation accidents

    NASA Astrophysics Data System (ADS)

    Das, Kumer Pial; Dey, Asim Kumer

    2016-12-01

    Air travel is considered a safe means of transportation. But when aviation accidents do occur, they often result in fatalities. Fortunately, the most extreme accidents occur rarely. However, 2014 was the deadliest year in the past decade, with 111 plane crashes, and the worst four crashes caused 298, 239, 162, and 116 deaths. In this study, we assess the risk of catastrophic aviation accidents by studying historical aviation accidents. Applying a generalized Pareto model, we predict the maximum fatalities from a future aviation accident. The fitted model is compared with some of its competing models. The uncertainty in the inferences is quantified using simulated aviation accident series, generated by bootstrap resampling and Monte Carlo simulations.
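
    The sketch below illustrates a peaks-over-threshold generalized Pareto fit of the kind described, using scipy on synthetic fatality counts (not the historical accident record); the threshold choice and the return-level probability are illustrative assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Synthetic accident fatality counts standing in for the historical series.
      fatalities = rng.lognormal(mean=3.0, sigma=1.0, size=500)

      u = np.quantile(fatalities, 0.90)                   # threshold for "extreme" accidents
      exceed = fatalities[fatalities > u] - u
      xi, loc, sigma = stats.genpareto.fit(exceed, floc=0.0)

      p_u = exceed.size / fatalities.size                 # empirical exceedance probability
      p = 0.001                                           # probability of interest
      level = u + (sigma / xi) * ((p / p_u) ** (-xi) - 1.0)
      print(f"fitted shape xi = {xi:.2f}, scale sigma = {sigma:.1f}")
      print(f"fatality level exceeded with probability {p}: ~{level:.0f}")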

  10. Model Performance of Water-Current Meters

    USGS Publications Warehouse

    Fulford, J.M.; ,

    2002-01-01

    The measurement of discharge in natural streams requires hydrographers to use accurate water-current meters that have consistent performance among meters of the same model. This paper presents the results of an investigation into the performance of four models of current meters - Price type-AA, Price pygmy, Marsh McBirney 2000 and Swoffer 2100. Tests for consistency and accuracy for six meters of each model are summarized. Variation of meter performance within a model is used as an indicator of consistency, and percent velocity error that is computed from a measured reference velocity is used as an indicator of meter accuracy. Velocities measured by each meter are also compared to the manufacturer's published or advertised accuracy limits. For the meters tested, the Price models were found to be more accurate and consistent over the range of test velocities compared to the other models. The Marsh McBirney model usually measured within its accuracy specification. The Swoffer meters did not meet the stringent Swoffer accuracy limits for all the velocities tested.
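
    As a small illustration of the two indicators used here, the following Python sketch computes percent velocity error (accuracy) and the within-model coefficient of variation (consistency) for six hypothetical meters of one model; the reference velocity and readings are made up, not the USGS test data.

      import numpy as np

      reference = 0.75                                     # reference velocity, m/s
      # Measured velocities for six meters of the same model (illustrative values).
      measured = np.array([0.742, 0.748, 0.751, 0.739, 0.755, 0.746])

      pct_error = 100.0 * (measured - reference) / reference   # accuracy, per meter
      spread = 100.0 * measured.std(ddof=1) / measured.mean()  # consistency (CV, %)

      print("percent velocity error per meter:", np.round(pct_error, 2))
      print(f"within-model coefficient of variation: {spread:.2f} %")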

  11. Quantifying Stock Return Distributions in Financial Markets

    PubMed Central

    Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at second-by-second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power-law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593

  12. Performance model for grid-connected photovoltaic inverters.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyson, William Earl; Galbraith, Gary M.; King, David L.

    2007-09-01

    This document provides an empirically based performance model for grid-connected photovoltaic inverters used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements in operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.

  13. Model for the separate collection of packaging waste in Portuguese low-performing recycling regions.

    PubMed

    Oliveira, V; Sousa, V; Vaz, J M; Dias-Ferreira, C

    2018-06-15

    Separate collection of packaging waste (glass; plastic/metals; paper/cardboard) is currently a widespread practice throughout Europe. It enables the recovery of good-quality recyclable materials. However, separate collection performance is quite heterogeneous, with some countries reaching higher levels than others. In the present work, separate collection of packaging waste was evaluated in a low-performance recycling region in Portugal in order to investigate which factors most affect performance in a bring-bank collection system. The variability of separate collection yields (kg per inhabitant per year) among 42 municipalities was scrutinized for the year 2015 against possible explanatory factors. A total of 14 possible explanatory factors were analysed, falling into two groups: socio-economic/demographic and waste collection service related. Regression models were built to evaluate the individual effect of each factor on separate collection yields and to predict changes in collection yields from acting on those factors. The best model obtained explains 73% of the variation found in the separate collection yields. The model includes the following statistically significant indicators affecting separate collection yields: i) inhabitants per bring-bank; ii) relative accessibility to bring-banks; iii) degree of urbanization; iv) number of school years attended; and v) area. The model presented in this work was developed specifically for the bring-bank system, has explanatory power, and quantifies the impact of each factor on separate collection yields. It can therefore be used as a support tool by local and regional waste management authorities in defining future strategies to increase collection of good-quality recyclables and to achieve national and regional targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
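
    A minimal sketch of the kind of multiple linear regression described, fitted by ordinary least squares in Python on synthetic municipal data, is given below; the explanatory variables, coefficients, and R² are illustrative placeholders, not the published model.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 42                                              # municipalities
      inhab_per_bank = rng.uniform(150, 600, n)           # inhabitants per bring-bank
      accessibility = rng.uniform(0.2, 1.0, n)            # relative accessibility index
      urbanization = rng.uniform(0.0, 1.0, n)             # degree of urbanization
      yields = (40 - 0.03 * inhab_per_bank + 15 * accessibility
                + 10 * urbanization + rng.normal(0, 3, n))  # kg/inhabitant/year

      X = np.column_stack([np.ones(n), inhab_per_bank, accessibility, urbanization])
      beta, res, rank, sv = np.linalg.lstsq(X, yields, rcond=None)

      fitted = X @ beta
      ss_res = np.sum((yields - fitted) ** 2)
      ss_tot = np.sum((yields - yields.mean()) ** 2)
      print("coefficients (intercept, per-bank, access, urban):", np.round(beta, 3))
      print(f"R^2 = {1 - ss_res / ss_tot:.2f}")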

  14. Analytical Algorithms to Quantify the Uncertainty in Remaining Useful Life Prediction

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar; Saxena, Abhinav; Daigle, Matthew; Goebel, Kai

    2013-01-01

    This paper investigates the use of analytical algorithms to quantify the uncertainty in the remaining useful life (RUL) estimate of components used in aerospace applications. The prediction of RUL is affected by several sources of uncertainty, and it is important to systematically quantify their combined effect by computing the uncertainty in the RUL prediction in order to aid risk assessment, risk mitigation, and decision-making. While sampling-based algorithms have been conventionally used for quantifying the uncertainty in RUL, analytical algorithms are computationally cheaper and are sometimes better suited for online decision-making. While exact analytical algorithms are available only for certain special cases (e.g., linear models with Gaussian variables), effective approximations can be made using the first-order second-moment method (FOSM), the first-order reliability method (FORM), and the inverse first-order reliability method (Inverse FORM). These methods can be used not only to calculate the entire probability distribution of RUL but also to obtain probability bounds on RUL. This paper explains these three methods in detail and illustrates them using the state-space model of a lithium-ion battery.
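
    To illustrate the first-order second-moment idea, the sketch below propagates input means and variances through a made-up RUL function using numerical gradients; the function and parameter values are placeholders, not the lithium-ion battery state-space model used in the paper.

      import numpy as np

      def rul(x):
          """Placeholder RUL function of uncertain inputs [capacity, load, degradation rate]."""
          capacity, load, k = x
          return capacity / (load * k)

      mu = np.array([2.0, 1.5, 0.01])                     # input means
      cov = np.diag([0.05, 0.10, 0.002]) ** 2             # input covariance (independent inputs)

      def gradient(f, x, eps=1e-6):
          """Central-difference gradient of a scalar function f at x."""
          g = np.zeros_like(x)
          for i in range(x.size):
              dx = np.zeros_like(x)
              dx[i] = eps
              g[i] = (f(x + dx) - f(x - dx)) / (2 * eps)
          return g

      grad = gradient(rul, mu)
      mean_rul = rul(mu)                                  # first-order mean
      var_rul = grad @ cov @ grad                         # first-order variance
      print(f"RUL mean ~ {mean_rul:.1f}, std ~ {np.sqrt(var_rul):.1f}")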

  15. In vitro quantification of the performance of model-based mono-planar and bi-planar fluoroscopy for 3D joint kinematics estimation.

    PubMed

    Tersi, Luca; Barré, Arnaud; Fantozzi, Silvia; Stagni, Rita

    2013-03-01

    Model-based mono-planar and bi-planar 3D fluoroscopy methods can quantify intact joint kinematics with different performance/cost trade-offs. The aim of this study was to compare the performances of mono- and bi-planar setups to a marker-based gold standard during dynamic phantom knee acquisitions. Absolute pose errors for in-plane parameters were lower than 0.6 mm or 0.6° for both mono- and bi-planar setups. Mono-planar setups proved critical in quantifying the out-of-plane translation (error < 6.5 mm), and bi-planar setups in quantifying the rotation about the bone's longitudinal axis (error < 1.3°). These errors propagated to joint angles and translations differently depending on the alignment of the anatomical axes and the fluoroscopic reference frames. Internal-external rotation was the least accurate angle both with mono-planar (error < 4.4°) and bi-planar (error < 1.7°) setups, due to bone longitudinal symmetries. Results highlighted that accuracy for mono-planar in-plane pose parameters is comparable to bi-planar, but with halved computational costs, halved segmentation time and halved ionizing radiation dose. Bi-planar analysis better compensated for the out-of-plane uncertainty that is differently propagated to relative kinematics depending on the setup. To obtain its full benefits, the motion task to be investigated should be designed to keep the joint inside the visible volume, which introduces constraints with respect to mono-planar analysis.

  16. ATR performance modeling concepts

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs) on the other hand consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles of many kinds of ATRs (each with different sensing modality and exploitation functionality combinations); moreover, there are different approaches to constructing the APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper will summarize the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.

  17. MPD Thruster Performance Analytic Models

    NASA Technical Reports Server (NTRS)

    Gilland, James; Johnston, Geoffrey

    2003-01-01

    Magnetoplasmadynamic (MPD) thrusters are capable of accelerating quasi-neutral plasmas to high exhaust velocities using Megawatts (MW) of electric power. These characteristics make such devices worthy of consideration for demanding, far-term missions such as the human exploration of Mars or beyond. Assessment of MPD thrusters at the system and mission level is often difficult due to their status as ongoing experimental research topics rather than developed thrusters. However, in order to assess MPD thrusters' utility in later missions, some adequate characterization of performance, or more exactly, projected performance, and system level definition are required for use in analyses. The most recent physical models of self-field MPD thrusters have been examined, assessed, and reconfigured for use by systems and mission analysts. The physical models allow for rational projections of thruster performance based on physical parameters that can be measured in the laboratory. The models and their implications for the design of future MPD thrusters are presented.

  18. MPD Thruster Performance Analytic Models

    NASA Technical Reports Server (NTRS)

    Gilland, James; Johnston, Geoffrey

    2007-01-01

    Magnetoplasmadynamic (MPD) thrusters are capable of accelerating quasi-neutral plasmas to high exhaust velocities using Megawatts (MW) of electric power. These characteristics make such devices worthy of consideration for demanding, far-term missions such as the human exploration of Mars or beyond. Assessment of MPD thrusters at the system and mission level is often difficult due to their status as ongoing experimental research topics rather than developed thrusters. However, in order to assess MPD thrusters' utility in later missions, some adequate characterization of performance, or more exactly, projected performance, and system level definition are required for use in analyses. The most recent physical models of self-field MPD thrusters have been examined, assessed, and reconfigured for use by systems and mission analysts. The physical models allow for rational projections of thruster performance based on physical parameters that can be measured in the laboratory. The models and their implications for the design of future MPD thrusters are presented.

  19. Quantifying Water Stress Using Total Water Volumes and GRACE

    NASA Astrophysics Data System (ADS)

    Richey, A. S.; Famiglietti, J. S.; Druffel-Rodriguez, R.

    2011-12-01

    Water will follow oil as the next critical resource leading to unrest and uprisings globally. To better manage this threat, an improved understanding of the distribution of water stress is required today. This study builds upon previous efforts to characterize water stress by improving both the quantification of human water use and the definition of water availability. Current statistics on human water use are often outdated or inaccurately reported nationally, especially for groundwater. This study improves these estimates by defining human water use in two ways. First, we use NASA's Gravity Recovery and Climate Experiment (GRACE) to isolate the anthropogenic signal in water storage anomalies, which we equate to water use. Second, we quantify an ideal water demand by using average water requirements for the domestic, industrial, and agricultural water use sectors. Water availability has traditionally been limited to "renewable" water, which ignores large, stored water sources that humans use. We compare water stress estimates derived using either renewable water or the total volume of water globally. We use the best-available data to quantify total aquifer and surface water volumes, as compared to groundwater recharge and surface water runoff from land-surface models. The work presented here should provide a more realistic image of water stress by explicitly quantifying groundwater, defining water availability as total water supply, and using GRACE to more accurately quantify water use.

  20. An ecohydrological model to quantify the risk of drought-induced forest mortality events across climate regimes

    NASA Astrophysics Data System (ADS)

    Parolari, A.; Katul, G. G.; Porporato, A. M.

    2013-12-01

    Regional scale drought-induced forest mortality events are projected to become more frequent under future climates due to changes in rainfall patterns. However, the ability to predict the conditions under which such events occur is currently lacking. To quantify and understand the underlying causes of drought-induced forest mortality, we propose a stochastic ecohydrological model that explicitly couples tree water and carbon use strategies with climate characteristics, such as the frequency and severity of drought. Using the model and results from a controlled drought experiment, we identify the soil, vegetation, and climate factors that underlie tree water and carbon deficits and, ultimately, the risk of drought-induced forest mortality. This mortality risk is then compared across the spectrum of anisohydric-isohydric stomatal control strategies and a range of rainfall regimes. These results suggest certain soil-plant combinations may maximize the survivable drought length in a given climate. Finally, we discuss how this approach can be expanded to estimate the effect of anticipated climate change on drought-induced forest mortality and associated consequences for forest water and carbon balances.

  1. Quantifying and Validating Rapid Floodplain Geomorphic Evolution, a Monitoring and Modelling Case Study

    NASA Astrophysics Data System (ADS)

    Scott, R.; Entwistle, N. S.

    2017-12-01

    Gravel-bed rivers and their associated wider systems present an ideal subject for the development and improvement of rapid monitoring tools, with features dynamic enough to evolve over relatively short timescales. For detecting and quantifying topographical evolution, UAV-based remote sensing has emerged as a reliable, low-cost, and accurate means of topographic data collection. Here we present validated methodologies for detection of geomorphic change at resolutions down to 0.05 m, building on the work of Wheaton et al. (2009) and Milan et al. (2007), to generate mesh-based and point-cloud comparison data and produce a reliable picture of topographic evolution. Results are presented for the River Glen, Northumberland, UK. Recent channel avulsion and floodplain interaction, resulting in damage to flood defence structures, make this site a particularly suitable case for application of geomorphic change detection methods, with the UAV platform at its centre. We compare multi-temporal, high-resolution point clouds derived from SfM processing, cross-referenced with aerial LiDAR data, over a 1.5 km reach of the watercourse. Changes detected included bank erosion, bar and splay deposition, vegetation stripping and incipient channel avulsion. Use of the topographic data for numerical modelling, carried out using CAESAR-Lisflood, predicted the avulsion of the main channel, resulting in erosion of, and potentially complete circumvention of, the original channel and flood levees. A subsequent UAV survey highlighted topographic change and reconfiguration of the local sedimentary conveyor, as we predicted with preliminary modelling. The combined monitoring and modelling approach has allowed probable future geomorphic configurations to be predicted, permitting more informed implementation of channel and floodplain management strategies.
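
    In the spirit of the thresholded DEM-of-difference approach of Wheaton et al. (2009) and Milan et al. (2007), the sketch below shows how erosion and deposition volumes might be extracted from two surveys once a minimum level of detection is propagated from per-survey errors; the grids and error values are synthetic placeholders, not the River Glen data.

      import numpy as np

      cell = 0.05                                         # grid resolution, m
      rng = np.random.default_rng(4)
      dem_t1 = rng.normal(10.0, 0.3, size=(200, 200))     # survey 1 (placeholder DEM)
      dem_t2 = dem_t1 + rng.normal(0.0, 0.05, size=dem_t1.shape)
      dem_t2[50:80, 60:120] -= 0.4                        # synthetic bank-erosion patch

      dod = dem_t2 - dem_t1                               # DEM of difference
      sigma1, sigma2 = 0.03, 0.03                         # per-survey elevation errors, m
      t_crit = 1.96                                       # ~95 % confidence
      lod = t_crit * np.hypot(sigma1, sigma2)             # minimum level of detection

      erosion = np.where(dod < -lod, dod, 0.0)
      deposition = np.where(dod > lod, dod, 0.0)
      print(f"minimum level of detection: {lod:.3f} m")
      print(f"erosion volume:    {abs(erosion.sum()) * cell**2:.2f} m^3")
      print(f"deposition volume: {deposition.sum() * cell**2:.2f} m^3")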

  2. A comparison of resampling schemes for estimating model observer performance with small ensembles

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Jha, Abhinav K.; Ghaly, Michael; Frey, Eric C.

    2017-09-01

    In objective assessment of image quality, an ensemble of images is used to compute the 1st and 2nd order statistics of the data. Often, only a finite number of images is available, leading to the issue of statistical variability in numerical observer performance. Resampling-based strategies can help overcome this issue. In this paper, we compared different combinations of resampling schemes (the leave-one-out (LOO) and the half-train/half-test (HT/HT)) and model observers (the conventional channelized Hotelling observer (CHO), channelized linear discriminant (CLD) and channelized quadratic discriminant). Observer performance was quantified by the area under the ROC curve (AUC). For a binary classification task and for each observer, the AUC value for an ensemble size of 2000 samples per class served as a gold standard for that observer. Results indicated that each observer yielded a different performance depending on the ensemble size and the resampling scheme. For a small ensemble size, the combination [CHO, HT/HT] had more accurate rankings than the combination [CHO, LOO]. Using the LOO scheme, the CLD and CHO had similar performance for large ensembles. However, the CLD outperformed the CHO and gave more accurate rankings for smaller ensembles. As the ensemble size decreased, the performance of the [CHO, LOO] combination seriously deteriorated as opposed to the [CLD, LOO] combination. Thus, it might be desirable to use the CLD with the LOO scheme when smaller ensemble size is available.

  3. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.

  4. A probabilistic approach to quantifying hydrologic thresholds regulating migration of adult Atlantic salmon into spawning streams

    NASA Astrophysics Data System (ADS)

    Lazzaro, G.; Soulsby, C.; Tetzlaff, D.; Botter, G.

    2017-03-01

    Atlantic salmon is an economically and ecologically important fish species, whose survival is dependent on successful spawning in headwater rivers. Streamflow dynamics often have a strong control on spawning because fish require sufficiently high discharges to move upriver and enter spawning streams. However, these streamflow effects are modulated by biological factors such as the number and the timing of returning fish in relation to the annual spawning window in the fall/winter. In this paper, we develop and apply a novel probabilistic approach to quantify these interactions using a parsimonious outflux-influx model linking the number of female salmon emigrating (i.e., outflux) and returning (i.e., influx) to a spawning stream in Scotland. The model explicitly accounts for the interannual variability of the hydrologic regime and the hydrological connectivity of spawning streams to main rivers. Model results are evaluated against a detailed long-term (40 years) hydroecological data set that includes annual fluxes of salmon, allowing us to explicitly assess the role of discharge variability. The satisfactory model results show quantitatively that hydrologic variability contributes to the observed dynamics of salmon returns, with a good correlation between the positive (negative) peaks in the immigration data set and the exceedance (nonexceedance) probability of a threshold flow (0.3 m3/s). Importantly, model performance deteriorates when the interannual variability of flow regime is disregarded. The analysis suggests that flow thresholds and hydrological connectivity for spawning return represent a quantifiable and predictable feature of salmon rivers, which may be helpful in decision making where flow regimes are altered by water abstractions.
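
    As a simple illustration of the threshold-flow idea, the sketch below computes, for each year of a synthetic daily discharge record, the fraction of spawning-window days on which flow exceeds the 0.3 m3/s threshold reported above; the discharge series is made up, not the Scottish data set.

      import numpy as np

      rng = np.random.default_rng(5)
      years = np.arange(1975, 2015)
      # Synthetic daily discharge (m3/s) for a 90-day fall/winter spawning window each year.
      q = rng.lognormal(mean=np.log(0.25), sigma=0.7, size=(years.size, 90))

      threshold = 0.3                                     # threshold flow from the study, m3/s
      exceed_frac = (q > threshold).mean(axis=1)          # fraction of window days above threshold
      connected = exceed_frac > 0.0                       # stream accessible at least once

      for yr, frac in list(zip(years, exceed_frac))[:5]:
          print(f"{yr}: {100 * frac:.0f} % of spawning-window days above {threshold} m3/s")
      print(f"years with at least one accessible day: {connected.sum()} of {years.size}")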

  5. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE PAGES

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian; ...

    2015-02-25

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
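
    For orientation, a generic big-leaf light-use-efficiency calculation of the form GPP = eps_max * f(Tmin) * f(VPD) * fPAR * PAR is sketched below; the ramp ranges and maximum light-use efficiency are illustrative values, not the calibrated MOD17 or TL-LUE parameters evaluated in this study.

      import numpy as np

      def ramp(x, x0, x1):
          """Linear ramp from 0 at x0 to 1 at x1 (clipped to [0, 1])."""
          return np.clip((x - x0) / (x1 - x0), 0.0, 1.0)

      def gpp_big_leaf(par, fpar, tmin, vpd,
                       eps_max=1.8,                      # g C per MJ PAR (illustrative)
                       tmin_range=(-8.0, 10.0),          # degC ramp for the temperature scalar
                       vpd_range=(3500.0, 650.0)):       # Pa ramp (high VPD suppresses GPP)
          f_t = ramp(tmin, *tmin_range)
          f_vpd = ramp(vpd, *vpd_range)
          return eps_max * f_t * f_vpd * fpar * par       # g C m-2 per time step

      # One illustrative daily value.
      print(f"GPP ~ {gpp_big_leaf(par=8.0, fpar=0.7, tmin=6.0, vpd=1200.0):.2f} g C m-2 d-1")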

  6. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at half-hourly and daily scale, while the overall performance of both TL-LUEn and TL-LUE were significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and TL-LUE and MOD17 became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types while TL-LUE outperformed MOD17 slightly for all these non-forest types at daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by the correction of the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale while TL-LUE could be regionally used at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.

  7. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies, we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
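
    The sampling-based linear model idea can be illustrated with a few profiled runs: fit runtime as a linear function of launched work-groups and extrapolate to the full problem, as in the hypothetical Python sketch below (the timings are invented, and the sketch omits the scheduling analysis and ML refinement described in the paper).

      import numpy as np

      # Sampled (work-group count, measured runtime in ms) pairs from short profiling runs.
      samples = np.array([
          [  64,  1.9],
          [ 128,  3.6],
          [ 256,  7.1],
          [ 512, 14.0],
      ])
      n_groups, runtime = samples[:, 0], samples[:, 1]

      # Linear model: runtime ~ a * n_groups + b (per-group cost plus fixed launch overhead).
      A = np.column_stack([n_groups, np.ones_like(n_groups)])
      (a, b), *_ = np.linalg.lstsq(A, runtime, rcond=None)

      full_launch = 16_384                                 # full problem size to predict
      print(f"per-group cost: {a * 1e3:.1f} us, overhead: {b:.2f} ms")
      print(f"predicted runtime for {full_launch} work-groups: {a * full_launch + b:.1f} ms")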

  8. Quantifying discharge uncertainty from remotely sensed precipitation data products in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Weerasinghe, H.; Raoufi, R.; Yoon, Y.; Beighley, E., II; Alshawabkeh, A.

    2014-12-01

    Preterm birth is a serious health issue in the United States that contributes to over one-third of all infant deaths. Puerto Rico is one of the hot spots, and preliminary research found that its high preterm birth rate may be associated with exposure to contaminants in water used on a daily basis. Puerto Rico has more than 200 contaminated sites, including 16 active Superfund sites. The risk of exposure to contaminants is aggravated by unlined landfills lying over the karst regions, the highly mobile and dynamic nature of the karst aquifers, and direct contact with surface water through sinkholes and springs. Much of the island's population gets water from natural springs or artesian wells connected to many of these potentially contaminated karst aquifers. The mobility of contaminants through surface water flows and reservoirs is largely known and is highly correlated with variations in hydrologic events and conditions. In this study, we quantify the spatial and temporal distribution of Puerto Rico's surface water stores and fluxes to better understand potential impacts on the distribution of groundwater contamination. To quantify and characterize Puerto Rico's surface waters, hydrologic modeling, remote sensing, and field measurements are combined. Streamflow measurements are available from 27 U.S. Geological Survey (USGS) gauging stations with drainage areas ranging from 2 to 510 km2. The Hillslope River Routing (HRR) model is used to simulate hourly streamflow from watersheds larger than 1 km2 that discharge to the ocean. The HRR model simulates the vertical water balance, lateral surface and subsurface runoff, and river discharge. The model consists of 4418 sub-catchments with a mean model unit area (i.e., sub-catchment) of 1.8 km2. Using gauged streamflow measurements for validation, we first assess model results for simulated discharge using three precipitation products: TRMM-3B42 (3 hour temporal resolution, 0.25 degree spatial resolution); NWS stage

  9. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
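
    A minimal sketch of the Monte Carlo approach described above, assuming illustrative input distributions rather than the study's actual foodborne-illness inputs:

```python
# Minimal sketch of Monte Carlo error propagation for a derived quantity.
# The input distributions are illustrative assumptions, not the study's inputs.
import numpy as np

rng = np.random.default_rng(42)
n_draws = 100_000

# Uncertain inputs for an incidence estimate: reported case counts, an
# under-reporting multiplier, and a diagnostic misclassification factor.
reported_cases = rng.normal(loc=50_000, scale=5_000, size=n_draws)
underreporting = rng.lognormal(mean=np.log(10), sigma=0.3, size=n_draws)   # cases per reported case
misclassification = rng.uniform(0.85, 1.0, size=n_draws)                   # fraction of true positives

estimated_incidence = reported_cases * underreporting * misclassification

lo, med, hi = np.percentile(estimated_incidence, [2.5, 50, 97.5])
print(f"median {med:,.0f}, 95% uncertainty interval [{lo:,.0f}, {hi:,.0f}]")
```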

  10. Ku-Band rendezvous radar performance computer simulation model

    NASA Astrophysics Data System (ADS)

    Magnusson, H. G.; Goff, M. F.

    1984-06-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.

  11. Ku-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Magnusson, H. G.; Goff, M. F.

    1984-01-01

    All work performed on the Ku-band rendezvous radar performance computer simulation model program since the release of the preliminary final report is summarized. Developments on the program fall into three distinct categories: (1) modifications to the existing Ku-band radar tracking performance computer model; (2) the addition of a highly accurate, nonrealtime search and acquisition performance computer model to the total software package developed on this program; and (3) development of radar cross section (RCS) computation models for three additional satellites. All changes in the tracking model involved improvements in the automatic gain control (AGC) and the radar signal strength (RSS) computer models. Although the search and acquisition computer models were developed under the auspices of the Hughes Aircraft Company Ku-Band Integrated Radar and Communications Subsystem program office, they have been supplied to NASA as part of the Ku-band radar performance computer model package. Their purpose is to predict Ku-band acquisition performance for specific satellite targets on specific missions. The RCS models were developed for three satellites: the Long Duration Exposure Facility (LDEF) spacecraft, the Solar Maximum Mission (SMM) spacecraft, and the Space Telescopes.

  12. Quantifying Cancer Risk from Radiation.

    PubMed

    Keil, Alexander P; Richardson, David B

    2017-12-06

    Complex statistical models fitted to data from studies of atomic bomb survivors are used to estimate the human health effects of ionizing radiation exposures. We describe and illustrate an approach to estimate population risks from ionizing radiation exposure that relaxes many assumptions about radiation-related mortality. The approach draws on developments in methods for causal inference. The results offer a different way to quantify radiation's effects and show that conventional estimates of the population burden of excess cancer at high radiation doses are driven strongly by projecting outside the range of current data. Summary results obtained using the proposed approach are similar in magnitude to those obtained using conventional methods, although estimates of radiation-related excess cancers differ for many age, sex, and dose groups. At low doses relevant to typical exposures, the strength of evidence in data is surprisingly weak. Statements regarding human health effects at low doses rely strongly on the use of modeling assumptions. © 2017 Society for Risk Analysis.

  13. Field swimming performance of bluegill sunfish, Lepomis macrochirus: implications for field activity cost estimates and laboratory measures of swimming performance.

    PubMed

    Cathcart, Kelsey; Shin, Seo Yim; Milton, Joanna; Ellerby, David

    2017-10-01

    Mobility is essential to the fitness of many animals, and the costs of locomotion can dominate daily energy budgets. Locomotor costs are determined by the physiological demands of sustaining mechanical performance, yet performance is poorly understood for most animals in the field, particularly aquatic organisms. We have used 3-D underwater videography to quantify the swimming trajectories and propulsive modes of bluegill sunfish (Lepomis macrochirus, Rafinesque) in the field with high spatial (1-3 mm per pixel) and temporal (60 Hz frame rate) resolution. Although field swimming trajectories were variable and nonlinear in comparison to quasi steady-state swimming in recirculating flumes, they were much less unsteady than the volitional swimming behaviors that underlie existing predictive models of field swimming cost. Performance analyses suggested that speed and path curvature data could be used to derive reasonable estimates of locomotor cost that fit within measured capacities for sustainable activity. The distinct differences between field swimming behavior and performance measures obtained under steady-state laboratory conditions suggest that field observations are essential for informing approaches to quantifying locomotor performance in the laboratory.

  14. A combinatorial framework to quantify peak/pit asymmetries in complex dynamics.

    PubMed

    Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, Enzo; Laufs, Helmut; Lacasa, Lucas

    2018-02-23

    We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic processes with and without correlations, chaotic processes) complemented by extensive numerical simulations for a range of processes which indicate that the methodology correctly distinguishes different complex dynamics and outperforms state-of-the-art metrics in several cases. Subsequently, we apply this methodology to real-world problems emerging across several disciplines, including cases in neurobiology, finance, and climate science. We conclude that differences between the statistics of local maxima and local minima in time series are highly informative of the complex underlying dynamics, and a graph-theoretic extraction procedure allows these features to be used for statistical learning purposes.
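
    As a rough illustration only (not the authors' graph-theoretic procedure), the sketch below computes a crude peak/pit asymmetry measure by comparing the distributions of local maxima and sign-flipped local minima in a toy series:

```python
# Illustrative sketch: crude peak/pit asymmetry check for a time series.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
x = rng.normal(size=5000)                 # toy series; replace with real data

# Local maxima (peaks) and local minima (pits).
dx = np.diff(x)
peaks = x[1:-1][(dx[:-1] > 0) & (dx[1:] < 0)]
pits = x[1:-1][(dx[:-1] < 0) & (dx[1:] > 0)]

# Compare peak excursions against mirrored pit excursions around the mean;
# time-reversible/symmetric dynamics should give similar distributions.
stat, p = ks_2samp(peaks - x.mean(), -(pits - x.mean()))
print(f"peaks: {len(peaks)}, pits: {len(pits)}, KS statistic: {stat:.3f}, p = {p:.3g}")
```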

  15. Calibration of PMIS pavement performance prediction models.

    DOT National Transportation Integrated Search

    2012-02-01

    Improve the accuracy of TxDOT's existing pavement performance prediction models by calibrating these models using actual field data obtained from the Pavement Management Information System (PMIS). Ensure logical performance superiority patte...

  16. Performance Evaluation Model for Application Layer Firewalls.

    PubMed

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
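
    A minimal sketch of the queueing arithmetic behind such an analysis, assuming a single M/M/c service-desk abstraction with made-up arrival and service rates rather than the paper's full three-layer Erlang model:

```python
# Sketch only: Erlang-C delay probability and mean wait for one service desk.
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang-B blocking probability via the standard recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

def erlang_c(servers: int, offered_load: float) -> float:
    """Probability an arriving request must queue (Erlang C)."""
    b = erlang_b(servers, offered_load)
    rho = offered_load / servers
    return b / (1.0 - rho + rho * b)

arrival_rate = 90.0      # requests per second (assumed)
service_rate = 10.0      # requests per second per service desk (assumed)
servers = 12

a = arrival_rate / service_rate                               # offered load in Erlangs
p_wait = erlang_c(servers, a)
mean_wait = p_wait / (servers * service_rate - arrival_rate)  # W_q for a stable M/M/c queue
print(f"P(wait) = {p_wait:.3f}, mean queueing delay = {mean_wait * 1000:.2f} ms")
```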

  17. Quantifying human behavior uncertainties in a coupled agent-based model for water resources management

    NASA Astrophysics Data System (ADS)

    Hyun, J. Y.; Yang, Y. C. E.; Tidwell, V. C.; Macknick, J.

    2017-12-01

    Modeling human behaviors and decisions in water resources management is a challenging issue due to its complexity and uncertain characteristics that affected by both internal (such as stakeholder's beliefs on any external information) and external factors (such as future policies and weather/climate forecast). Stakeholders' decision regarding how much water they need is usually not entirely rational in the real-world cases, so it is not quite suitable to model their decisions with a centralized (top-down) approach that assume everyone in a watershed follow the same order or pursue the same objective. Agent-based modeling (ABM) uses a decentralized approach (bottom-up) that allow each stakeholder to make his/her own decision based on his/her own objective and the belief of information acquired. In this study, we develop an ABM which incorporates the psychological human decision process by the theory of risk perception. The theory of risk perception quantifies human behaviors and decisions uncertainties using two sequential methodologies: the Bayesian Inference and the Cost-Loss Problem. The developed ABM is coupled with a regulation-based water system model: Riverware (RW) to evaluate different human decision uncertainties in water resources management. The San Juan River Basin in New Mexico (Figure 1) is chosen as a case study area, while we define 19 major irrigation districts as water use agents and their primary decision is to decide the irrigated area on an annual basis. This decision will be affected by three external factors: 1) upstream precipitation forecast (potential amount of water availability), 2) violation of the downstream minimum flow (required to support ecosystems), and 3) enforcement of a shortage sharing plan (a policy that is currently undertaken in the region for drought years). Three beliefs (as internal factors) that correspond to these three external factors will also be considered in the modeling framework. The objective of this study is

  18. Quantifying learning in biotracer studies.

    PubMed

    Brown, Christopher J; Brett, Michael T; Adame, Maria Fernanda; Stewart-Koster, Ben; Bunn, Stuart E

    2018-04-12

    Mixing models have become requisite tools for analyzing biotracer data, most commonly stable isotope ratios, to infer dietary contributions of multiple sources to a consumer. However, Bayesian mixing models will always return a result that defaults to their priors if the data poorly resolve the source contributions, and thus, their interpretation requires caution. We describe an application of information theory to quantify how much has been learned about a consumer's diet from new biotracer data. We apply the approach to two example data sets. We find that variation in the isotope ratios of sources limits the precision of estimates for the consumer's diet, even with a large number of consumer samples. Thus, the approach which we describe is a type of power analysis that uses a priori simulations to find an optimal sample size. Biotracer data are fundamentally limited in their ability to discriminate consumer diets. We suggest that other types of data, such as gut content analysis, must be used as prior information in model fitting, to improve model learning about the consumer's diet. Information theory may also be used to identify optimal sampling protocols in situations where sampling of consumers is limited due to expense or ethical concerns.
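
    One simple way to express this idea is the Kullback-Leibler divergence between prior and posterior beliefs about a source contribution; the sketch below uses simulated Dirichlet samples as stand-ins for mixing-model output, so the numbers are purely illustrative:

```python
# Sketch: "learning" as KL divergence between prior and posterior beliefs
# about one source's dietary contribution (simulated stand-in samples).
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(7)

prior = rng.dirichlet(alpha=[1, 1, 1], size=20_000)[:, 0]        # uninformative prior, source 1
posterior = rng.dirichlet(alpha=[8, 3, 2], size=20_000)[:, 0]    # hypothetical fitted posterior

bins = np.linspace(0, 1, 51)
p_post, _ = np.histogram(posterior, bins=bins, density=True)
p_prior, _ = np.histogram(prior, bins=bins, density=True)

# KL(posterior || prior) in nats; ~0 means the data taught us nothing new.
kl = entropy(p_post + 1e-12, p_prior + 1e-12)
print(f"information gained about source 1: {kl:.2f} nats")
```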

  19. Rotorcraft Performance Model (RPM) for use in AEDT.

    DOT National Transportation Integrated Search

    2015-11-01

    This report documents a rotorcraft performance model for use in the FAA's Aviation Environmental Design Tool. The new rotorcraft performance model is physics-based. This new model replaces the existing helicopter trajectory modeling methods in the ...

  20. Untangling Performance from Success

    NASA Astrophysics Data System (ADS)

    Yucesoy, Burcu; Barabasi, Albert-Laszlo

    Fame, popularity and celebrity status, frequently used tokens of success, are often loosely related to, or even divorced from, professional performance. This dichotomy is partly rooted in the difficulty of distinguishing performance, an individual measure that captures the actions of a performer, from success, a collective measure that captures a community's reactions to these actions. Yet, finding the relationship between the two measures is essential for all areas that aim to objectively reward excellence, from science to business. Here we quantify the relationship between performance and success by focusing on tennis, an individual sport where the two quantities can be independently measured. We show that a predictive model, relying only on a tennis player's performance in tournaments, can accurately predict an athlete's popularity, both during a player's active years and after retirement. Hence the model establishes a direct link between performance and momentary popularity. The agreement between the performance-driven and observed popularity suggests that in most areas of human achievement exceptional visibility may be rooted in detectable performance measures. This research was supported by Air Force Office of Scientific Research (AFOSR) under agreement FA9550-15-1-0077.

  1. Quantifying sources of elemental carbon over the Guanzhong Basin of China: A consistent network of measurements and WRF-Chem modeling.

    PubMed

    Li, Nan; He, Qingyang; Tie, Xuexi; Cao, Junji; Liu, Suixin; Wang, Qiyuan; Li, Guohui; Huang, Rujin; Zhang, Qiang

    2016-07-01

    We conducted a year-long WRF-Chem (Weather Research and Forecasting Chemical) model simulation of elemental carbon (EC) aerosol and compared the modeling results to the surface EC measurements in the Guanzhong (GZ) Basin of China. The main goals of this study were to quantify the individual contributions of different EC sources to EC pollution, and to find the major cause of the EC pollution in this region. The EC measurements were simultaneously conducted at 10 urban, rural, and background sites over the GZ Basin from May 2013 to April 2014, and provided a good base against which to evaluate the model simulation. The model evaluation showed that the calculated annual mean EC concentration was 5.1 μgC m⁻³, which was consistent with the observed value of 5.3 μgC m⁻³. Moreover, the model result also reproduced the magnitude of measured EC in all seasons (regression slope = 0.98-1.03), as well as the spatial and temporal variations (r = 0.55-0.78). We conducted several sensitivity studies to quantify the individual contributions of EC sources to EC pollution. The sensitivity simulations showed that the local and outside sources contributed about 60% and 40% to the annual mean EC concentration, respectively, implying that local sources were the major EC pollution contributors in the GZ Basin. Among the local sources, residential sources contributed the most, followed by industry and transportation sources. A further analysis suggested that a 50% reduction of industry or transportation emissions only caused a 6% decrease in the annual mean EC concentration, while a 50% reduction of residential emissions reduced the winter surface EC concentration by up to 25%. With respect to the serious air pollution problems (including EC pollution) in the GZ Basin, our findings can provide an insightful view on local air pollution control strategies. Copyright © 2016 Elsevier Ltd. All rights reserved.
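
    The source-contribution arithmetic behind such sensitivity runs can be sketched as follows; the perturbed-run concentrations are hypothetical placeholders, not values from the study:

```python
# Back-of-envelope sketch of the "zero-out" sensitivity logic: a source's
# contribution is the drop in concentration when its emissions are removed.
baseline_ec = 5.1                    # annual mean EC, ugC/m3 (as reported above)
ec_without_local = 2.0               # hypothetical run with local emissions off
ec_without_residential = 3.1         # hypothetical run with residential emissions off

local_share = (baseline_ec - ec_without_local) / baseline_ec
residential_share = (baseline_ec - ec_without_residential) / baseline_ec
print(f"local sources: {local_share:.0%}, residential sources: {residential_share:.0%}")
```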

  2. Molecular visualizing and quantifying immune-associated peroxynitrite fluxes in phagocytes and mouse inflammation model.

    PubMed

    Li, Zan; Yan, Shi-Hai; Chen, Chen; Geng, Zhi-Rong; Chang, Jia-Yin; Chen, Chun-Xia; Huang, Bing-Huan; Wang, Zhi-Lin

    2017-04-15

    Reactions of peroxynitrite (ONOO⁻) with biomolecules can lead to cytotoxic and cytoprotective events. Due to the difficulty of directly and unambiguously measuring its levels, most of the beneficial effects associated with ONOO⁻ in vivo remain controversial or poorly characterized. Recently, optical imaging has served as a powerful noninvasive approach to studying ONOO⁻ in living systems. However, ratiometric probes for ONOO⁻ are currently lacking. Herein, we report the design, synthesis, and biological evaluation of F482, a novel fluorescence indicator that relies on ONOO⁻-induced diene oxidation. The remarkable sensitivity, selectivity, and photostability of F482 enabled us to visualize basal ONOO⁻ in immune-stimulated phagocyte cells and quantify its generation in phagosomes by high-throughput flow cytometry analysis. With the aid of in vivo ONOO⁻ imaging in a mouse inflammation model assisted by F482, we envision that F482 will find widespread applications in the study of the ONOO⁻ biology associated with physiological and pathological processes in vitro and in vivo. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Talker-specificity and adaptation in quantifier interpretation

    PubMed Central

    Yildirim, Ilker; Degen, Judith; Tanenhaus, Michael K.; Jaeger, T. Florian

    2015-01-01

    Linguistic meaning has long been recognized to be highly context-dependent. Quantifiers like many and some provide a particularly clear example of context-dependence. For example, the interpretation of quantifiers requires listeners to determine the relevant domain and scale. We focus on another type of context-dependence that quantifiers share with other lexical items: talker variability. Different talkers might use quantifiers with different interpretations in mind. We used a web-based crowdsourcing paradigm to study participants’ expectations about the use of many and some based on recent exposure. We first established that the mapping of some and many onto quantities (candies in a bowl) is variable both within and between participants. We then examined whether and how listeners’ expectations about quantifier use adapts with exposure to talkers who use quantifiers in different ways. The results demonstrate that listeners can adapt to talker-specific biases in both how often and with what intended meaning many and some are used. PMID:26858511

  4. Shoulder Arthroscopy Simulator Training Improves Shoulder Arthroscopy Performance in a Cadaver Model

    PubMed Central

    Henn, R. Frank; Shah, Neel; Warner, Jon J.P.; Gomoll, Andreas H.

    2013-01-01

    Purpose The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaver model of shoulder arthroscopy. Methods Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and nine of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The two groups were compared with Student's t-tests, and change over time within groups was analyzed with paired t-tests. Results There were no observed differences between the two groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (p<0.05). Time to completion was significantly faster in the simulator group compared to controls at final evaluation (p<0.05). No difference was observed between the groups on the subjective scores at final evaluation (p=0.98). Conclusions Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaver model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. Clinical Relevance There may be a role for simulator training in shoulder arthroscopy education. PMID:23591380

  5. Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk (Final Report)

    EPA Science Inventory

    EPA announced the availability of the final report, Use of Physiologically Based Pharmacokinetic (PBPK) Models to Quantify the Impact of Human Age and Interindividual Differences in Physiology and Biochemistry Pertinent to Risk Final Report for Cooperative Agreement. Th...

  6. Quantifying and tuning entanglement for quantum systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing

    A 2D Ising model with a transverse field on a triangular lattice is studied using exact diagonalization. The quantum entanglement of the system is quantified by the entanglement of formation. The ground state property of the system is studied, and the quantified entanglement is shown to be closely related to the ground state wavefunction, while the singularity in the entanglement as a function of the transverse field is a reasonable indicator of the quantum phase transition. To tune the entanglement, one can either include an impurity of tunable strength in the otherwise homogeneous system, or vary the external transverse field as a tuner. The latter kind of tuning involves complicated dynamical properties of the system. From the study of the dynamics on a comparatively smaller system, we provide ways to tune the entanglement without triggering any decoherence. The finite temperature effect is also discussed. In addition to the above physical results, the realization of the trace-minimization method in our system is presented, and the scalability of this method to larger systems is discussed.

  7. Toward quantifying the composition of soft tissues by spectral CT with Medipix3.

    PubMed

    Ronaldson, J Paul; Zainon, Rafidah; Scott, Nicola Jean Agnes; Gieseg, Steven Paul; Butler, Anthony P; Butler, Philip H; Anderson, Nigel G

    2012-11-01

    To determine the potential of spectral computed tomography (CT) with Medipix3 for quantifying fat, calcium, and iron in soft tissues within small animal models and surgical specimens of diseases such as fatty liver (metabolic syndrome) and unstable atherosclerosis. The spectroscopic method was applied to tomographic data acquired using a micro-CT system incorporating a Medipix3 detector array with a silicon sensor layer and a microfocus x-ray tube operating at 50 kVp. A 10 mm diameter perspex phantom containing a fat surrogate (sunflower oil) and aqueous solutions of ferric nitrate, calcium chloride, and iodine was imaged with multiple energy bins. The authors used the spectroscopic characteristics of the CT number to establish a basis for the decomposition of soft tissue components. The potential of the method of constrained least squares for quantifying different sets of materials was evaluated in terms of information entropy and degrees of freedom, with and without the use of a volume conservation constraint. The measurement performance was evaluated quantitatively using atheroma and mouse equivalent phantoms. Finally, the decomposition method was assessed qualitatively using a euthanized mouse and an excised human atherosclerotic plaque. Spectral CT measurements of a phantom containing tissue surrogates confirmed the ability to distinguish these materials by the spectroscopic characteristics of their CT number. The assessment of performance potential in terms of information entropy and degrees of freedom indicated that certain sets of up to three materials could be decomposed by the method of constrained least squares. However, there was insufficient information within the data set to distinguish calcium from iron within soft tissues. The quantification of calcium concentration and fat mass fraction within atheroma and mouse equivalent phantoms by spectral CT correlated well with the nominal values (R² = 0.990 and R² = 0.985, respectively). In the euthanized

  8. COBRA ATD minefield detection model initial performance analysis

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A statistical performance analysis of the USMC Coastal Battlefield Reconnaissance and Analysis (COBRA) Minefield Detection (MFD) Model has been performed in support of the COBRA ATD Program under execution by the Naval Surface Warfare Center/Dahlgren Division/Coastal Systems Station. This analysis uses the Veridian ERIM International MFD model from the COBRA Sensor Performance Evaluation and Computational Tools for Research Analysis modeling toolbox and a collection of multispectral mine detection algorithm response distributions for mines and minelike clutter objects. These mine detection response distributions were generated from actual COBRA ATD test missions over littoral zone minefields. This analysis serves to validate both the utility and effectiveness of the COBRA MFD Model as a predictive MFD performance tool. COBRA ATD minefield detection model algorithm performance results based on a simulated baseline minefield detection scenario are presented, as well as results of an MFD model algorithm parametric sensitivity study.

  9. Modeling and dynamic monitoring of ecosystem performance in the Yukon River Basin

    USGS Publications Warehouse

    Wylie, Bruce K.; Zhang, L.; Ji, Lei; Tieszen, Larry L.; Bliss, N.B.

    2008-01-01

    Central Alaska is ecologically sensitive and experiencing stress in response to marked regional warming. Resource managers would benefit from an improved ability to monitor ecosystem processes in response to climate change, fire, insect damage, and management policies and to predict responses to future climate scenarios. We have developed a method for analyzing ecosystem performance as represented by the growing season integral of the normalized difference vegetation index (NDVI), which is a measure of greenness that can be interpreted in terms of plant growth or photosynthetic activity (gross primary productivity). The approach illustrates the status and trends of ecosystem changes and separates the influences of climate and local site conditions from the influences of disturbances and land management. We emphasize the ability to quantify ecosystem processes, not simply changes in land cover, across the entire period of the remote sensing archive (Wylie and others, 2008). The method builds upon remotely sensed measures of vegetation greenness for each growing season. By itself, however, a time series of greenness often reflects annual climate variations in temperature and precipitation. Our method seeks to remove the influence of climate so that changes in underlying ecological conditions are identified and quantified. We define an "expected ecosystem performance" to represent the greenness response expected in a particular year given the climate of that year. We distinguish "performance anomalies" as cases where the ecosystem response is significantly different from the expected ecosystem performance. Maps of the performance anomalies (fig. 1) and trends in the anomalies give valuable information on the ecosystems for land managers and policy makers at a resolution of 1 km to 250 m.

  10. Feasibility of Quantifying Arterial Cerebral Blood Volume Using Multiphase Alternate Ascending/Descending Directional Navigation (ALADDIN).

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2016-01-01

    Arterial cerebral blood volume (aCBV) is associated with many physiologic and pathologic conditions. Recently, multiphase balanced steady state free precession (bSSFP) readout was introduced to measure labeled blood signals in the arterial compartment, based on the fact that the signal difference between labeled and unlabeled blood decreases with the number of RF pulses, which is affected by blood velocity. In this study, we evaluated the feasibility of a new 2D inter-slice bSSFP-based arterial spin labeling (ASL) technique, termed alternate ascending/descending directional navigation (ALADDIN), to quantify aCBV using multiphase acquisition in six healthy subjects. A new kinetic model considering bSSFP RF perturbations was proposed to describe the multiphase data and thus to quantify aCBV. Since the inter-slice time delay (TD) and gap affected the distribution of labeled blood spins in the arterial and tissue compartments, we performed the experiments with two TDs (0 and 500 ms) and two gaps (300% and 450% of slice thickness) to evaluate their roles in quantifying aCBV. Comparison studies using our technique and an existing method termed arterial volume using arterial spin tagging (AVAST) were also separately performed in five subjects. At 300% gap or 500-ms TD, significant tissue perfusion signals were demonstrated, while tissue perfusion signals were minimized and arterial signals were maximized at 450% gap and 0-ms TD. ALADDIN has an advantage of visualizing bi-directional flow effects (ascending/descending) in a single experiment. Labeling efficiency (α) of inter-slice blood flow effects could be measured in the superior sagittal sinus (SSS) (20.8±3.7%) and was used for aCBV quantification. As a result of fitting to the proposed model, aCBV values in gray matter (1.4-2.3 mL/100 mL) were in good agreement with those from the literature. Our technique showed high correlation with AVAST, especially when arterial signals were accentuated (i.e., when TD = 0 ms) (r = 0

  11. Emerging Patient-Driven Health Care Models: An Examination of Health Social Networks, Consumer Personalized Medicine and Quantified Self-Tracking

    PubMed Central

    Swan, Melanie

    2009-01-01

    A new class of patient-driven health care services is emerging to supplement and extend traditional health care delivery models and empower patient self-care. Patient-driven health care can be characterized as having an increased level of information flow, transparency, customization, collaboration and patient choice and responsibility-taking, as well as quantitative, predictive and preventive aspects. The potential exists to both improve traditional health care systems and expand the concept of health care through new services. This paper examines three categories of novel health services: health social networks, consumer personalized medicine and quantified self-tracking. PMID:19440396

  12. Nested modeling approach to quantify sediment transport pathways and temporal variability of barrier island evolution

    NASA Astrophysics Data System (ADS)

    Long, J. W.; Dalyander, S.; Sherwood, C. R.; Thompson, D. M.; Plant, N. G.

    2012-12-01

    The Chandeleur Islands, situated off the coast of Louisiana in the Gulf of Mexico, comprise a sand-starved barrier island system that has been disintegrating over the last decade. The persistent sediment transport in this area is predominantly directed alongshore, but overwash and inundation during storm conditions have fragmented the island and reduced the subaerial extent by almost 75% since 2001. From 2010 to 2011, a sand berm was constructed along the Gulf side of the island, adding 20 million cubic yards of sediment to this barrier island system. The redistribution of this sediment, particularly whether it remains in the active system and progrades the barrier island, has been evaluated using a series of numerical models and an extensive set of in situ and remote sensing observations. We have developed a coupled numerical modeling system capable of simulating morphologic evolution of the sand berm and barrier island using observations and predictions of regional and nearshore oceanographic processes. A nested approach provides large scale oceanographic information to force island evolution in a series of smaller grids, including two nearshore domains that are designed to simulate (1) the persistent alongshore sediment transport (O(months-years)) and (2) the overwash and breaching of the island/berm due to cross-shore forcing driven by winter cold fronts and tropical storms (O(hours-days)). The coupled model is evaluated using observations of waves, water levels, currents, and topographic/morphologic change. Modeled processes are then used to identify the dominant sediment transport pathways and quantify the role of alongshore and cross-shore sediment transport in evolving the barrier island over a range of temporal scales.

  13. Performance of Lynch syndrome predictive models in quantifying the likelihood of germline mutations in patients with abnormal MLH1 immunoexpression.

    PubMed

    Cabreira, Verónica; Pinto, Carla; Pinheiro, Manuela; Lopes, Paula; Peixoto, Ana; Santos, Catarina; Veiga, Isabel; Rocha, Patrícia; Pinto, Pedro; Henrique, Rui; Teixeira, Manuel R

    2017-01-01

    Lynch syndrome (LS) accounts for up to 4% of all colorectal cancers (CRC). Detection of a pathogenic germline mutation in one of the mismatch repair genes is the definitive criterion for LS diagnosis, but it is time-consuming and expensive. Immunohistochemistry is the most sensitive prescreening test, and its predictive value is very high for loss of expression of MSH2, MSH6, and (isolated) PMS2, but not for MLH1. We evaluated whether LS predictive models have a role in improving the molecular testing algorithm in this specific setting by studying 38 individuals referred for molecular testing who were subsequently shown to have loss of MLH1 immunoexpression in their tumors. For each proband we calculated a risk score, which represents the probability that the patient with CRC carries a pathogenic MLH1 germline mutation, using the PREMM1,2,6 and MMRpro predictive models. Of the 38 individuals, 18.4% had a pathogenic MLH1 germline mutation. MMRpro performed better for the purpose of this study, presenting an AUC of 0.83 (95% CI 0.67-0.9; P < 0.001) compared with an AUC of 0.68 (95% CI 0.51-0.82, P = 0.09) for PREMM1,2,6. Considering a threshold of 5%, MMRpro would eliminate unnecessary germline mutation analysis in a significant proportion of cases while keeping very high sensitivity. We conclude that MMRpro is useful for correctly predicting who should be screened for a germline MLH1 gene mutation, and we propose an algorithm to improve the cost-effectiveness of LS diagnosis.
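
    A minimal sketch of the evaluation and triage logic described above (AUC plus a 5% risk-score threshold), using a synthetic stand-in cohort rather than the study data:

```python
# Sketch only: AUC of a risk score and a 5% threshold for triaging germline
# testing. The cohort and risk scores are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
carrier = np.array([1] * 7 + [0] * 31)    # 7 of 38 carriers (~18%, as reported above)
risk_score = np.clip(0.05 + 0.25 * carrier + rng.normal(0, 0.1, size=38), 0, 1)  # hypothetical model output

print("AUC:", round(roc_auc_score(carrier, risk_score), 2))
triage = risk_score >= 0.05               # send for MLH1 germline analysis
print("patients triaged to germline testing:", int(triage.sum()), "of", len(carrier))
```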

  14. A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables. Part II - Validation and localization analysis

    NASA Astrophysics Data System (ADS)

    Das, Arghya; Tengattini, Alessandro; Nguyen, Giang D.; Viggiani, Gioacchino; Hall, Stephen A.; Einav, Itai

    2014-10-01

    We study the mechanical failure of cemented granular materials (e.g., sandstones) using a constitutive model based on breakage mechanics for grain crushing and damage mechanics for cement fracture. The theoretical aspects of this model are presented in Part I: Tengattini et al. (2014), A thermomechanical constitutive model for cemented granular materials with quantifiable internal variables, Part I - Theory (Journal of the Mechanics and Physics of Solids, 10.1016/j.jmps.2014.05.021). In this Part II we investigate the constitutive and structural responses of cemented granular materials through analyses of Boundary Value Problems (BVPs). The multiple failure mechanisms captured by the proposed model enable the behavior of cemented granular rocks to be well reproduced for a wide range of confining pressures. Furthermore, through comparison of the model predictions and experimental data, the micromechanical basis of the model provides improved understanding of failure mechanisms of cemented granular materials. In particular, we show that grain crushing is the predominant inelastic deformation mechanism under high pressures while cement failure is the relevant mechanism at low pressures. Over an intermediate pressure regime a mixed mode of failure mechanisms is observed. Furthermore, the micromechanical roots of the model allow the effects on localized deformation modes of various initial microstructures to be studied. The results obtained from both the constitutive responses and BVP solutions indicate that the proposed approach and model provide a promising basis for future theoretical studies on cemented granular materials.

  15. Interharmonic modulation products as a means to quantify nonlinear D-region interactions

    NASA Astrophysics Data System (ADS)

    Moore, Robert

    Experimental observations performed during dual beam ionospheric HF heating experiments at the High Frequency Active Auroral Research Program (HAARP) HF transmitter in Gakona, Alaska, are used to quantify the relative importance of specific nonlinear interactions that occur within the D region ionosphere. During these experiments, HAARP broadcast two amplitude modulated HF beams whose center frequencies were separated by less than 20 kHz. One beam was sinusoidally modulated at 500 Hz, while the second beam was sinusoidally modulated using a 1-7 kHz linear frequency-time chirp. ELF/VLF observations performed at two different locations (3 and 98 km from HAARP) provide clear evidence of strong interactions between all field components of the two HF beams in the form of low and high order interharmonic modulation products. From a theoretical standpoint, the observed interharmonic modulation products could be produced by several different nonlinearities. The two primary nonlinearities take the form of wave-medium interactions (i.e., cross modulation), wherein the ionospheric conductivity modulation produced by one signal crosses onto the other signal via collision frequency modification, and wave-wave interactions, wherein the conduction current associated with one wave mixes with the electric field of the other wave to produce electron temperature oscillations. We are able to separate and quantify these two different nonlinearities, and we conclude that the wave-wave interactions dominate the wave-medium interactions by a factor of two. These results are of great importance for the modeling of transionospheric radio wave propagation, in that both the wave-wave and the wave-medium interactions could be responsible for a significant amount of anomalous absorption.

  16. Risk prediction models of breast cancer: a systematic review of model performances.

    PubMed

    Anothaisintawee, Thunyarat; Teerawattananon, Yot; Wiratkapun, Chollathip; Kasamesup, Vijj; Thakkinstian, Ammarin

    2012-05-01

    An increasing number of risk prediction models have been developed to estimate breast cancer risk in individual women. However, the performance of these models is questionable. We therefore conducted a study with the aim of systematically reviewing previous risk prediction models. The results of this review help to identify the most reliable model and indicate the strengths and weaknesses of each model for guiding future model development. We searched MEDLINE (PubMed) from 1949 and EMBASE (Ovid) from 1974 until October 2010. Observational studies which constructed models using regression methods were selected. Information about model development and performance was extracted. Twenty-five out of 453 studies were eligible. Of these, 18 developed prediction models and 7 validated existing prediction models. Up to 13 variables were included in the models, and sample sizes for each study ranged from 550 to 2,404,636. Internal validation was performed for four models, while five models had external validation. The Gail and the Rosner and Colditz models were the significant models that were subsequently modified by other scholars. Calibration performance of most models was fair to good (expected/observed ratio: 0.87-1.12), but discriminatory accuracy was poor to fair both in internal validation (concordance statistics: 0.53-0.66) and in external validation (concordance statistics: 0.56-0.63). Most models yielded relatively poor discrimination in both internal and external validation. This poor discriminatory accuracy of existing models might be due to a lack of knowledge about risk factors, heterogeneous subtypes of breast cancer, and different distributions of risk factors across populations. In addition, the concordance statistic itself is insensitive as a measure of improvement in discrimination. Therefore, newer methods such as the net reclassification index should be considered to evaluate improvements in the performance of newly developed models.
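
    The two headline performance measures discussed above can be sketched on synthetic data as follows: calibration as the expected/observed (E/O) ratio and discrimination as the concordance statistic (equivalent to the ROC AUC for binary outcomes):

```python
# Sketch only: calibration (E/O ratio) and discrimination (c-statistic) for a
# risk model, evaluated on a synthetic cohort.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
predicted_risk = np.clip(rng.beta(2, 18, size=n), 1e-6, 1 - 1e-6)   # hypothetical predicted risks
observed = rng.binomial(1, predicted_risk)                           # outcomes drawn from those risks

eo_ratio = predicted_risk.sum() / observed.sum()       # ~1.0 means well calibrated
c_statistic = roc_auc_score(observed, predicted_risk)  # 0.5 = no discrimination
print(f"E/O ratio: {eo_ratio:.2f}, concordance statistic: {c_statistic:.2f}")
```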

  17. Quantifying the Stable Boundary Layer Structure and Evolution during T-REX 2006

    DTIC Science & Technology

    2014-09-30

    A study integrating surface observations, data from in-situ measurements, and a nested numerical model, with two related topics, was conducted in this project: the WRF ... as well as quantify differences in fine-scale model output using the different turbulent mixing/diffusion options in the WRF-ARW model; and (2) ... WRF model planetary boundary layer schemes were also conducted to study a downslope windstorm and rotors in the Las Vegas valley. Two events (March 20

  18. Structural equation model analysis for the evaluation of overall driving performance: A driving simulator study focusing on driver distraction.

    PubMed

    Papantoniou, Panagiotis

    2018-04-03

    The present research has 2 main objectives. The first is to investigate whether latent model analysis through a structural equation model can be implemented on driving simulator data in order to define an unobserved driving performance variable. The second objective is to investigate and quantify the effect of several risk factors, including distraction sources, driver characteristics, and road and traffic environment, on overall driving performance rather than on independent driving performance measures. For the scope of the present research, 95 participants from all age groups were asked to drive under different types of distraction (conversation with a passenger, cell phone use) in urban and rural road environments with low and high traffic volume in a driving simulator experiment. In the statistical analysis, a correlation table is presented investigating the statistical relationships between driving simulator measures, and a structural equation model is developed in which overall driving performance is estimated as a latent variable based on several individual driving simulator measures. Results confirm the suitability of the structural equation model and indicate that the selection of the specific performance measures that define overall performance should be guided by a rule of representativeness between the selected variables. Moreover, conversation with a passenger was not found to have a statistically significant effect, indicating that drivers do not change their performance while conversing with a passenger compared to undistracted driving. On the other hand, results support the hypothesis that cell phone use has a negative effect on driving performance. Furthermore, regarding driver characteristics, age, gender, and experience all have a significant effect on driving performance, indicating that driver-related characteristics play the most crucial role in overall driving

  19. Quantifying the Uncertainty in Streamflow Predictions Using Swat for Brazos-Colorado Coastal Watershed, Texas

    NASA Astrophysics Data System (ADS)

    Mandal, D.; Bhatia, N.; Srivastav, R. K.

    2016-12-01

    The Soil and Water Assessment Tool (SWAT) is one of the most comprehensive hydrologic models for simulating streamflow for a watershed. The two major inputs for a SWAT model are: (i) Digital Elevation Models (DEMs) and (ii) Land Use and Land Cover (LULC) maps. This study aims to quantify the uncertainty in streamflow predictions using SWAT for the San Bernard River in the Brazos-Colorado coastal watershed, Texas, by incorporating the respective datasets from different sources: (i) DEM data will be obtained from the ASTER GDEM V2, GMTED2010, NHD DEM, and SRTM DEM datasets, with resolutions ranging from 1/3 arc-second to 30 arc-second, and (ii) LULC data will be obtained from the GLCC V2, MRLC NLCD2011, NOAA C-CAP, USGS GAP, and TCEQ databases. Weather variables (precipitation and max-min temperature at daily scale) will be obtained from the National Climatic Data Center (NCDC), and the SWAT in-built STATSGO tool will be used to obtain the soil maps. The SWAT model will be calibrated using the SWAT-CUP SUFI-2 approach, and its performance will be evaluated using the statistical indices of Nash-Sutcliffe efficiency (NSE), ratio of root-mean-square error to the standard deviation of observed streamflow (RSR), and percent bias error (PBIAS). The study will help understand the performance of the SWAT model with varying data sources and eventually aid the regional state water boards in planning, designing, and managing hydrologic systems.
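
    A minimal sketch of the three evaluation statistics named above (NSE, RSR, PBIAS), applied to placeholder observed and simulated flows:

```python
# Sketch only: standard streamflow-evaluation statistics on placeholder data.
import numpy as np

def nse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rsr(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def pbias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

observed = np.array([12.0, 30.5, 22.1, 8.7, 45.3, 18.9])    # placeholder daily flows (m3/s)
simulated = np.array([10.8, 33.0, 20.5, 9.9, 41.2, 17.4])
print(f"NSE={nse(observed, simulated):.2f}, RSR={rsr(observed, simulated):.2f}, "
      f"PBIAS={pbias(observed, simulated):.1f}%")
```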

  20. The Audience Performs: A Phenomenological Model for Criticism of Oral Interpretation Performance.

    ERIC Educational Resources Information Center

    Langellier, Kristin M.

    Richard Lanigan's phenomenology of human communication is applicable to the development of a model for critiquing oral interpretation performance. This phenomenological model takes conscious experience of the relationship of a person and the lived-world as its data base, and assumes a phenomenology of performance which creates text in the triadic…

  1. Quantifying uncertainty in morphologically-derived bedload transport rates for large braided rivers: insights from high-resolution, high-frequency digital elevation model differencing

    NASA Astrophysics Data System (ADS)

    Brasington, J.; Hicks, M.; Wheaton, J. M.; Williams, R. D.; Vericat, D.

    2013-12-01

    Repeat surveys of channel morphology provide a means to quantify fluvial sediment storage and enable inferences about changes in long-term sediment supply, watershed delivery and bed level adjustment; information vital to support effective river and land management. Over shorter time-scales, direct differencing of fluvial terrain models may also offer a route to predict reach-averaged sediment transport rates and quantify the patterns of channel morphodynamics and the processes that force them. Recent and rapid advances in geomatics have facilitated these goals by enabling the acquisition of topographic data at spatial resolutions and precisions suitable for characterising river morphology at the scale of individual grains over multi-kilometre reaches. Despite improvements in topographic surveying, inverting the terms of the sediment budget to derive estimates of sediment transport and link these to morphodynamic processes is, nonetheless, often confounded by limited knowledge of either the sediment supply or efflux across a boundary of the control volume, or unobserved cut-and-fill taking place between surveys. This latter problem is particularly poorly constrained, as field logistics frequently preclude surveys at a temporal frequency sufficient to capture changes in sediment storage associated with each competent event, let alone changes during individual floods. In this paper, we attempt to quantify the principal sources of uncertainty in morphologically-derived bedload transport rates for the large, labile, gravel-bed braided Rees River which drains the Southern Alps of NZ. During the austral summer of 2009-10, a unique timeseries of 10 high quality DEMs was derived for a 3 x 0.7 km reach of the Rees, using a combination of mobile terrestrial laser scanning, aDcp soundings and aerial image analysis. Complementary measurements of the forcing flood discharges and estimates of event-based particle step lengths were also acquired during the field campaign
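
    A minimal sketch of DEM-of-difference budgeting with a minimum level of detection to propagate survey uncertainty; the grids, cell size, and error values below are synthetic placeholders, not the Rees River surveys:

```python
# Sketch only: DEM-of-difference (DoD) cut/fill volumes with a 95% minimum
# level of detection (minLoD) from two per-survey elevation errors.
import numpy as np

rng = np.random.default_rng(5)
cell_area = 0.5 * 0.5                        # m2 per grid cell (assumed)
dem_t1 = rng.normal(100.0, 0.5, size=(200, 200))
dem_t2 = dem_t1 + rng.normal(0.0, 0.05, size=dem_t1.shape)   # subtle change plus noise

sigma_survey = 0.03                          # per-survey elevation error (m, assumed)
min_lod = 1.96 * np.sqrt(2) * sigma_survey   # propagated error threshold at 95%

dod = dem_t2 - dem_t1
significant = np.abs(dod) > min_lod
fill = dod[significant & (dod > 0)].sum() * cell_area     # deposition volume (m3)
cut = -dod[significant & (dod < 0)].sum() * cell_area     # erosion volume (m3)
print(f"minLoD={min_lod:.3f} m, fill={fill:.1f} m3, cut={cut:.1f} m3, net={fill - cut:.1f} m3")
```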

  2. Quantifying Modern Recharge to the Nubian Sandstone Aquifer System: Inferences from GRACE and Land Surface Models

    NASA Astrophysics Data System (ADS)

    Mohamed, A.; Sultan, M.; Ahmed, M.; Yan, E.

    2014-12-01

    The Nubian Sandstone Aquifer System (NSAS) is shared by Egypt, Libya, Chad, and Sudan and is one of the largest (area: ~2 × 10⁶ km2) groundwater systems in the world. Despite its importance to the population of these countries, major hydrological parameters such as modern recharge and extraction rates remain poorly investigated given: (1) the large extent of the NSAS, (2) the absence of comprehensive monitoring networks, (3) the general inaccessibility of many of the NSAS regions, (4) difficulties in collecting background information, largely included in unpublished governmental reports, and (5) limited local funding to support the construction of monitoring networks and/or collection of field and background datasets. Data from monthly Gravity Recovery and Climate Experiment (GRACE) gravity solutions were processed (Gaussian smoothed: 100 km; rescaled) and used to quantify the modern recharge to the NSAS during the period from January 2003 to December 2012. To isolate the groundwater component in the GRACE data, the soil moisture and river channel storages were removed using outputs from the most recent Community Land Model version 4.5 (CLM4.5). GRACE-derived recharge calculations were performed over the southern NSAS outcrops (area: 835 × 10³ km2) in Sudan and Chad that receive an average annual precipitation of 65 km3 (77.5 mm). GRACE-derived recharge rates were estimated at 2.79 ± 0.98 km3/yr (3.34 ± 1.17 mm/yr). If we take into account the total annual extraction rates (~0.4 km3; CEDARE, 2002) from Chad and Sudan, the average annual recharge rate for the NSAS could reach up to ~3.20 ± 1.18 km3/yr (3.84 ± 1.42 mm/yr). Our recharge rate estimates are similar to those calculated using (1) groundwater flow modelling in the Central Sudan Rift Basins (4-8 mm/yr; Abdalla, 2008), (2) the WaterGAP global-scale groundwater recharge model (< 5 mm/yr; Döll and Fiedler, 2008), and (3) a chloride tracer in Sudan (3.05 mm/yr; Edmunds et al. 1988). Given the available global
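
    The storage-partitioning step described above can be sketched as follows, with synthetic time series standing in for the GRACE and CLM4.5 products; the trend-to-volume conversion uses the outcrop area quoted in the abstract, and the synthetic trend is chosen only to mimic the reported magnitude:

```python
# Sketch only: groundwater storage anomaly = GRACE total water storage anomaly
# minus the land-surface-model soil-moisture anomaly; a linear trend over the
# outcrop area gives a recharge-rate proxy. All series are synthetic.
import numpy as np

months = np.arange(120)                                   # Jan 2003 - Dec 2012
rng = np.random.default_rng(11)
tws = 3.3 * months / 12 + 5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 120)   # mm
soil_moisture = 4 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 120)             # mm (CLM-like)

gws = tws - soil_moisture                                 # groundwater storage anomaly (mm)
trend_mm_per_yr = np.polyfit(months / 12, gws, 1)[0]      # linear trend (mm/yr)

area_km2 = 835e3                                          # outcrop area used above
recharge_km3_per_yr = trend_mm_per_yr * 1e-6 * area_km2   # mm/yr over km2 -> km3/yr
print(f"GWS trend: {trend_mm_per_yr:.2f} mm/yr  ->  {recharge_km3_per_yr:.2f} km3/yr over the outcrop")
```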

  3. Quantifying the ventilatory control contribution to sleep apnoea using polysomnography.

    PubMed

    Terrill, Philip I; Edwards, Bradley A; Nemati, Shamim; Butler, James P; Owens, Robert L; Eckert, Danny J; White, David P; Malhotra, Atul; Wellman, Andrew; Sands, Scott A

    2015-02-01

    Elevated loop gain, consequent to hypersensitive ventilatory control, is a primary nonanatomical cause of obstructive sleep apnoea (OSA) but it is not possible to quantify this in the clinic. Here we provide a novel method to estimate loop gain in OSA patients using routine clinical polysomnography alone. We use the concept that spontaneous ventilatory fluctuations due to apnoeas/hypopnoeas (disturbance) result in opposing changes in ventilatory drive (response) as determined by loop gain (response/disturbance). Fitting a simple ventilatory control model (including chemical and arousal contributions to ventilatory drive) to the ventilatory pattern of OSA reveals the underlying loop gain. Following mathematical-model validation, we critically tested our method in patients with OSA by comparison with a standard (continuous positive airway pressure (CPAP) drop method), and by assessing its ability to detect the known reduction in loop gain with oxygen and acetazolamide. Our method quantified loop gain from baseline polysomnography (correlation versus CPAP-estimated loop gain: n=28; r=0.63, p<0.001), detected the known reduction in loop gain with oxygen (n=11; mean±sem change in loop gain (ΔLG) -0.23±0.08, p=0.02) and acetazolamide (n=11; ΔLG -0.20±0.06, p=0.005), and predicted the OSA response to loop gain-lowering therapy. We validated a means to quantify the ventilatory control contribution to OSA pathogenesis using clinical polysomnography, enabling identification of likely responders to therapies targeting ventilatory control. Copyright ©ERS 2015.

  4. Quantifying the ventilatory control contribution to sleep apnoea using polysomnography

    PubMed Central

    Terrill, Philip I.; Edwards, Bradley A.; Nemati, Shamim; Butler, James P.; Owens, Robert L.; Eckert, Danny J.; White, David P.; Malhotra, Atul; Wellman, Andrew; Sands, Scott A.

    2015-01-01

    Elevated loop gain, consequent to hypersensitive ventilatory control, is a primary nonanatomical cause of obstructive sleep apnoea (OSA) but it is not possible to quantify this in the clinic. Here we provide a novel method to estimate loop gain in OSA patients using routine clinical polysomnography alone. We use the concept that spontaneous ventilatory fluctuations due to apnoeas/hypopnoeas (disturbance) result in opposing changes in ventilatory drive (response) as determined by loop gain (response/disturbance). Fitting a simple ventilatory control model (including chemical and arousal contributions to ventilatory drive) to the ventilatory pattern of OSA reveals the underlying loop gain. Following mathematical-model validation, we critically tested our method in patients with OSA by comparison with a standard (continuous positive airway pressure (CPAP) drop method), and by assessing its ability to detect the known reduction in loop gain with oxygen and acetazolamide. Our method quantified loop gain from baseline polysomnography (correlation versus CPAP-estimated loop gain: n=28; r=0.63, p<0.001), detected the known reduction in loop gain with oxygen (n=11; mean±SEM change in loop gain (ΔLG) −0.23±0.08, p=0.02) and acetazolamide (n=11; ΔLG −0.20±0.06, p=0.005), and predicted the OSA response to loop gain-lowering therapy. We validated a means to quantify the ventilatory control contribution to OSA pathogenesis using clinical polysomnography, enabling identification of likely responders to therapies targeting ventilatory control. PMID:25323235

  5. Direct Animal Calorimetry, the Underused Gold Standard for Quantifying the Fire of Life*

    PubMed Central

    Kaiyala, Karl J.; Ramsay, Douglas S.

    2012-01-01

    Direct animal calorimetry, the gold standard method for quantifying animal heat production (HP), has been largely supplanted by respirometric indirect calorimetry owing to the relative ease and ready commercial availability of the latter technique. Direct calorimetry, however, can accurately quantify HP and thus metabolic rate (MR) in both metabolically normal and abnormal states, whereas respirometric indirect calorimetry relies on important assumptions that apparently have never been tested in animals with genetic or pharmacologically-induced alterations that dysregulate metabolic fuel partitioning and storage so as to promote obesity and/or diabetes. Contemporary obesity and diabetes research relies heavily on metabolically abnormal animals. Recent data implicating individual and group variation in the gut microbiome in obesity and diabetes raise important questions about transforming aerobic gas exchange into HP because 99% of gut bacteria are anaerobic and they outnumber eukaryotic cells in the body by ~10-fold. Recent credible work in non-standard laboratory animals documents substantial errors in respirometry-based estimates of HP. Accordingly, it seems obvious that new research employing simultaneous direct and indirect calorimetry (total calorimetry) will be essential to validate respirometric MR phenotyping in existing and future pharmacological and genetic models of obesity and diabetes. We also detail the use of total calorimetry with simultaneous core temperature assessment as a model for studying homeostatic control in a variety of experimental situations, including acute and chronic drug administration. Finally, we offer some tips on performing direct calorimetry, both singly and in combination with indirect calorimetry and core temperature assessment. PMID:20427023

  6. Palm: Easing the Burden of Analytical Performance Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tallent, Nathan R.; Hoisie, Adolfy

    2014-06-01

    Analytical (predictive) application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult because they must be both accurate and concise. To ease the burden of performance modeling, we developed Palm, a modeling tool that combines top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. To express insight, Palm defines a source code modeling annotation language. By coordinating models and source code, Palm's models are `first-class' and reproducible. Unlike prior work, Palm formally links models, functions, and measurements. As a result, Palm (a) uses functions to either abstract or express complexity; (b) generates hierarchical models (representing an application's static and dynamic structure); and (c) automatically incorporates measurements to focus attention, represent constant behavior, and validate models. We discuss generating models for three different applications.

  7. Quantified choice of root-mean-square errors of approximation for evaluation and power analysis of small differences between structural equation models.

    PubMed

    Li, Libo; Bentler, Peter M

    2011-06-01

    MacCallum, Browne, and Cai (2006) proposed a new framework for evaluation and power analysis of small differences between nested structural equation models (SEMs). In their framework, the null and alternative hypotheses for testing a small difference in fit and its related power analyses were defined by some chosen root-mean-square error of approximation (RMSEA) pairs. In this article, we develop a new method that quantifies those chosen RMSEA pairs and allows a quantitative comparison of them. Our method proposes the use of single RMSEA values to replace the choice of RMSEA pairs for model comparison and power analysis, thus avoiding the differential meaning of the chosen RMSEA pairs inherent in the approach of MacCallum et al. (2006). With this choice, the conventional cutoff values in model overall evaluation can directly be transferred and applied to the evaluation and power analysis of model differences. © 2011 American Psychological Association
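
    The mapping from an RMSEA value to a noncentrality parameter, and from there to the power of a nested-model chi-square difference test, can be sketched as follows. This is a hedged illustration in the general spirit of the framework described above, not the article's exact procedure; the function names and the example numbers are made up.

```python
# Hedged sketch: RMSEA-based power analysis for a nested-model chi-square
# difference test between models A (df_a) and B (df_b < df_a).
import numpy as np
from scipy.stats import chi2, ncx2

def ncp_from_rmsea_pair(eps_a, eps_b, df_a, df_b, n):
    """Noncentrality implied by an RMSEA pair at sample size n."""
    return (n - 1) * (df_a * eps_a**2 - df_b * eps_b**2)

def power_nested_difference(null_pair, alt_pair, df_a, df_b, n, alpha=0.05):
    df_diff = df_a - df_b
    ncp0 = ncp_from_rmsea_pair(*null_pair, df_a, df_b, n)
    ncp1 = ncp_from_rmsea_pair(*alt_pair, df_a, df_b, n)
    # Critical value under the (possibly noncentral) null distribution.
    crit = chi2.ppf(1 - alpha, df_diff) if ncp0 <= 0 else ncx2.ppf(1 - alpha, df_diff, ncp0)
    return ncx2.sf(crit, df_diff, ncp1)

# Example: df_A = 22, df_B = 20, N = 400; a "small difference" null versus a
# larger alternative difference, each expressed as an RMSEA pair.
print(power_nested_difference((0.05, 0.05), (0.08, 0.05), 22, 20, 400))
```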

  8. Bayesian methods to determine performance differences and to quantify variability among centers in multi-center trials: the IHAST trial.

    PubMed

    Bayman, Emine O; Chaloner, Kathryn M; Hindman, Bradley J; Todd, Michael M

    2013-01-16

    To quantify the variability among centers, to identify centers whose performance in the primary outcome is potentially outside of normal variability, and to propose a guideline for declaring such centers outliers. Novel statistical methodology using a Bayesian hierarchical model is used. Bayesian methods for estimation and outlier detection are applied assuming an additive random center effect on the log odds of response: centers are similar but different (exchangeable). The Intraoperative Hypothermia for Aneurysm Surgery Trial (IHAST) is used as an example. Analyses were adjusted for treatment, age, gender, aneurysm location, World Federation of Neurological Surgeons scale, Fisher score and baseline NIH stroke scale scores. Adjustments for differences in center characteristics were also examined. Graphical and numerical summaries of the between-center standard deviation (sd) and variability, as well as the identification of potential outliers, are implemented. In the IHAST, the center-to-center variation in the log odds of favorable outcome at each center is consistent with a normal distribution with posterior sd of 0.538 (95% credible interval: 0.397 to 0.726) after adjusting for the effects of important covariates. Outcome differences among centers showed no outlying centers: four potential outliers were identified but did not meet the proposed guideline for declaring them outlying. Center characteristics (number of subjects enrolled from the center, geographical location, learning over time, nitrous oxide, and temporary clipping use) did not predict outcome, but subject and disease characteristics did. Bayesian hierarchical methods allow for determination of whether outcomes from a specific center differ from others and whether specific clinical practices predict outcome, even when some centers/subgroups have relatively small sample sizes. In the IHAST no outlying centers were found. The estimated variability between centers was moderately large.

  9. Quantifying Key Climate Parameter Uncertainties Using an Earth System Model with a Dynamic 3D Ocean

    NASA Astrophysics Data System (ADS)

    Olson, R.; Sriver, R. L.; Goes, M. P.; Urban, N.; Matthews, D.; Haran, M.; Keller, K.

    2011-12-01

    Climate projections hinge critically on uncertain climate model parameters such as climate sensitivity, vertical ocean diffusivity and anthropogenic sulfate aerosol forcings. Climate sensitivity is defined as the equilibrium global mean temperature response to a doubling of atmospheric CO2 concentrations. Vertical ocean diffusivity parameterizes sub-grid scale ocean vertical mixing processes. These parameters are typically estimated using Intermediate Complexity Earth System Models (EMICs) that lack a full 3D representation of the oceans, thereby neglecting the effects of mixing on ocean dynamics and meridional overturning. We improve on these studies by employing an EMIC with a dynamic 3D ocean model to estimate these parameters. We carry out historical climate simulations with the University of Victoria Earth System Climate Model (UVic ESCM) varying parameters that affect climate sensitivity, vertical ocean mixing, and effects of anthropogenic sulfate aerosols. We use a Bayesian approach whereby the likelihood of each parameter combination depends on how well the model simulates surface air temperature and upper ocean heat content. We use a Gaussian process emulator to interpolate the model output to an arbitrary parameter setting. We use a Markov chain Monte Carlo method to estimate the posterior probability distribution function (pdf) of these parameters. We explore the sensitivity of the results to prior assumptions about the parameters. In addition, we estimate the relative skill of different observations to constrain the parameters. We quantify the uncertainty in parameter estimates stemming from climate variability, model and observational errors. We explore the sensitivity of key decision-relevant climate projections to these parameters. We find that climate sensitivity and vertical ocean diffusivity estimates are consistent with previously published results. The climate sensitivity pdf is strongly affected by the prior assumptions, and by the scaling
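
    The emulation-plus-MCMC machinery described above can be illustrated with a compact sketch: a Gaussian process is trained on a small ensemble of (parameter setting, model output) pairs and then used inside a random-walk Metropolis sampler. Everything below (the toy "model output", prior bounds, observed value and its uncertainty) is invented for illustration and is not the UVic ESCM analysis.

```python
# Hedged sketch: Gaussian process emulation of a scalar model output plus
# random-walk Metropolis sampling of the parameter posterior.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical design: parameter settings (n_runs x 2, e.g. climate sensitivity
# and a mixing parameter) and the scalar output each run produced.
design = rng.uniform([1.5, 0.1], [6.0, 1.0], size=(40, 2))
runs = 0.2 * design[:, 0] + 0.5 * design[:, 1] + rng.normal(0, 0.02, 40)

emulator = GaussianProcessRegressor(RBF([1.0, 0.3]) + WhiteKernel(1e-4),
                                    normalize_y=True).fit(design, runs)

obs, obs_sigma = 0.9, 0.1           # observed quantity and its uncertainty

def log_post(theta):
    lo, hi = np.array([1.5, 0.1]), np.array([6.0, 1.0])
    if np.any(theta < lo) or np.any(theta > hi):   # uniform prior bounds
        return -np.inf
    pred = emulator.predict(theta.reshape(1, -1))[0]
    return -0.5 * ((pred - obs) / obs_sigma) ** 2

theta, samples = np.array([3.0, 0.5]), []
lp = log_post(theta)
for _ in range(20000):                              # random-walk Metropolis
    prop = theta + rng.normal(0, [0.2, 0.05])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples)
print("posterior mean after burn-in:", samples[5000:].mean(axis=0))
```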

  10. Quantifying Complexity in Quantum Phase Transitions via Mutual Information Complex Networks

    NASA Astrophysics Data System (ADS)

    Valdez, Marc Andrew; Jaschke, Daniel; Vargas, David L.; Carr, Lincoln D.

    2017-12-01

    We quantify the emergent complexity of quantum states near quantum critical points on regular 1D lattices, via complex network measures based on quantum mutual information as the adjacency matrix, in direct analogy to quantifying the complexity of electroencephalogram or functional magnetic resonance imaging measurements of the brain. Using matrix product state methods, we show that network density, clustering, disparity, and Pearson's correlation obtain the critical point for both quantum Ising and Bose-Hubbard models to a high degree of accuracy in finite-size scaling for three classes of quantum phase transitions: Z2, mean-field superfluid to Mott insulator, and a Berezinskii-Kosterlitz-Thouless crossover.
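
    A sketch of the network-measure step is given below, assuming a pairwise quantum mutual information matrix (e.g. computed from a matrix product state simulation) is already in hand; the precise definitions of density, clustering, disparity and Pearson's correlation used in the paper may differ from these common weighted-network versions.

```python
# Sketch: weighted-network measures from a mutual information adjacency matrix.
import numpy as np
import networkx as nx

def network_measures(mi, threshold=1e-8):
    """Return (density, clustering, disparity, degree Pearson correlation) for
    a symmetric mutual information matrix `mi` (diagonal ignored)."""
    n = mi.shape[0]
    a = np.where(mi > threshold, mi, 0.0)
    np.fill_diagonal(a, 0.0)
    g = nx.from_numpy_array(a)
    density = a.sum() / (n * (n - 1))                      # mean pairwise MI
    clustering = nx.average_clustering(g, weight="weight")
    strength = a.sum(axis=1)
    safe = np.where(strength > 0, strength, 1.0)           # avoid divide-by-zero
    disparity = np.mean(np.sum((a / safe[:, None]) ** 2, axis=1))
    pearson = nx.degree_pearson_correlation_coefficient(g, weight="weight")
    return density, clustering, disparity, pearson
```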

  11. Quantifying High Frequency Skill and Relative Cost of Two Physics Packages for an Operational Spectral Wave Model

    NASA Astrophysics Data System (ADS)

    Edwards, K. L.; Siqueira, S. A.; Rogers, W. E.

    2017-12-01

    Simulating WAves Nearshore (SWAN) is one of the two phase-averaged spectral models used by the U.S. Navy for forecasting waves. The source term package available during SWAN's first validation for operational use in 2002 is referred to here as "SWAN-ST1". Since then, improvements to the source term package were implemented (SWAN-ST6). One outcome of the new package is better skill within the higher frequencies of the spectrum, where SWAN-ST1 responds non-physically to changes in wind speed and to the presence of swell at lower frequencies. However, SWAN-ST6 is typically 25% more expensive computationally than SWAN-ST1, so it is worthwhile to estimate its benefit to operational cases, where computation time is a primary concern. To validate SWAN-ST6 and quantify its improvement over SWAN-ST1, we apply the model to two geographic regions: the Northern Gulf of Mexico (NGoM) and Southern California (SoCal) coasts. The NGoM case is typified by smaller fetch and smaller swell contribution. We analyze the NGoM case over a year (2010 July-2011 Aug.) and the SoCal application for summer (2016 July-Aug.) and winter (2017 Jan.-Feb.) seasons. We compare results to available buoy data. The wave parameter mean square slope (MSS) is included in this study, since it is sensitive to accuracy (or lack thereof) at higher frequencies, as it is weighted by frequency to the fourth power. Model-data comparisons for the NGoM case show little benefit to using SWAN-ST6; both models highly correlate with observations, a result attributable to a) the lack of swell and b) the fact that the shorter dominant periods in this basin imply that available buoys typically measure very little of the spectral tail (e.g. 2 to 4 times the peak frequency). Model-data comparisons for the SoCal test case show that SWAN-ST6 performs at least as well but most often better than SWAN-ST1. The improvement is almost negligible for the lower order moments, significant wave height and
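
    Because MSS weights the spectrum by frequency to the fourth power (via the deep-water dispersion relation), it can be computed from a 1-D frequency spectrum with a few lines of code; the sketch below is a generic illustration, with the optional cutoff frequency mimicking a buoy's upper measurement limit.

```python
# Sketch: mean square slope from a 1-D frequency spectrum E(f), deep water.
import numpy as np

def mean_square_slope(freq_hz, efth, f_max=None, g=9.81):
    """MSS = integral of k^2 E(f) df with k = (2*pi*f)^2 / g; optionally cut
    the integration at f_max to mimic a buoy's upper frequency limit."""
    freq_hz, efth = np.asarray(freq_hz, float), np.asarray(efth, float)
    if f_max is not None:
        keep = freq_hz <= f_max
        freq_hz, efth = freq_hz[keep], efth[keep]
    k = (2.0 * np.pi * freq_hz) ** 2 / g
    integrand = k**2 * efth                    # weighted by f^4, as noted above
    # Trapezoidal integration over frequency.
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freq_hz))
```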

  12. Retrieving quantifiable social media data from human sensor networks for disaster modeling and crisis mapping

    NASA Astrophysics Data System (ADS)

    Aulov, Oleg

    This dissertation presents a novel approach that utilizes quantifiable social media data as a human-aware, near real-time observing system, coupled with geophysical predictive models for improved response to disasters and extreme events. It shows that social media data has the potential to significantly improve disaster management beyond informing the public, and emphasizes the importance of different roles that social media can play in management, monitoring, modeling and mitigation of natural and human-caused extreme disasters. In the proposed approach, social media users are viewed as "human sensors" that are "deployed" in the field, and their posts are considered to be "sensor observations", so that different social media outlets together form a Human Sensor Network. We utilized the "human sensor" observations, as boundary value forcings, to show improved geophysical model forecasts of extreme disaster events when combined with other scientific data such as satellite observations and sensor measurements. Several recent extreme disasters are presented as use case scenarios. In the case of the Deepwater Horizon oil spill disaster of 2010 that devastated the Gulf of Mexico, the research demonstrates how social media data from Flickr can be used as a boundary forcing condition of the GNOME oil spill plume forecast model, resulting in an order-of-magnitude forecast improvement. In the case of Hurricane Sandy's NY/NJ landfall in 2012, we demonstrate how the model forecasts, when combined with social media data in a single framework, can be used for near real-time forecast validation, damage assessment and disaster management. Owing to inherent uncertainties in the weather forecasts, the NOAA operational surge model only forecasts the worst-case scenario for flooding from any given hurricane. Geolocated and time-stamped Instagram photos and tweets allow near real-time assessment of the surge levels at different locations, which can validate model forecasts, give

  13. Estuarine modeling: Does a higher grid resolution improve model performance?

    EPA Science Inventory

    Ecological models are useful tools to explore cause effect relationships, test hypothesis and perform management scenarios. A mathematical model, the Gulf of Mexico Dissolved Oxygen Model (GoMDOM), has been developed and applied to the Louisiana continental shelf of the northern ...

  14. A Statistical Model-Based Decision Support System for Managing Summer Stream Temperatures with Quantified Confidence Analysis

    NASA Astrophysics Data System (ADS)

    Neumann, D. W.; Zagona, E. A.; Rajagopalan, B.

    2005-12-01

    Warm summer stream temperatures due to low flows and high air temperatures are a critical water quality problem in many western U.S. river basins because they impact threatened fish species' habitat. Releases from storage reservoirs and river diversions are typically driven by human demands such as irrigation, municipal and industrial uses and hydropower production. Historically, fish needs have not been formally incorporated in the operating procedures, which do not supply adequate flows for fish in the warmest, driest periods. One way to address this problem is for local and federal organizations to purchase water rights to be used to increase flows, hence decrease temperatures. A statistical model-predictive technique for efficient and effective use of a limited supply of fish water has been developed and incorporated in a Decision Support System (DSS) that can be used in an operations mode to effectively use water acquired to mitigate warm stream temperatures. The DSS is a rule-based system that uses the empirical, statistical predictive model to predict maximum daily stream temperatures based on flows that meet the non-fish operating criteria, and to compute reservoir releases of allocated fish water when predicted temperatures exceed fish habitat temperature targets with a user specified confidence of the temperature predictions. The empirical model is developed using a step-wise linear regression procedure to select significant predictors, and includes the computation of a prediction confidence interval to quantify the uncertainty of the prediction. The DSS also includes a strategy for managing a limited amount of water throughout the season based on degree-days in which temperatures are allowed to exceed the preferred targets for a limited number of days that can be tolerated by the fish. The DSS is demonstrated by an example application to the Truckee River near Reno, Nevada using historical flows from 1988 through 1994. In this case, the statistical model
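
    The statistical core of such a DSS can be illustrated with a regression fitted in `statsmodels`, whose prediction interval supplies the user-specified confidence on the temperature prediction; the predictors, data values, and temperature target below are hypothetical, and the stepwise selection step is omitted.

```python
# Sketch: regression-based maximum daily stream temperature prediction with a
# prediction interval used to trigger a release of allocated fish water.
import pandas as pd
import statsmodels.api as sm

# Hypothetical predictors: daily flow (cfs) and maximum air temperature (C).
df = pd.DataFrame({"flow": [120, 90, 60, 45, 200, 150, 80, 55],
                   "air_t": [30, 33, 36, 38, 28, 31, 35, 37],
                   "max_stream_t": [21.0, 22.5, 24.8, 26.1, 19.5, 20.8, 23.9, 25.5]})
X = sm.add_constant(df[["flow", "air_t"]])
model = sm.OLS(df["max_stream_t"], X).fit()

# Predict tomorrow's maximum temperature under the non-fish operating flow.
new = sm.add_constant(pd.DataFrame({"flow": [70], "air_t": [37]}),
                      has_constant="add")
pred = model.get_prediction(new).summary_frame(alpha=0.10)   # 90% interval
target_c = 24.0
release_needed = pred["obs_ci_upper"].iloc[0] > target_c
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]], release_needed)
```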

  15. Integrated cosmological probes: concordance quantified

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicola, Andrina; Amara, Adam; Refregier, Alexandre

    2017-10-01

    Assessing the consistency of parameter constraints derived from different cosmological probes is an important way to test the validity of the underlying cosmological model. In an earlier work [1], we computed constraints on cosmological parameters for ΛCDM from an integrated analysis of CMB temperature anisotropies and CMB lensing from Planck, galaxy clustering and weak lensing from SDSS, weak lensing from DES SV as well as Type Ia supernovae and Hubble parameter measurements. In this work, we extend this analysis and quantify the concordance between the derived constraints and those derived by the Planck Collaboration as well as WMAP9, SPT and ACT. As a measure of consistency, we use the Surprise statistic [2], which is based on the relative entropy. In the framework of a flat ΛCDM cosmological model, we find all data sets to be consistent with one another at a level of less than 1σ. We highlight that the relative entropy is sensitive to inconsistencies in the models that are used in different parts of the analysis. In particular, inconsistent assumptions for the neutrino mass break its invariance under the parameter choice. When consistent model assumptions are used, the data sets considered in this work all agree with each other and ΛCDM, without evidence for tensions.
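
    The relative entropy underlying the Surprise statistic has a closed form when both posteriors are approximated as multivariate Gaussians; a sketch of that ingredient is shown below (the published Surprise statistic involves further expectations over data realizations, which are not reproduced here).

```python
# Sketch: Kullback-Leibler divergence between two Gaussian parameter posteriors.
import numpy as np

def gaussian_relative_entropy(mu1, cov1, mu2, cov2):
    """D_KL(P1 || P2) in nats for multivariate normals P1=(mu1,cov1), P2=(mu2,cov2)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    k = len(mu1)
    cov2_inv = np.linalg.inv(cov2)
    diff = mu2 - mu1
    term_trace = np.trace(cov2_inv @ cov1)
    term_mahal = diff @ cov2_inv @ diff
    term_logdet = np.log(np.linalg.det(cov2) / np.linalg.det(cov1))
    return 0.5 * (term_trace + term_mahal - k + term_logdet)
```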

  16. Validation of a national hydrological model

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Booker, D. J.; Cattoën, C.

    2016-10-01

    Nationwide predictions of flow time-series are valuable for development of policies relating to environmental flows, calculating reliability of supply to water users, or assessing risk of floods or droughts. This breadth of model utility is possible because various hydrological signatures can be derived from simulated flow time-series. However, producing national hydrological simulations can be challenging due to strong environmental diversity across catchments and a lack of data available to aid model parameterisation. A comprehensive and consistent suite of test procedures to quantify spatial and temporal patterns in performance across various parts of the hydrograph is described and applied to quantify the performance of an uncalibrated national rainfall-runoff model of New Zealand. Flow time-series observed at 485 gauging stations were used to calculate Nash-Sutcliffe efficiency and percent bias when simulating between-site differences in daily series, between-year differences in annual series, and between-site differences in hydrological signatures. The procedures were used to assess the benefit of applying a correction to the modelled flow duration curve based on an independent statistical analysis. They were used to aid understanding of climatological, hydrological and model-based causes of differences in predictive performance by assessing multiple hypotheses that describe where and when the model was expected to perform best. As the procedures produce quantitative measures of performance, they provide an objective basis for model assessment that could be applied when comparing observed daily flow series with competing simulated flow series from any region-wide or nationwide hydrological model. Model performance varied in space and time with better scores in larger and medium-wet catchments, and in catchments with smaller seasonal variations. Surprisingly, model performance was not sensitive to aquifer fraction or rain gauge density.
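
    The two scores used above are standard and compact to compute per gauging station; a plain NumPy sketch follows.

```python
# Sketch: Nash-Sutcliffe efficiency and percent bias for one gauging station.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(sim - obs) / np.sum(obs)

obs = [12.0, 30.5, 8.2, 4.1, 55.0]     # observed daily flows (illustrative)
sim = [10.5, 28.0, 9.0, 5.0, 48.0]     # simulated daily flows
print(nash_sutcliffe(obs, sim), percent_bias(obs, sim))
```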

  17. Intern Performance in Three Supervisory Models

    ERIC Educational Resources Information Center

    Womack, Sid T.; Hanna, Shellie L.; Callaway, Rebecca; Woodall, Peggy

    2011-01-01

    Differences in intern performance, as measured by a Praxis III-similar instrument were found between interns supervised in three supervisory models: Traditional triad model, cohort model, and distance supervision. Candidates in this study's particular form of distance supervision were not as effective as teachers as candidates in traditional-triad…

  18. Displacement Models for THUNDER Actuators having General Loads and Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Wieman, Robert; Smith, Ralph C.; Kackley, Tyson; Ounaies, Zoubeida; Bernd, Jeff; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    This paper summarizes techniques for quantifying the displacements generated in THUNDER actuators in response to applied voltages for a variety of boundary conditions and exogenous loads. The PDE (partial differential equations) models for the actuators are constructed in two steps. In the first, previously developed theory quantifying thermal and electrostatic strains is employed to model the actuator shapes which result from the manufacturing process and subsequent repoling. Newtonian principles are then employed to develop PDE models which quantify displacements in the actuator due to voltage inputs to the piezoceramic patch. For this analysis, drive levels are assumed to be moderate so that linear piezoelectric relations can be employed. Finite element methods for discretizing the models are developed and the performance of the discretized models are illustrated through comparison with experimental data.

  19. Quantifying MLI Thermal Conduction in Cryogenic Applications from Experimental Data

    NASA Astrophysics Data System (ADS)

    Ross, R. G., Jr.

    2015-12-01

    Multilayer Insulation (MLI) uses stacks of low-emittance metalized sheets combined with low-conduction spacer features to greatly reduce the heat transfer to cryogenic applications from higher temperature surrounds. However, as the hot-side temperature decreases from room temperature to cryogenic temperatures, the level of radiant heat transfer drops as the fourth power of the temperature, while the heat transfer by conduction only falls off linearly. This results in cryogenic MLI being dominated by conduction, a quantity that is extremely sensitive to MLI blanket construction and very poorly quantified in the literature. To develop useful quantitative data on cryogenic blanket conduction, multilayer nonlinear heat transfer models are used to analyze extensive heat transfer data measured by Lockheed Palo Alto on their cryogenic dewar MLI and measured by JPL on their spacecraft MLI. The data-fitting aspect of the modeling allows the radiative and conductive thermal properties of the tested blankets to be explicitly quantified. Results are presented showing that MLI conductance varies by a factor of 600 between spacecraft MLI and Lockheed's best cryogenic MLI.
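
    The separation of radiative and conductive contributions amounts to fitting a two-term model, roughly q = ε_eff·σ·(T_h⁴ − T_c⁴) + C·(T_h − T_c), to measured heat loads; the sketch below shows such a fit with `scipy`, using invented temperatures, fluxes, and coefficient values rather than the Lockheed or JPL data.

```python
# Sketch: separate radiative and conductive MLI heat transfer by curve fitting.
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mli_heat_flux(th, eps_eff, c_cond, tc=4.2):
    """Heat flux (W/m^2) through the blanket for hot-side temperature th (K),
    cold side fixed at tc; eps_eff and c_cond are the fitted coefficients."""
    return eps_eff * SIGMA * (th**4 - tc**4) + c_cond * (th - tc)

# Hypothetical hot-side temperatures (K) and measured fluxes (W/m^2).
th = np.array([80.0, 120.0, 160.0, 200.0, 250.0, 300.0])
q = np.array([0.02, 0.05, 0.11, 0.22, 0.45, 0.85])

(eps_eff, c_cond), _ = curve_fit(mli_heat_flux, th, q, p0=[0.02, 1e-4])
print(f"effective emittance {eps_eff:.3g}, conduction coeff {c_cond:.3g} W/m^2/K")
```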

  20. Quantifying chaotic dynamics from integrate-and-fire processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pavlov, A. N.; Pavlova, O. N.

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  1. Mathematical models of cytotoxic effects in endpoint tumor cell line assays: critical assessment of the application of a single parametric value as a standard criterion to quantify the dose-response effects and new unexplored proposal formats.

    PubMed

    Calhelha, Ricardo C; Martínez, Mireia A; Prieto, M A; Ferreira, Isabel C F R

    2017-10-23

    The development of convenient tools for describing and quantifying the effects of standard and novel therapeutic agents is essential for the research community, to perform more precise evaluations. Although mathematical models and quantification criteria have been exchanged in the last decade between different fields of study, there are relevant methodologies that lack proper mathematical descriptions and standard criteria to quantify their responses. Therefore, part of the relevant information that can be drawn from the experimental results, and the quantification of its statistical reliability, is lost. Despite its relevance, there is no standard form for in vitro endpoint tumor cell line assays (TCLA) that enables the evaluation of the cytotoxic dose-response effects of anti-tumor drugs. The analysis of all the specific problems associated with the diverse nature of the available TCLA used is unfeasible. However, since most TCLA share the main objectives and similar operative requirements, we have chosen the sulforhodamine B (SRB) colorimetric assay for cytotoxicity screening of tumor cell lines as an experimental case study. In this work, the common biological and practical non-linear dose-response mathematical models are tested against experimental data and, following several statistical analyses, the model based on the Weibull distribution was confirmed as a convenient approximation to test the cytotoxic effectiveness of anti-tumor compounds. Then, the advantages and disadvantages of all the different parametric criteria derived from the model, which enable the quantification of the dose-response drug effects, are extensively discussed. Therefore, a model and standard criteria for easily performing comparisons between different compounds are established. The advantages include a simple application, provision of parametric estimations that characterize the response as standard criteria, economization of experimental effort and enabling
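
    A Weibull-type dose-response curve of the general form used for such screening data can be fitted in a few lines; the parameterization below (maximal response, half-effect dose, shape) is one common convention and is not necessarily the exact form adopted in the article, and the dose-response numbers are invented.

```python
# Sketch: fit a Weibull-type dose-response model to endpoint TCLA readings.
import numpy as np
from scipy.optimize import curve_fit

def weibull_response(dose, k_max, d50, shape):
    """Fraction of maximal cytotoxic response at a given dose; d50 gives half
    of k_max, and `shape` controls the steepness of the curve."""
    return k_max * (1.0 - np.exp(-np.log(2.0) * (dose / d50) ** shape))

dose = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])          # hypothetical (uM)
resp = np.array([0.03, 0.06, 0.14, 0.28, 0.49, 0.71, 0.86, 0.93])

(k_max, d50, shape), cov = curve_fit(weibull_response, dose, resp,
                                     p0=[1.0, 8.0, 1.0], maxfev=10000)
se = np.sqrt(np.diag(cov))                               # parametric estimates
print(f"K = {k_max:.2f}+/-{se[0]:.2f}, D50 = {d50:.1f}+/-{se[1]:.1f}, a = {shape:.2f}")
```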

  2. Processing of Numerical and Proportional Quantifiers

    ERIC Educational Resources Information Center

    Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus

    2015-01-01

    Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using…

  3. Actively heated high-resolution fiber-optic-distributed temperature sensing to quantify streambed flow dynamics in zones of strong groundwater upwelling

    USGS Publications Warehouse

    Briggs, Martin A.; Buckley, Sean F.; Bagtzoglou, Amvrossios C.; Werkema, Dale D.; Lane, John W.

    2016-01-01

    Zones of strong groundwater upwelling to streams enhance thermal stability and moderate thermal extremes, which is particularly important to aquatic ecosystems in a warming climate. Passive thermal tracer methods used to quantify vertical upwelling rates rely on downward conduction of surface temperature signals. However, moderate to high groundwater flux rates (>−1.5 m d⁻¹) restrict downward propagation of diurnal temperature signals, and therefore the applicability of several passive thermal methods. Active streambed heating from within high-resolution fiber-optic temperature sensors (A-HRTS) has the potential to define multidimensional fluid-flux patterns below the extinction depth of surface thermal signals, allowing better quantification and separation of local and regional groundwater discharge. To demonstrate this concept, nine A-HRTS were emplaced vertically into the streambed in a grid with ∼0.40 m lateral spacing at a stream with strong upward vertical flux in Mashpee, Massachusetts, USA. Long-term (8–9 h) heating events were performed to confirm the dominance of vertical flow to the 0.6 m depth, well below the extinction of ambient diurnal signals. To quantify vertical flux, short-term heating events (28 min) were performed at each A-HRTS, and heat-pulse decay over vertical profiles was numerically modeled in radial two dimensions (2-D) using SUTRA. Modeled flux values are similar to those obtained with seepage meters, Darcy methods, and analytical modeling of shallow diurnal signals. We also observed repeatable differential heating patterns along the length of vertically oriented sensors that may indicate sediment layering and hyporheic exchange superimposed on regional groundwater discharge.

  4. Quantifying Hydroperiod, Fire and Nutrient Effects on the Composition of Plant Communities in Marl Prairie of the Everglades: a Joint Probability Method Based Model

    NASA Astrophysics Data System (ADS)

    Zhai, L.

    2017-12-01

    Plant communities can be simultaneously affected by human activities and climate change, and quantifying and predicting these combined effects with an appropriate model framework validated by field data is complex but very useful for conservation management. Plant communities in the Everglades provide a unique set of conditions to develop and validate such a model framework, because they are experiencing both intensive human activities (such as altered hydroperiod from drainage and restoration projects, nutrients from upstream agriculture, prescribed fire, etc.) and climate changes (such as warming, changing precipitation patterns, sea level rise, etc.). More importantly, previous research has focused on plant communities in the slough ecosystem (including ridge, slough and their tree islands); very few studies consider the marl prairie ecosystem. Compared with the slough ecosystem, which remains flooded nearly year-round, marl prairie has a relatively shorter hydroperiod (inundated only during the wet season). Therefore, plant communities of the marl prairie may be more strongly affected by hydroperiod change. In addition to hydroperiod, fire and nutrients also affect the plant communities in the marl prairie. Therefore, to quantify the combined effects of water level, fire, and nutrients on the composition of the plant communities, we are developing a vegetation dynamics model based on a joint probability method. The model is being validated with field data on changes in vegetation assemblages along environmental gradients in the marl prairie. Our poster presents preliminary data from this ongoing project.

  5. Meta-analysis of prediction model performance across multiple studies: Which scale helps ensure between-study normality for the C-statistic and calibration measures?

    PubMed

    Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D

    2017-01-01

    If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
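
    A minimal sketch of the recommended approach, i.e. a DerSimonian-Laird random-effects meta-analysis applied to logit-transformed C-statistics, is given below; the study-level C-statistics and standard errors are placeholders.

```python
# Sketch: random-effects meta-analysis of C-statistics on the logit scale.
import numpy as np

def random_effects_logit_c(c_stats, se_logit_c):
    """Pool study-level C-statistics after logit transformation; returns the
    back-transformed summary C-statistic and the between-study tau^2."""
    c = np.asarray(c_stats, float)
    y = np.log(c / (1.0 - c))                        # logit(C) per study
    v = np.asarray(se_logit_c, float) ** 2
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)
    df = len(y) - 1
    c_denom = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c_denom)              # DerSimonian-Laird estimator
    w_star = 1.0 / (v + tau2)
    y_re = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-y_re)), tau2

c_pooled, tau2 = random_effects_logit_c(
    [0.72, 0.68, 0.80, 0.75], [0.05, 0.07, 0.04, 0.06])
print(f"summary C = {c_pooled:.3f}, tau^2 = {tau2:.4f}")
```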

  6. Quantifying effects of retinal illuminance on frequency doubling perimetry.

    PubMed

    Swanson, William H; Dul, Mitchell W; Fischer, Susan E

    2005-01-01

    To measure and quantify effects of variation in retinal illuminance on frequency doubling technology (FDT) perimetry. A Zeiss-Humphrey/Welch Allyn FDT perimeter was used with the threshold N-30 strategy. Study 1, quantifying adaptation: 11 eyes of 11 subjects (24-46 years old) were tested with natural pupils, and then retested after stable pupillary dilation with neutral density filters of 0.0, 0.6, 1.2, and 1.6 log unit in front of the subject's eye. Study 2, predicting effect of reduced illuminance: 17 eyes of 17 subjects (26-61 years old) were tested with natural pupils, and then retested after stable pupillary miosis (assessed with an infrared camera). A quantitative adaptation model was fit to results of Study 1; the mean adaptation parameter was used to predict change in Study 2. Study 1: Mean defect (MD) decreased by 10 dB over a 1.6 log unit range of retinal illuminances; model fits for all subjects had r² > 95%. Study 2: Change in MD (ΔMD) ranged from -7.3 dB to +0.8 dB. The mean adaptation parameter from Study 1 accounted for 69% of the variance in ΔMD (P < 0.0005), and accuracy of the model was independent of the magnitude of ΔMD (r² < 1%, P > 0.75). The results confirmed previous findings that FDT perimetry can be dramatically affected by variations in retinal illuminance. Application of a quantitative adaptation model provided guidelines for estimating effects of pupil diameter and lens density on FDT perimetry.

  7. Quantifying Industrial Methane Emissions from Space with the GHGSat-D Satellite

    NASA Astrophysics Data System (ADS)

    Germain, S.; Durak, B.; Gains, D.; Jervis, D.; McKeever, J.; Sloan, J. J.

    2017-12-01

    In June 2016, GHGSat, Inc. launched GHGSat-D, or "Claire", the world's first satellite capable of measuring greenhouse gas emissions from targeted industrial facilities around the world. The high-level objective of this mission is to demonstrate that a single measurement approach can quantify methane emission rates from selected industrial sources with greater precision, higher frequency, and lower cost than ground-based alternatives, across a wide range of industries. Providing industrial operators and regulators with frequent, cost-effective emission measurements can help identify super-emitters and monitor the progress of mitigation efforts. The GHGSat measurement platform is a 15 kg satellite that measures methane column densities using a novel wide-angle imaging Fabry-Perot spectrometer tuned to the 1600-1700 nm SWIR band. During each measurement sequence, a series of closely overlapping 2D images is taken so that each ground location samples a portion of the SWIR band with 0.1 nm spectral resolution. The data processing algorithm is able to co-register each image and, by comparison with a detailed forward model, perform a retrieval for each <50 m ground-sample-distance (GSD) cell over the entire 12 x 12 km² field of view. Methane emission rates are then estimated using a dispersion model coupled with locally measured wind fields. We will present the economic rationale for satellite-based sensing of methane from industrial sources, introduce the GHGSat measurement concept, report on recent measurement results obtained by Claire, and describe performance upgrades planned for future missions.

  8. Shoulder arthroscopy simulator training improves shoulder arthroscopy performance in a cadaveric model.

    PubMed

    Henn, R Frank; Shah, Neel; Warner, Jon J P; Gomoll, Andreas H

    2013-06-01

    The purpose of this study was to quantify the benefits of shoulder arthroscopy simulator training with a cadaveric model of shoulder arthroscopy. Seventeen first-year medical students with no prior experience in shoulder arthroscopy were enrolled and completed this study. Each subject completed a baseline proctored arthroscopy on a cadaveric shoulder, which included controlling the camera and completing a standard series of tasks using the probe. The subjects were randomized, and 9 of the subjects received training on a virtual reality simulator for shoulder arthroscopy. All subjects then repeated the same cadaveric arthroscopy. The arthroscopic videos were analyzed in a blinded fashion for time to task completion and subjective assessment of technical performance. The 2 groups were compared by use of Student t tests, and change over time within groups was analyzed with paired t tests. There were no observed differences between the 2 groups on the baseline evaluation. The simulator group improved significantly from baseline with respect to time to completion and subjective performance (P < .05). Time to completion was significantly faster in the simulator group compared with controls at the final evaluation (P < .05). No difference was observed between the groups on the subjective scores at the final evaluation (P = .98). Shoulder arthroscopy simulator training resulted in significant benefits in clinical shoulder arthroscopy time to task completion in this cadaveric model. This study provides important additional evidence of the benefit of simulators in orthopaedic surgical training. There may be a role for simulator training in shoulder arthroscopy education. Copyright © 2013 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  9. Terrestrial laser scanning to quantify above-ground biomass of structurally complex coastal wetland vegetation

    NASA Astrophysics Data System (ADS)

    Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.

    2018-05-01

    Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and quantify saltmarsh biomass in quadrats. However broad scale application of these methods may not capture structural variability in vegetation resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric models, 3-D surface reconstruction and rasterised volume, and point cloud elevation histogram modelling techniques to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r2 = 0.95) and saltmarsh (r2 > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.

  10. Scalar Quantifiers: Logic, Acquisition, and Processing

    ERIC Educational Resources Information Center

    Geurts, Bart; Katsos, Napoleon; Cummins, Chris; Moons, Jonas; Noordman, Leo

    2010-01-01

    Superlative quantifiers ("at least 3", "at most 3") and comparative quantifiers ("more than 2", "fewer than 4") are traditionally taken to be interdefinable: the received view is that "at least n" and "at most n" are equivalent to "more than n-1" and "fewer than n+1",…

  11. Two complementary approaches to quantify variability in heat resistance of spores of Bacillus subtilis.

    PubMed

    den Besten, Heidy M W; Berendsen, Erwin M; Wells-Bennik, Marjon H J; Straatsma, Han; Zwietering, Marcel H

    2017-07-17

    Realistic prediction of microbial inactivation in food requires quantitative information on variability introduced by the microorganisms. Bacillus subtilis forms heat-resistant spores, and in this study the impact of strain variability on spore heat resistance was quantified using 20 strains. In addition, experimental variability was quantified by using technical replicates per heat treatment experiment, and reproduction variability was quantified by using two biologically independent spore crops for each strain that were heat treated on different days. The fourth-decimal reduction times and z-values were estimated by a one-step and a two-step model fitting procedure. Grouping of the 20 B. subtilis strains into two statistically distinguishable groups could be confirmed based on their spore heat resistance. The reproduction variability was higher than experimental variability, but both variabilities were much lower than strain variability. The model fitting approach did not significantly affect the quantification of variability. Remarkably, when strain variability in spore heat resistance was quantified using only the strains producing low-level heat resistant spores, this strain variability was comparable with the previously reported strain variability in heat resistance of vegetative cells of Listeria monocytogenes, although in an entirely different temperature range. Strains that produced spores with high-level heat resistance showed a similar temperature range for growth as strains that produced spores with low-level heat resistance. Strain variability affected heat resistance of spores most, and therefore integration of this variability factor in modelling of spore heat resistance will make predictions more realistic. Copyright © 2017. Published by Elsevier B.V.
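
    The two-step fitting procedure mentioned above can be sketched compactly: D-values from log-linear survivor curves at each temperature, then a z-value from the slope of log10(D) against temperature. The survivor counts below are invented, and the sketch ignores the one-step (global) fit.

```python
# Sketch: two-step estimation of D-values and a z-value from survivor curves.
import numpy as np

def d_value(time_min, log10_survivors):
    """Decimal reduction time (min) from a log-linear fit at one temperature."""
    slope, _ = np.polyfit(time_min, log10_survivors, 1)
    return -1.0 / slope

def z_value(temps_c, d_values):
    """Temperature increase (C) giving a tenfold drop in D."""
    slope, _ = np.polyfit(temps_c, np.log10(d_values), 1)
    return -1.0 / slope

# Hypothetical spore survivor data (log10 CFU/mL) at three temperatures.
d100 = d_value([0, 5, 10, 15], [8.0, 7.1, 6.2, 5.3])
d105 = d_value([0, 2, 4, 6],   [8.0, 6.9, 5.8, 4.7])
d110 = d_value([0, 1, 2, 3],   [8.0, 6.4, 4.8, 3.2])
print("D-values (min):", round(d100, 2), round(d105, 2), round(d110, 2))
print("z-value (C):", round(z_value([100, 105, 110], [d100, d105, d110]), 2))
```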

  12. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
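
    One of the simpler measures, correlating pairwise cortical distances with pairwise differences in preferred stimulus value and testing significance by permutation, can be sketched as follows for a linear (non-circular) feature map; the exact estimators and null models used in the paper may differ.

```python
# Sketch: detect topography by correlating physical and feature distances,
# with a permutation test against the null of no spatial arrangement.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def topography_score(positions, preferred, n_perm=2000, seed=0):
    """positions: (n, 2) recording-site coordinates; preferred: (n,) preferred
    stimulus values (assumed linear, not circular). Returns (r, p_value)."""
    rng = np.random.default_rng(seed)
    d_space = pdist(positions)
    d_feat = pdist(np.asarray(preferred, float).reshape(-1, 1))
    r_obs, _ = pearsonr(d_space, d_feat)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(preferred)
        null[i] = pearsonr(d_space, pdist(np.asarray(perm, float).reshape(-1, 1)))[0]
    p = (np.sum(null >= r_obs) + 1) / (n_perm + 1)   # one-sided permutation p
    return r_obs, p
```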

  13. Transmutation Fuel Performance Code Thermal Model Verification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the verification of the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model's temperature calculations agree with those of the commercial software ABAQUS (Version 6.4-4). This report outlines the verification methodology, the code input, and the calculation results.

  14. Modeling trade-offs between plant fiber and toxins: a framework for quantifying risks perceived by foraging herbivores.

    PubMed

    Camp, Meghan J; Shipley, Lisa A; Johnson, Timothy R; Forbey, Jennifer Sorensen; Rachlow, Janet L; Crowell, Miranda M

    2015-12-01

    When selecting habitats, herbivores must weigh multiple risks, such as predation, starvation, toxicity, and thermal stress, forcing them to make fitness trade-offs. Here, we applied the method of paired comparisons (PC) to investigate how herbivores make trade-offs between habitat features that influence selection of food patches. The method of PC measures utility and the inverse of utility, relative risk, and makes trade-offs and indifferences explicit by forcing animals to make choices between two patches with different types of risks. Using a series of paired-choice experiments to titrate the equivalence curve and find the marginal rate of substitution for one risk over the other, we evaluated how toxin-tolerant (pygmy rabbit Brachylagus idahoensis) and fiber-tolerant (mountain cottontail rabbit Sylvilagus nuttallii) herbivores differed in their hypothesized perceived risk of fiber and toxins in food. Pygmy rabbits were willing to consume nearly five times more of the toxin 1,8-cineole in their diets to avoid consuming higher levels of fiber than were mountain cottontails. Fiber posed a greater relative risk for pygmy rabbits than cottontails and cineole a greater risk for cottontails than pygmy rabbits. Our flexible modeling approach can be used to (1) quantify how animals evaluate and trade off multiple habitat attributes when the benefits and risks are difficult to quantify, and (2) integrate diverse risks that influence fitness and habitat selection into a single index of habitat value. This index potentially could be applied to landscapes to predict habitat selection across several scales.

  15. Wave and Wind Model Performance Metrics Tools

    NASA Astrophysics Data System (ADS)

    Choi, J. K.; Wang, D. W.

    2016-02-01

    Continual improvements and upgrades of Navy ocean wave and wind models are essential to the assurance of battlespace environment predictability of ocean surface wave and surf conditions in support of Naval global operations. Thus, constant verification and validation of model performance is equally essential to assure the progress of model developments and maintain confidence in the predictions. Global and regional scale model evaluations may require large areas and long periods of time. For observational data to compare against, altimeter winds and waves along the tracks from past and current operational satellites as well as moored/drifting buoys can be used for global and regional coverage. Using data and model runs in previous trials such as the planned experiment, the Dynamics of the Adriatic in Real Time (DART), we demonstrated the use of accumulated altimeter wind and wave data over several years to obtain an objective evaluation of the performance of the SWAN (Simulating Waves Nearshore) model running in the Adriatic Sea. The assessment summarized wind and wave model performance using maps of cell-averaged statistics, including slope, correlation, and scatter index. Such a methodology is easily generalized to other regions and at global scales. Operational technology currently used by subject matter experts evaluating the Navy Coastal Ocean Model and the Hybrid Coordinate Ocean Model can be expanded to evaluate wave and wind models using tools developed for ArcMAP, a GIS application developed by ESRI. Recent inclusion of altimeter and buoy data into a format through the Naval Oceanographic Office's (NAVOCEANO) quality control system and the netCDF standards applicable to all model output makes it possible to fuse these data and perform direct model verification. Also, procedures were developed for the accumulation of match-ups of modelled and observed parameters to form a data base

  16. Performance modeling of automated manufacturing systems

    NASA Astrophysics Data System (ADS)

    Viswanadham, N.; Narahari, Y.

    A unified and systematic treatment is presented of modeling methodologies and analysis techniques for performance evaluation of automated manufacturing systems. The book is the first treatment of the mathematical modeling of manufacturing systems. Automated manufacturing systems are surveyed and three principal analytical modeling paradigms are discussed: Markov chains, queues and queueing networks, and Petri nets.
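
    As a concrete instance of the queueing paradigm, the closed-form performance measures of a single workstation modelled as an M/M/1 queue can be computed directly; the arrival and service rates below are arbitrary examples.

```python
# Sketch: M/M/1 performance measures for a single automated workstation.
def mm1_metrics(arrival_rate, service_rate):
    """Returns utilisation, mean number in system, and mean time in system."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be < service rate")
    rho = arrival_rate / service_rate           # utilisation
    l = rho / (1.0 - rho)                       # mean jobs in system
    w = 1.0 / (service_rate - arrival_rate)     # mean time in system (L = lambda * W)
    return rho, l, w

print(mm1_metrics(arrival_rate=8.0, service_rate=10.0))  # e.g. parts per hour
```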

  17. Integrated Main Propulsion System Performance Reconstruction Process/Models

    NASA Technical Reports Server (NTRS)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  18. Quantifying statistical relationships between commonly used in vitro models for estimating lead bioaccessibility.

    PubMed

    Yan, Kaihong; Dong, Zhaomin; Liu, Yanju; Naidu, Ravi

    2016-04-01

    Bioaccessibility to assess potential risks resulting from exposure to Pb-contaminated soils is commonly estimated using various in vitro methods. However, existing in vitro methods yield different results depending on the composition of the extractant as well as the contaminated soils. For this reason, the relationships between the five commonly used in vitro methods, the Relative Bioavailability Leaching Procedure (RBALP), the unified BioAccessibility Research Group Europe (BARGE) method (UBM), the Solubility Bioaccessibility Research Consortium assay (SBRC), a Physiologically Based Extraction Test (PBET), and the in vitro Digestion Model (RIVM) were quantified statistically using 10 soils from long-term Pb-contaminated mining and smelter sites located in Western Australia and South Australia. For all 10 soils, the measured Pb bioaccessibility across all in vitro methods varied from 1.9 to 106% for the gastric phase, which is higher than that for the intestinal phase: 0.2 ∼ 78.6%. The variations in Pb bioaccessibility depend on the in vitro models being used, suggesting that the method chosen for bioaccessibility assessment must be validated against in vivo studies prior to use for predicting risk. Regression studies between RBALP and SBRC, and between RBALP and RIVM (0.06) (0.06 g of soil in each tube, S:L ratios for gastric phase and intestinal phase are 1:375 and 1:958, respectively) showed that Pb bioaccessibility based on the three methods was comparable. Meanwhile, the slopes between RBALP and UBM, and between RBALP and RIVM (0.6) (0.6 g soil in each tube, S:L ratios for gastric phase and intestinal phase are 1:37.5 and 1:96, respectively) were 1.21 and 1.02, respectively. The findings presented in this study could help standardize in vitro bioaccessibility measurements and provide a scientific basis for further relating Pb bioavailability and soil properties.

  19. Quantifying groundwater dependency of riparian surface hydrologic features using the exit gradient

    EPA Science Inventory

    This study examines groundwater exit gradients as a way to quantify groundwater interactions with surface water. We calibrated high resolution groundwater models for the basin fill sediments in the lower Calapooia watershed, Oregon, using data collected between 1928--2000. The e...

  20. Quantifying carbon footprint reduction opportunities for U.S. households and communities.

    PubMed

    Jones, Christopher M; Kammen, Daniel M

    2011-05-01

    Carbon management is of increasing interest to individuals, households, and communities. In order to effectively assess and manage their climate impacts, individuals need information on the financial and greenhouse gas benefits of effective mitigation opportunities. We use consumption-based life cycle accounting techniques to quantify the carbon footprints of typical U.S. households in 28 cities for 6 household sizes and 12 income brackets. The model includes emissions embodied in transportation, energy, water, waste, food, goods, and services. We further quantify greenhouse gas and financial savings from 13 potential mitigation actions across all household types. The model suggests that the size and composition of carbon footprints vary dramatically between geographic regions and within regions based on basic demographic characteristics. Despite these differences, large cash-positive carbon footprint reductions are evident across all household types and locations; however, realizing this potential may require tailoring policies and programs to different population segments with very different carbon footprint profiles. The results of this model have been incorporated into an open access online carbon footprint management tool designed to enable behavior change at the household level through personalized feedback.

  1. Quantifying Power Grid Risk from Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. This electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive relative risk maps by location by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.

  2. Quantifying the Combined Effect of Radiation Therapy and Hyperthermia in Terms of Equivalent Dose Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kok, H. Petra; Crezee, Johannes; Franken, Nicolaas A.P.

    2014-03-01

    Purpose: To develop a method to quantify the therapeutic effect of radiosensitization by hyperthermia; to this end, a numerical method was proposed to convert radiation therapy dose distributions with hyperthermia to equivalent dose distributions without hyperthermia. Methods and Materials: Clinical intensity modulated radiation therapy plans were created for 15 prostate cancer cases. To simulate a clinically relevant heterogeneous temperature distribution, hyperthermia treatment planning was performed for heating with the AMC-8 system. The temperature-dependent parameters α (Gy⁻¹) and β (Gy⁻²) of the linear–quadratic model for prostate cancer were estimated from the literature. No thermal enhancement was assumed for normal tissue. The intensity modulated radiation therapy plans and temperature distributions were exported to our in-house-developed radiation therapy treatment planning system, APlan, and equivalent dose distributions without hyperthermia were calculated voxel by voxel using the linear–quadratic model. Results: The planned average tumor temperatures T90, T50, and T10 in the planning target volume were 40.5°C, 41.6°C, and 42.4°C, respectively. The planned minimum, mean, and maximum radiation therapy doses were 62.9 Gy, 76.0 Gy, and 81.0 Gy, respectively. Adding hyperthermia yielded an equivalent dose distribution with an extended 95% isodose level. The equivalent minimum, mean, and maximum doses reflecting the radiosensitization by hyperthermia were 70.3 Gy, 86.3 Gy, and 93.6 Gy, respectively, for a linear increase of α with temperature. This can be considered similar to a dose escalation with a substantial increase in tumor control probability for high-risk prostate carcinoma. Conclusion: A model to quantify the effect of combined radiation therapy and hyperthermia in terms of equivalent dose distributions was presented. This model is particularly instructive to estimate the potential effects of interaction from
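
    The voxel-by-voxel conversion amounts to equating biological effect under the linear-quadratic model with and without thermal enhancement and solving for the equivalent dose; a sketch follows, in which the α/β values and the assumed linear increase of α with temperature are placeholders rather than the values estimated in the study.

```python
# Sketch: per-voxel equivalent dose without hyperthermia, solving
# alpha0*deq + beta0*deq^2 = alpha(T)*d + beta(T)*d^2 for deq.
import numpy as np

def equivalent_dose(d, temp_c, alpha0=0.15, beta0=0.05,
                    alpha_slope=0.02, t_ref=37.0):
    """alpha is assumed to increase linearly with temperature above t_ref and
    beta is kept constant (illustrative choices, not the study's parameters)."""
    alpha_t = alpha0 + alpha_slope * np.maximum(temp_c - t_ref, 0.0)
    effect = alpha_t * d + beta0 * d**2                 # biological effect E
    return (-alpha0 + np.sqrt(alpha0**2 + 4 * beta0 * effect)) / (2 * beta0)

dose = np.array([2.0, 2.0, 2.0])        # Gy per fraction in three voxels
temp = np.array([37.0, 41.0, 43.0])     # simulated hyperthermia temperatures
print(equivalent_dose(dose, temp))      # rises with temperature, as expected
```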

  3. Active imaging system performance model for target acquisition

    NASA Astrophysics Data System (ADS)

    Espinola, Richard L.; Teaney, Brian; Nguyen, Quang; Jacobs, Eddie L.; Halford, Carl E.; Tofsted, David H.

    2007-04-01

    The U.S. Army RDECOM CERDEC Night Vision & Electronic Sensors Directorate has developed a laser-range-gated imaging system performance model for the detection, recognition, and identification of vehicle targets. The model is based on the established US Army RDECOM CERDEC NVESD sensor performance models of the human system response through an imaging system. The Java-based model, called NVLRG, accounts for the effects of active illumination, atmospheric attenuation, and turbulence relevant to LRG imagers, such as speckle and scintillation, and for the critical sensor and display components. This model can be used to assess the performance of recently proposed active SWIR systems through various trade studies. This paper will describe the NVLRG model in detail, discuss the validation of recent model components, present initial trade study results, and outline plans to validate and calibrate the end-to-end model with field data through human perception testing.

  4. Quantifying and Characterizing Tonic Thermal Pain Across Subjects From EEG Data Using Random Forest Models.

    PubMed

    Vijayakumar, Vishal; Case, Michelle; Shirinpour, Sina; He, Bin

    2017-12-01

    Effective pain assessment and management strategies are needed to better manage pain. In addition to self-report, an objective pain assessment system can provide a more complete picture of the neurophysiological basis for pain. In this study, a robust and accurate machine learning approach is developed to quantify tonic thermal pain across healthy subjects into a maximum of ten distinct classes. A random forest model was trained to predict pain scores using time-frequency wavelet representations of independent components obtained from electroencephalography (EEG) data, and the relative importance of each frequency band to pain quantification is assessed. The mean classification accuracy for predicting pain on an independent test subject for a range of 1-10 is 89.45%, the highest among existing state-of-the-art quantification algorithms for EEG. The gamma band is the most important to both intersubject and intrasubject classification accuracy. The robustness and generalizability of the classifier are demonstrated. Our results demonstrate the potential of this tool to be used clinically to help improve chronic pain treatment and establish spectral biomarkers for future pain-related studies using EEG.
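
    A minimal sketch of the classification step described above, assuming the time-frequency wavelet features of the EEG independent components have already been extracted into a feature matrix; the arrays and hyperparameters below are placeholders, not the study's:

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score

      # Placeholder data: rows = EEG epochs, columns = time-frequency wavelet
      # features of independent components; labels are pain scores 1-10.
      rng = np.random.default_rng(0)
      X_train, y_train = rng.normal(size=(400, 64)), rng.integers(1, 11, size=400)
      X_test, y_test = rng.normal(size=(100, 64)), rng.integers(1, 11, size=100)

      # Train a random forest to predict the pain class of each epoch.
      clf = RandomForestClassifier(n_estimators=500, random_state=0)
      clf.fit(X_train, y_train)

      # Evaluate on epochs from an independent (held-out) test subject.
      print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))

      # Relative importance of each feature (e.g., grouped by frequency band).
      band_importance = clf.feature_importances_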

  5. Quantifying predictive capability of electronic health records for the most harmful breast cancer

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Fan, Jun; Peissig, Peggy; Berg, Richard; Tafti, Ahmad Pahlavan; Yin, Jie; Yuan, Ming; Page, David; Cox, Jennifer; Burnside, Elizabeth S.

    2018-03-01

    Improved prediction of the "most harmful" breast cancers that cause the most substantive morbidity and mortality would enable physicians to target more intense screening and preventive measures at those women who have the highest risk; however, such prediction models for the "most harmful" breast cancers have rarely been developed. Electronic health records (EHRs) represent an underused data source that has great research and clinical potential. Our goal was to quantify the value of EHR variables in the "most harmful" breast cancer risk prediction. We identified 794 subjects who had breast cancer with primary non-benign tumors with their earliest diagnosis on or after 1/1/2004 from an existing personalized medicine data repository, including 395 "most harmful" breast cancer cases and 399 "least harmful" breast cancer cases. For these subjects, we collected EHR data comprising 6 components: demographics, diagnoses, symptoms, procedures, medications, and laboratory results. We developed two regularized prediction models, Ridge Logistic Regression (Ridge-LR) and Lasso Logistic Regression (Lasso-LR), to predict the "most harmful" breast cancer one year in advance. The area under the ROC curve (AUC) was used to assess model performance. We observed that the AUCs of Ridge-LR and Lasso-LR models were 0.818 and 0.839 respectively. For both the Ridge-LR and Lasso-LR models, the predictive performance using the full set of EHR variables was significantly higher than that using each individual component (p<0.001). In conclusion, EHR variables can be used to predict the "most harmful" breast cancer, providing the possibility to personalize care for those women at the highest risk in clinical practice.

  6. Quantifying predictive capability of electronic health records for the most harmful breast cancer.

    PubMed

    Wu, Yirong; Fan, Jun; Peissig, Peggy; Berg, Richard; Tafti, Ahmad Pahlavan; Yin, Jie; Yuan, Ming; Page, David; Cox, Jennifer; Burnside, Elizabeth S

    2018-02-01

    Improved prediction of the "most harmful" breast cancers that cause the most substantive morbidity and mortality would enable physicians to target more intense screening and preventive measures at those women who have the highest risk; however, such prediction models for the "most harmful" breast cancers have rarely been developed. Electronic health records (EHRs) represent an underused data source that has great research and clinical potential. Our goal was to quantify the value of EHR variables in the "most harmful" breast cancer risk prediction. We identified 794 subjects who had breast cancer with primary non-benign tumors with their earliest diagnosis on or after 1/1/2004 from an existing personalized medicine data repository, including 395 "most harmful" breast cancer cases and 399 "least harmful" breast cancer cases. For these subjects, we collected EHR data comprising 6 components: demographics, diagnoses, symptoms, procedures, medications, and laboratory results. We developed two regularized prediction models, Ridge Logistic Regression (Ridge-LR) and Lasso Logistic Regression (Lasso-LR), to predict the "most harmful" breast cancer one year in advance. The area under the ROC curve (AUC) was used to assess model performance. We observed that the AUCs of Ridge-LR and Lasso-LR models were 0.818 and 0.839 respectively. For both the Ridge-LR and Lasso-LR models, the predictive performance using the full set of EHR variables was significantly higher than that using each individual component (p<0.001). In conclusion, EHR variables can be used to predict the "most harmful" breast cancer, providing the possibility to personalize care for those women at the highest risk in clinical practice.
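
    A minimal sketch of the two regularized models and the AUC comparison described above; the EHR feature construction is not reproduced, and X, y below are placeholders for the EHR feature matrix and the most-harmful/least-harmful labels:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      # Placeholder EHR feature matrix and binary labels (1 = "most harmful").
      rng = np.random.default_rng(0)
      X, y = rng.normal(size=(794, 200)), rng.integers(0, 2, size=794)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      # Ridge-LR (L2 penalty) and Lasso-LR (L1 penalty).
      ridge = LogisticRegression(penalty="l2", C=1.0, max_iter=5000)
      lasso = LogisticRegression(penalty="l1", C=1.0, solver="liblinear", max_iter=5000)

      for name, model in [("Ridge-LR", ridge), ("Lasso-LR", lasso)]:
          model.fit(X_tr, y_tr)
          auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
          print(f"{name} AUC: {auc:.3f}")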

  7. A statistical model for predicting muscle performance

    NASA Astrophysics Data System (ADS)

    Byerly, Diane Leslie De Caix

    The objective of these studies was to develop a capability for predicting muscle performance and fatigue to be utilized for both space- and ground-based applications. To develop this predictive model, healthy test subjects performed a defined, repetitive dynamic exercise to failure using a Lordex spinal machine. Throughout the exercise, surface electromyography (SEMG) data were collected from the erector spinae using a Mega Electronics ME3000 muscle tester and surface electrodes placed on both sides of the back muscle. These data were analyzed using a 5th order Autoregressive (AR) model and statistical regression analysis. It was determined that an AR derived parameter, the mean average magnitude of AR poles, significantly correlated with the maximum number of repetitions (designated Rmax) that a test subject was able to perform. Using the mean average magnitude of AR poles, a test subject's performance to failure could be predicted as early as the sixth repetition of the exercise. This predictive model has the potential to provide a basis for improving post-space flight recovery, monitoring muscle atrophy in astronauts and assessing the effectiveness of countermeasures, monitoring astronaut performance and fatigue during Extravehicular Activity (EVA) operations, providing pre-flight assessment of the ability of an EVA crewmember to perform a given task, improving the design of training protocols and simulations for strenuous International Space Station assembly EVA, and enabling EVA work task sequences to be planned enhancing astronaut performance and safety. Potential ground-based, medical applications of the predictive model include monitoring muscle deterioration and performance resulting from illness, establishing safety guidelines in the industry for repetitive tasks, monitoring the stages of rehabilitation for muscle-related injuries sustained in sports and accidents, and enhancing athletic performance through improved training protocols while reducing
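
    A minimal sketch of the AR-derived fatigue marker described above, assuming a raw SEMG segment for one repetition is available as a 1-D array; the regression of this marker against Rmax and the study's preprocessing are not reproduced:

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg

      # Placeholder SEMG segment for one exercise repetition.
      rng = np.random.default_rng(0)
      semg = rng.normal(size=2000)

      # Fit a 5th-order autoregressive model to the segment.
      res = AutoReg(semg, lags=5).fit()
      a = np.asarray(res.params)[1:]   # AR coefficients a1..a5 (index 0 is the constant)

      # Poles are the roots of z^5 - a1*z^4 - ... - a5 (the AR characteristic polynomial).
      poles = np.roots(np.concatenate(([1.0], -a)))

      # Mean magnitude of the AR poles, the quantity reported to correlate with Rmax.
      mean_pole_magnitude = np.abs(poles).mean()
      print(mean_pole_magnitude)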

  8. Quantifying ubiquitin signaling.

    PubMed

    Ordureau, Alban; Münch, Christian; Harper, J Wade

    2015-05-21

    Ubiquitin (UB)-driven signaling systems permeate biology, and are often integrated with other types of post-translational modifications (PTMs), including phosphorylation. Flux through such pathways is dictated by the fractional stoichiometry of distinct modifications and protein assemblies as well as the spatial organization of pathway components. Yet, we rarely understand the dynamics and stoichiometry of rate-limiting intermediates along a reaction trajectory. Here, we review how quantitative proteomic tools and enrichment strategies are being used to quantify UB-dependent signaling systems, and to integrate UB signaling with regulatory phosphorylation events, illustrated with the PINK1/PARKIN pathway. A key feature of ubiquitylation is that the identity of UB chain linkage types can control downstream processes. We also describe how proteomic and enzymological tools can be used to identify and quantify UB chain synthesis and linkage preferences. The emergence of sophisticated quantitative proteomic approaches will set a new standard for elucidating biochemical mechanisms of UB-driven signaling systems. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Quantifying Ubiquitin Signaling

    PubMed Central

    Ordureau, Alban; Münch, Christian; Harper, J. Wade

    2015-01-01

    Ubiquitin (UB)-driven signaling systems permeate biology, and are often integrated with other types of post-translational modifications (PTMs), most notably phosphorylation. Flux through such pathways is typically dictated by the fractional stoichiometry of distinct regulatory modifications and protein assemblies as well as the spatial organization of pathway components. Yet, we rarely understand the dynamics and stoichiometry of rate-limiting intermediates along a reaction trajectory. Here, we review how quantitative proteomic tools and enrichment strategies are being used to quantify UB-dependent signaling systems, and to integrate UB signaling with regulatory phosphorylation events. A key regulatory feature of ubiquitylation is that the identity of UB chain linkage types can control downstream processes. We also describe how proteomic and enzymological tools can be used to identify and quantify UB chain synthesis and linkage preferences. The emergence of sophisticated quantitative proteomic approaches will set a new standard for elucidating biochemical mechanisms of UB-driven signaling systems. PMID:26000850

  10. MorphoGraphX: A platform for quantifying morphogenesis in 4D.

    PubMed

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne H K; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-05-06

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), a software platform that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.

  11. Information-Theoretic Performance Analysis of Sensor Networks via Markov Modeling of Time Series Data.

    PubMed

    Li, Yue; Jha, Devesh K; Ray, Asok; Wettergren, Thomas A

    2018-06-01

    This paper presents information-theoretic performance analysis of passive sensor networks for detection of moving targets. The proposed method falls largely under the category of data-level information fusion in sensor networks. To this end, a measure of information contribution for sensors is formulated in a symbolic dynamics framework. The network information state is approximately represented as the largest principal component of the time series collected across the network. To quantify each sensor's contribution for generation of the information content, Markov machine models as well as x-Markov (pronounced as cross-Markov) machine models, conditioned on the network information state, are constructed; the difference between the conditional entropies of these machines is then treated as an approximate measure of information contribution by the respective sensors. The x-Markov models represent the conditional temporal statistics given the network information state. The proposed method has been validated on experimental data collected from a local area network of passive sensors for target detection, where the statistical characteristics of environmental disturbances are similar to those of the target signal in the sense of time scale and texture. A distinctive feature of the proposed algorithm is that the network decisions are independent of the behavior and identity of the individual sensors, which is desirable from computational perspectives. Results are presented to demonstrate the proposed method's efficacy to correctly identify the presence of a target with very low false-alarm rates. The performance of the underlying algorithm is compared with that of a recent data-driven, feature-level information fusion algorithm. It is shown that the proposed algorithm outperforms the other algorithm.
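
    A minimal sketch of the entropy-based contribution measure described above, assuming each sensor's time series and the network information state have already been discretized into a small symbol alphabet; the symbolization, PCA step, and experimental details are not reproduced, and the sequences below are placeholders:

      import numpy as np

      def conditional_entropy(x_next, cond):
          """Estimate H(x_{t+1} | conditioning symbol) from symbol counts."""
          joint, cond_counts = {}, {}
          for xn, c in zip(x_next, cond):
              joint[(c, xn)] = joint.get((c, xn), 0) + 1
              cond_counts[c] = cond_counts.get(c, 0) + 1
          total = len(x_next)
          return -sum((n / total) * np.log2(n / cond_counts[c])
                      for (c, _xn), n in joint.items())

      rng = np.random.default_rng(0)
      sensor = rng.integers(0, 4, size=5000)   # symbolized sensor time series
      net = rng.integers(0, 4, size=5000)      # symbolized network information state

      # Markov machine of the sensor: H(s_{t+1} | s_t).
      h_markov = conditional_entropy(sensor[1:], sensor[:-1])

      # x-Markov machine: sensor symbols conditioned on the network state.
      h_xmarkov = conditional_entropy(sensor[1:], net[:-1])

      # Difference of conditional entropies, used here as the sensor's
      # approximate information contribution.
      print(h_markov - h_xmarkov)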

  12. Assessment of multislice CT to quantify pulmonary emphysema function and physiology in a rat model

    NASA Astrophysics Data System (ADS)

    Cao, Minsong; Stantz, Keith M.; Liang, Yun; Krishnamurthi, Ganapathy; Presson, Robert G., Jr.

    2005-04-01

    Purpose: The purpose of this study is to evaluate multi-slice computed tomography technology to quantify functional and physiologic changes in rats with pulmonary emphysema. Method: Seven rats were scanned using a 16-slice CT (Philips MX8000 IDT) before and after artificial inducement of emphysema. Functional parameters, i.e. lung volumes, were measured by non-contrast spiral scan during forced breath-hold at inspiration and expiration, followed by image segmentation based on an attenuation threshold. Dynamic CT imaging was performed immediately following the contrast injection to estimate physiology changes. Pulmonary perfusion, fractional blood volume, and mean transit times (MTTs) were estimated by fitting the time-density curves of contrast material using a compartmental model. Results: The preliminary results indicated that the lung volumes of emphysema rats increased by 3.52 ± 1.70 mL (p<0.002) at expiration and 4.77 ± 3.34 mL (p<0.03) at inspiration. The mean lung densities of emphysema rats decreased by 91.76 ± 68.11 HU (p<0.01) at expiration and low attenuation areas increased by 5.21 ± 3.88% (p<0.04) at inspiration compared with normal rats. The perfusion values for normal and emphysema rats were 0.25 ± 0.04 ml/s/ml and 0.32 ± 0.09 ml/s/ml, respectively. The fractional blood volumes for normal and emphysema rats were 0.21 ± 0.04 and 0.15 ± 0.02. There was a trend toward faster MTTs for emphysema rats (0.42 ± 0.08 s) than normal rats (0.89 ± 0.19 s) with p<0.006, suggesting that blood flow crossing the capillaries increases as the capillary volume decreases, which may cause the red blood cells to leave the capillaries incompletely saturated with oxygen if the MTTs become too short. Conclusion: Quantitative measurement using CT of structural and functional changes in pulmonary emphysema appears promising for small animals.
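
    A minimal sketch of the kind of compartmental fit described above, assuming an arterial input function and a tissue time-density curve have already been extracted from the dynamic CT series; a single-compartment (exponential residue) model is used purely for illustration and is not necessarily the exact model of the study:

      import numpy as np
      from scipy.optimize import curve_fit

      # Placeholder curves: t in seconds, c_art = arterial input, c_tis = tissue curve.
      t = np.linspace(0.0, 30.0, 121)
      dt = t[1] - t[0]
      c_art = np.exp(-((t - 8.0) / 2.5) ** 2)                        # synthetic bolus
      c_tis = 0.3 * dt * np.convolve(c_art, np.exp(-t / 0.8))[: t.size]
      c_tis += np.random.default_rng(0).normal(0.0, 0.002, t.size)   # add noise

      def model(t, flow, mtt):
          # Tissue curve = flow * (arterial input convolved with residue exp(-t/MTT)).
          return flow * dt * np.convolve(c_art, np.exp(-t / mtt))[: t.size]

      (flow, mtt), _ = curve_fit(model, t, c_tis, p0=[0.1, 1.0],
                                 bounds=([0.0, 0.01], [10.0, 10.0]))
      blood_volume = flow * mtt        # central volume theorem: V = F * MTT
      print(flow, mtt, blood_volume)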

  13. Performance Prediction of a Synchronization Link for Distributed Aerospace Wireless Systems

    PubMed Central

    Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in monostatic cases. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With the mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions including oscillator, phase-locked loop, and receiver noise are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link. PMID:23970828

  14. Performance prediction of a synchronization link for distributed aerospace wireless systems.

    PubMed

    Wang, Wen-Qin; Shao, Huaizong

    2013-01-01

    For reasons of stealth and other operational advantages, distributed aerospace wireless systems have received much attention in recent years. In a distributed aerospace wireless system, since the transmitter and receiver are placed on separate platforms that use independent master oscillators, there is no cancellation of low-frequency phase noise as in monostatic cases. Thus, highly accurate time and frequency synchronization techniques are required for distributed wireless systems. The use of a dedicated synchronization link to quantify and compensate oscillator frequency instability is investigated in this paper. With the mathematical statistical models of phase noise, closed-form analytic expressions for the synchronization link performance are derived. The possible error contributions including oscillator, phase-locked loop, and receiver noise are quantified. The link synchronization performance is predicted by utilizing the knowledge of the statistical models, system error contributions, and sampling considerations. Simulation results show that effective synchronization error compensation can be achieved by using this dedicated synchronization link.

  15. Developing an Energy Performance Modeling Startup Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, A.

    2012-10-01

    In 2011, the NAHB Research Center began the first part of the multi-year effort by assessing the needs and motivations of residential remodelers regarding energy performance remodeling. The scope is multifaceted - all perspectives will be sought related to remodeling firms ranging in size from small-scale, sole proprietor to national. This will allow the Research Center to gain a deeper understanding of the remodeling and energy retrofit business and the needs of contractors when offering energy upgrade services. To determine the gaps and the motivation for energy performance remodeling, the NAHB Research Center conducted (1) an initial series of focus groups with remodelers at the 2011 International Builders' Show, (2) a second series of focus groups with remodelers at the NAHB Research Center in conjunction with the NAHB Spring Board meeting in DC, and (3) quantitative market research with remodelers based on the findings from the focus groups. The goal was threefold, to: Understand the current remodeling industry and the role of energy efficiency; Identify the gaps and barriers to adding energy efficiency into remodeling; and Quantify and prioritize the support needs of professional remodelers to increase sales and projects involving improving home energy efficiency. This report outlines all three of these tasks with remodelers.

  16. Quantifying performance and effects of load carriage during a challenging balancing task using an array of wireless inertial sensors.

    PubMed

    Cain, Stephen M; McGinnis, Ryan S; Davidson, Steven P; Vitali, Rachel V; Perkins, Noel C; McLean, Scott G

    2016-01-01

    We utilize an array of wireless inertial measurement units (IMUs) to measure the movements of subjects (n=30) traversing an outdoor balance beam (zigzag and sloping) as quickly as possible both with and without load (20.5kg). Our objectives are: (1) to use IMU array data to calculate metrics that quantify performance (speed and stability) and (2) to investigate the effects of load on performance. We hypothesize that added load significantly decreases subject speed yet results in increased stability of subject movements. We propose and evaluate five performance metrics: (1) time to cross beam (less time=more speed), (2) percentage of total time spent in double support (more double support time=more stable), (3) stride duration (longer stride duration=more stable), (4) ratio of sacrum M-L to A-P acceleration (lower ratio=less lateral balance corrections=more stable), and (5) M-L torso range of motion (smaller range of motion=less balance corrections=more stable). We find that the total time to cross the beam increases with load (t=4.85, p<0.001). Stability metrics also change significantly with load, all indicating increased stability. In particular, double support time increases (t=6.04, p<0.001), stride duration increases (t=3.436, p=0.002), the ratio of sacrum acceleration RMS decreases (t=-5.56, p<0.001), and the M-L torso lean range of motion decreases (t=-2.82, p=0.009). Overall, the IMU array successfully measures subject movement and gait parameters that reveal the trade-off between speed and stability in this highly dynamic balance task. Copyright © 2015 Elsevier B.V. All rights reserved.
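
    A minimal sketch of two of the acceleration-based stability metrics described above (metrics 4 and 5), assuming the sacrum accelerations and torso lean angle have already been resolved into anatomical (M-L, A-P) axes; the signals below are placeholders:

      import numpy as np

      def rms(x):
          return np.sqrt(np.mean(np.square(x)))

      # Placeholder signals for one beam traversal.
      rng = np.random.default_rng(0)
      sacrum_ml_acc = rng.normal(0.0, 0.8, 2000)              # medio-lateral acceleration
      sacrum_ap_acc = rng.normal(0.0, 1.2, 2000)              # anterior-posterior acceleration
      torso_ml_lean = np.cumsum(rng.normal(0.0, 0.05, 2000))  # M-L torso lean angle

      # Metric 4: ratio of sacrum M-L to A-P acceleration RMS
      # (lower ratio = fewer lateral balance corrections = more stable).
      ml_ap_ratio = rms(sacrum_ml_acc) / rms(sacrum_ap_acc)

      # Metric 5: M-L torso range of motion (smaller = more stable).
      torso_rom = torso_ml_lean.max() - torso_ml_lean.min()

      print(ml_ap_ratio, torso_rom)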

  17. Quantifying the sensitivity of post-glacial sea level change to laterally varying viscosity

    NASA Astrophysics Data System (ADS)

    Crawford, Ophelia; Al-Attar, David; Tromp, Jeroen; Mitrovica, Jerry X.; Austermann, Jacqueline; Lau, Harriet C. P.

    2018-05-01

    We present a method for calculating the derivatives of measurements of glacial isostatic adjustment (GIA) with respect to the viscosity structure of the Earth and the ice sheet history. These derivatives, or kernels, quantify the linearised sensitivity of measurements to the underlying model parameters. The adjoint method is used to enable efficient calculation of theoretically exact sensitivity kernels within laterally heterogeneous earth models that can have a range of linear or non-linear viscoelastic rheologies. We first present a new approach to calculate GIA in the time domain, which, in contrast to the more usual formulation in the Laplace domain, is well suited to continuously varying earth models and to the use of the adjoint method. Benchmarking results show excellent agreement between our formulation and previous methods. We illustrate the potential applications of the kernels calculated in this way through a range of numerical calculations relative to a spherically symmetric background model. The complex spatial patterns of the sensitivities are not intuitive, and this is the first time that such effects are quantified in an efficient and accurate manner.

  18. Multimodel inference to quantify the relative importance of abiotic factors in the population dynamics of marine zooplankton

    NASA Astrophysics Data System (ADS)

    Everaert, Gert; Deschutter, Yana; De Troch, Marleen; Janssen, Colin R.; De Schamphelaere, Karel

    2018-05-01

    The effect of multiple stressors on marine ecosystems remains poorly understood and most of the knowledge available is related to phytoplankton. To partly address this knowledge gap, we tested whether combining multimodel inference with generalized additive modelling could quantify the relative contribution of environmental variables to the population dynamics of a zooplankton species in the Belgian part of the North Sea. Hence, we have quantified the relative contribution of oceanographic variables (e.g. water temperature, salinity, nutrient concentrations, and chlorophyll a concentrations) and anthropogenic chemicals (i.e. polychlorinated biphenyls) to the density of Acartia clausi. We found that models with water temperature and chlorophyll a concentration explained ca. 73% of the variation in the population density of the marine copepod. Multimodel inference in combination with regression-based models is a generic way to disentangle and quantify multiple stressor-induced changes in marine ecosystems. Future-oriented simulations of copepod densities suggested increased copepod densities under predicted environmental changes.
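
    A minimal sketch of the multimodel-inference step described above, assuming a set of candidate models of copepod density has already been fitted and their AIC values collected; the GAM fitting itself is not reproduced, and the model names and AIC values below are hypothetical. Akaike weights are one standard way to express the relative support for each model and each predictor:

      import numpy as np

      # Hypothetical AIC values for candidate models, keyed by their predictors.
      aic = {
          "temperature + chlorophyll_a": 412.3,
          "temperature + salinity": 420.9,
          "chlorophyll_a + nutrients": 425.1,
          "temperature + chlorophyll_a + PCBs": 413.8,
      }

      models = list(aic)
      delta = np.array([aic[m] for m in models]) - min(aic.values())
      weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()   # Akaike weights

      # Relative importance of a predictor = sum of weights of models containing it.
      importance = {
          var: sum(w for m, w in zip(models, weights) if var in m)
          for var in ["temperature", "chlorophyll_a", "salinity", "nutrients", "PCBs"]
      }
      print({m: round(float(w), 3) for m, w in zip(models, weights)})
      print({k: round(float(v), 3) for k, v in importance.items()})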

  19. Quantifying the effects of pesticide exposure on annual reproductive success of birds

    EPA Science Inventory

    The Markov chain nest productivity model (MCnest) was developed for quantifying the effects of specific pesticide-use scenarios on the annual reproductive success of simulated populations of birds. Each nesting attempt is divided into a series of discrete phases (e.g., egg layin...

  20. Petascale computation performance of lightweight multiscale cardiac models using hybrid programming models.

    PubMed

    Pope, Bernard J; Fitch, Blake G; Pitman, Michael C; Rice, John J; Reumann, Matthias

    2011-01-01

    Future multiscale and multiphysics models must use the power of high performance computing (HPC) systems to enable research into human disease, translational medical science, and treatment. Previously we showed that computationally efficient multiscale models will require the use of sophisticated hybrid programming models, mixing distributed message passing processes (e.g. the message passing interface (MPI)) with multithreading (e.g. OpenMP, POSIX pthreads). The objective of this work is to compare the performance of such hybrid programming models when applied to the simulation of a lightweight multiscale cardiac model. Our results show that the hybrid models do not perform favourably when compared to an implementation using only MPI which is in contrast to our results using complex physiological models. Thus, with regards to lightweight multiscale cardiac models, the user may not need to increase programming complexity by using a hybrid programming approach. However, considering that model complexity will increase as well as the HPC system size in both node count and number of cores per node, it is still foreseeable that we will achieve faster than real time multiscale cardiac simulations on these systems using hybrid programming models.

  1. Evaluating Models of Human Performance: Safety-Critical Systems Applications

    NASA Technical Reports Server (NTRS)

    Feary, Michael S.

    2012-01-01

    This presentation is part of a panel discussion on Evaluating Models of Human Performance. The purpose of this panel is to discuss the increasing use of models in the world today and specifically focus on how to describe and evaluate models of human performance. My presentation will focus on discussions of generating distributions of performance, and the evaluation of different strategies for humans performing tasks with mixed-initiative (Human-Automation) systems. I will also discuss issues with how to provide Human Performance modeling data to support decisions on acceptability and tradeoffs in the design of safety-critical systems. I will conclude with challenges for the future.

  2. A hybrid framework for quantifying the influence of data in hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Wright, David P.; Thyer, Mark; Westra, Seth; McInerney, David

    2018-06-01

    Influence diagnostics aim to identify a small number of influential data points that have a disproportionate impact on the model parameters and/or predictions. The key issues with current influence diagnostic techniques are that the regression-theory approaches do not provide hydrologically relevant influence metrics, while the case-deletion approaches are computationally expensive to calculate. The main objective of this study is to introduce a new two-stage hybrid framework that overcomes these challenges, by delivering hydrologically relevant influence metrics in a computationally efficient manner. Stage one uses computationally efficient regression-theory influence diagnostics to identify the most influential points based on Cook's distance. Stage two then uses case-deletion influence diagnostics to quantify the influence of points using hydrologically relevant metrics. To illustrate the application of the hybrid framework, we conducted three experiments on 11 hydro-climatologically diverse Australian catchments using the GR4J hydrological model. The first experiment investigated how many data points from stage one need to be retained in order to reliably identify those points that have the highest influence on hydrologically relevant metrics. We found that a choice of 30-50 is suitable for hydrological applications similar to those explored in this study (30 points identified the most influential data 98% of the time and reduced the required recalibrations by 99% for a 10 year calibration period). The second experiment found little evidence of a change in the magnitude of influence with increasing calibration period length from 1, 2, 5 to 10 years. Even for 10 years the impact of influential points can still be high (>30% influence on maximum predicted flows). The third experiment compared the standard least squares (SLS) objective function with the weighted least squares (WLS) objective function on a 10 year calibration period. In two out of three flow
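
    A minimal sketch of the two-stage idea described above, using an ordinary least squares surrogate in place of the GR4J hydrological model so that both stages are visible in a few lines; the data, the number of retained points, and the stage-two influence metric are all placeholders:

      import numpy as np
      import statsmodels.api as sm

      # Placeholder calibration data: X = predictors (with intercept), y = observations.
      rng = np.random.default_rng(0)
      X = sm.add_constant(rng.normal(size=(365, 3)))
      y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(0.0, 0.5, 365)

      # Stage 1: cheap regression-theory diagnostics -- rank points by Cook's distance.
      fit = sm.OLS(y, X).fit()
      cooks_d = fit.get_influence().cooks_distance[0]
      candidates = np.argsort(cooks_d)[::-1][:30]      # retain, e.g., the top 30 points

      # Stage 2: case-deletion diagnostics -- recalibrate without each candidate point
      # and record a hydrologically relevant change (here: change in the maximum prediction).
      influence = {}
      for i in candidates:
          keep = np.delete(np.arange(len(y)), i)
          refit = sm.OLS(y[keep], X[keep]).fit()
          influence[int(i)] = abs(refit.predict(X).max() - fit.predict(X).max())

      print(sorted(influence.items(), key=lambda kv: -kv[1])[:5])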

  3. Quantifying three dimensional reconnection in fragmented current layers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wyper, P. F., E-mail: peter.f.wyper@nasa.gov; Hesse, M., E-mail: michael.hesse-1@nasa.gov

    There is growing evidence that when magnetic reconnection occurs in high Lundquist number plasmas such as in the Solar Corona or the Earth's Magnetosphere it does so within a fragmented, rather than a smooth current layer. Within the extent of these fragmented current regions, the associated magnetic flux transfer and energy release occur simultaneously in many different places. This investigation focuses on how best to quantify the rate at which reconnection occurs in such layers. An analytical theory is developed which describes the manner in which new connections form within fragmented current layers in the absence of magnetic nulls. It is shown that the collective rate at which new connections form can be characterized by two measures: a total rate, which measures the true rate at which new connections are formed, and a net rate, which measures the net change of connection associated with the largest value of the integral of E∥ through all of the non-ideal regions. Two simple analytical models are presented which demonstrate how each should be applied and what they quantify.
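
    Written out, the two measures described above take the following form (a hedged restatement of the abstract, where C_i denotes the field line threading the i-th non-ideal region and E∥ the parallel electric field):

      \[
      \left(\frac{d\Phi}{dt}\right)_{\mathrm{total}} = \sum_{i} \left| \int_{C_i} E_{\parallel}\, dl \right| ,
      \qquad
      \left(\frac{d\Phi}{dt}\right)_{\mathrm{net}} = \max_{i} \left| \int_{C_i} E_{\parallel}\, dl \right| .
      \]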

  4. Modeling of Spark Gap Performance

    DTIC Science & Technology

    1983-06-01

    MODELING OF SPARK GAP PERFORMANCE* A. L. Donaldson, R. Ness, M. Hagler, M. Kristiansen, Department of Electrical Engineering, and L. L. Hatfield ... gas pressure, and charging rate on the voltage stability of high energy spark gaps is discussed. Implications of the model include changes in ... an extremely useful, and physically reasonable, framework from which the properties of spark gaps under a wide variety of experimental conditions

  5. Quantifying Stove Emissions Related to Different Use Patterns for the Silver mini (Small Turkish) Space Heating Stove

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy; Lunden, Melissa; Wilson, Daniel

    2012-08-01

    Air pollution levels in Ulaanbaatar, Mongolia’s capital, are among the highest in the world. A primary source of this pollution is emissions from traditional coal-burning space heating stoves used in the Ger (tent) regions around Ulaanbaatar. Significant investment has been made to replace traditional heating stoves with improved low-emission, high-efficiency stoves. Testing performed to support selection of replacement stoves or for optimizing performance may not be representative of true field performance of the improved stoves. Field observations and lab measurements indicate that performance is impacted, often adversely, by how stoves are actually being used in the field. The objective of this project is to identify factors that influence stove emissions under typical field operating conditions and to quantify the impact of these factors. A highly instrumented stove testing facility was constructed to allow for rapid and precise adjustment of factors influencing stove performance. Tests were performed using one of the improved stove models currently available in Ulaanbaatar. Complete burn cycles were conducted with Nailakh coal from the Ulaanbaatar region using various startup parameters, refueling conditions, and fuel characteristics. Measurements were collected simultaneously from undiluted chimney gas, diluted gas drawn directly from the chimney, and plume gas collected from a dilution tunnel above the chimney. CO, CO2, O2, temperature, pressure, and particulate matter (PM) were measured. We found that both refueling events and coal characteristics strongly influenced PM emissions and stove performance. Start-up and refueling events lead to increased PM emissions, with more than 98% of PM mass emitted during the 20% of the burn where coal ignition occurs. CO emissions are distributed more evenly over the burn cycle, peaking both during ignition and late in the burn cycle. We anticipate these results being useful for quantifying public

  6. A new method for quantifying the performance of EEG blind source separation algorithms by referencing a simultaneously recorded ECoG signal.

    PubMed

    Oosugi, Naoya; Kitajo, Keiichi; Hasegawa, Naomi; Nagasaka, Yasuo; Okanoya, Kazuo; Fujii, Naotaka

    2017-09-01

    Blind source separation (BSS) algorithms extract neural signals from electroencephalography (EEG) data. However, it is difficult to quantify source separation performance because there is no criterion to dissociate neural signals and noise in EEG signals. This study develops a method for evaluating BSS performance. The idea is that neural signals in EEG can be estimated by comparison with simultaneously measured electrocorticography (ECoG), because the ECoG electrodes cover the majority of the lateral cortical surface and should capture most of the original neural sources in the EEG signals. We measured real EEG and ECoG data and developed an algorithm for evaluating BSS performance. First, EEG signals are separated into EEG components using the BSS algorithm. Second, the EEG components are ranked using the correlation coefficients of the ECoG regression and the components are grouped into subsets based on their ranks. Third, canonical correlation analysis estimates how much information is shared between the subsets of the EEG components and the ECoG signals. We used our algorithm to compare the performance of BSS algorithms (PCA, AMUSE, SOBI, JADE, fastICA) via the EEG and ECoG data of anesthetized nonhuman primates. The results (Best case > JADE = fastICA > AMUSE = SOBI ≥ PCA > random separation) were common to the two subjects. To encourage the further development of better BSS algorithms, our EEG and ECoG data are available on our Web site (http://neurotycho.org/) as a common testing platform. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
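
    A minimal sketch of the third step described above (canonical correlation between a subset of EEG components and the ECoG signals), assuming the BSS components and ECoG channels are available as time-by-channel arrays; the ranking and subsetting of components are not reproduced, and the arrays below are placeholders:

      import numpy as np
      from sklearn.cross_decomposition import CCA

      # Placeholder data: rows = time samples, columns = channels/components.
      rng = np.random.default_rng(0)
      eeg_subset = rng.normal(size=(5000, 8))   # one ranked subset of BSS components
      ecog = rng.normal(size=(5000, 32))        # simultaneously recorded ECoG channels

      # Canonical correlation analysis between the component subset and the ECoG.
      n_comp = min(eeg_subset.shape[1], ecog.shape[1])
      cca = CCA(n_components=n_comp)
      u, v = cca.fit_transform(eeg_subset, ecog)

      # Canonical correlations: how much information the subset shares with the ECoG.
      canon_corrs = [np.corrcoef(u[:, k], v[:, k])[0, 1] for k in range(n_comp)]
      print(np.round(canon_corrs, 3))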

  7. Quantifying similarity of pore-geometry in nanoporous materials

    DOE PAGES

    Lee, Yongjin; Barthel, Senja D.; Dłotko, Paweł; ...

    2017-05-23

    In most applications of nanoporous materials the pore structure is as important as the chemical composition as a determinant of performance. For example, one can alter performance in applications like carbon capture or methane storage by orders of magnitude by only modifying the pore structure. For these applications it is therefore important to identify the optimal pore geometry and use this information to find similar materials. But the mathematical language and tools to identify materials with similar pore structures, but different compositions, have been lacking. We develop a pore recognition approach to quantify similarity of pore structures and classify them using topological data analysis. This then allows us to identify materials with similar pore geometries, and to screen for materials that are similar to given top-performing structures. Using methane storage as a case study, we also show that materials can be divided into topologically distinct classes requiring different optimization strategies.

  8. Quantifying the Adaptive Cycle.

    PubMed

    Angeler, David G; Allen, Craig R; Garmestani, Ahjond S; Gunderson, Lance H; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  9. Quantifying the adaptive cycle

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  10. Quantifying renewable groundwater stress with GRACE

    NASA Astrophysics Data System (ADS)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min-Hui; Reager, John T.; Famiglietti, James S.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-07-01

    Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions.
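
    A minimal sketch of the stress ratio described above; the ratio follows the abstract's definition (use divided by availability), while the regime assignment for each sign combination is an illustrative placeholder rather than the paper's exact mapping, and the example numbers are hypothetical:

      def renewable_groundwater_stress(use, mean_annual_recharge):
          """Renewable Groundwater Stress ratio = groundwater use / availability."""
          ratio = use / mean_annual_recharge
          # Regime depends on the signs of use and availability; this mapping is a
          # placeholder for illustration only.
          regime = {
              (True, True): "Variable Stress",
              (True, False): "Overstressed",
              (False, True): "Unstressed",
              (False, False): "Human-dominated Stress",
          }[(use > 0, mean_annual_recharge > 0)]
          return ratio, regime

      # Hypothetical aquifer with positive use and negative mean annual recharge.
      print(renewable_groundwater_stress(use=4.5, mean_annual_recharge=-1.2))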

  11. Quantifying renewable groundwater stress with GRACE

    PubMed Central

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Reager, John T.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-01-01

    Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions. PMID:26900185

  12. Quantifying Grain Level Stress-Strain Behavior for AM40 via Instrumented Microindentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Guang; Barker, Erin I.; Stephens, Elizabeth V.

    2016-01-01

    Microindentation is performed on hot isostatic pressed (HIP) Mg-Al (AM40) alloy samples produced by high-pressure die cast (HPDC) process for the purpose of quantifying the mechanical properties of the α-Mg grains. The process of obtaining elastic modulus and hardness from indentation load-depth curves is well established in the literature. A new inverse method is developed to extract plastic properties in this study. The method utilizes empirical yield strength-hardness relationship reported in the literature together with finite element modeling of the individual indentation. Due to the shallow depth of the indentation, indentation size effect (ISE) is taken into account when determining plastic properties. The stress versus strain behavior is determined for a series of indents. The resulting average values and standard deviations are obtained for future use as input distributions for microstructure-based property prediction of AM40.

  13. Quantifying Qualitative Learning.

    ERIC Educational Resources Information Center

    Bogus, Barbara

    1995-01-01

    A teacher at an alternative school for at-risk students discusses the development of student assessment that increases students' self-esteem, convinces students that learning is fun, and prepares students to return to traditional school settings. She found that allowing students to participate in the assessment process successfully quantified the…

  14. Dendritic network models: Improving isoscapes and quantifying influence of landscape and in-stream processes on strontium isotopes in rivers

    USGS Publications Warehouse

    Brennan, Sean R.; Torgersen, Christian E.; Hollenbeck, Jeff P.; Fernandez, Diego P.; Jensen, Carrie K; Schindler, Daniel E.

    2016-01-01

    A critical challenge for the Earth sciences is to trace the transport and flux of matter within and among aquatic, terrestrial, and atmospheric systems. Robust descriptions of isotopic patterns across space and time, called “isoscapes,” form the basis of a rapidly growing and wide-ranging body of research aimed at quantifying connectivity within and among Earth's systems. However, isoscapes of rivers have been limited by conventional Euclidean approaches in geostatistics and the lack of a quantitative framework to apportion the influence of processes driven by landscape features versus in-stream phenomena. Here we demonstrate how dendritic network models substantially improve the accuracy of isoscapes of strontium isotopes and partition the influence of hydrologic transport versus local geologic features on strontium isotope ratios in a large Alaska river. This work illustrates the analytical power of dendritic network models for the field of isotope biogeochemistry, particularly for provenance studies of modern and ancient animals.

  15. Human performance modeling for system of systems analytics: combat performance-shaping factors.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawton, Craig R.; Miller, Dwight Peter

    The US military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives. To support this goal, Sandia National Laboratories (SNL) has undertaken a program of HPM as an integral augmentation to its system-of-systems (SoS) analytics capabilities. The previous effort, reported in SAND2005-6569, evaluated the effects of soldier cognitive fatigue on SoS performance. The current effort began with a very broad survey of any performance-shaping factors (PSFs) that also might affect soldiers' performance in combat situations. The work included consideration of three different approaches to cognition modeling and how appropriate they would be for application to SoS analytics. The bulk of this report categorizes 47 PSFs into three groups (internal, external, and task-related) and provides brief descriptions of how each affects combat performance, according to the literature. The PSFs were then assembled into a matrix with 22 representative military tasks and assigned one of four levels of estimated negative impact on task performance, based on the literature. Blank versions of the matrix were then sent to two ex-military subject-matter experts to be filled out based on their personal experiences. Data analysis was performed to identify the consensus most influential PSFs. Results indicate that combat-related injury, cognitive fatigue, inadequate training, physical fatigue, thirst, stress, poor perceptual processing, and presence of chemical agents are among the PSFs with the most negative impact on combat performance.

  16. Quantifying the evolution of individual scientific impact.

    PubMed

    Sinatra, Roberta; Wang, Dashun; Deville, Pierre; Song, Chaoming; Barabási, Albert-László

    2016-11-04

    Despite the frequent use of numerous quantitative indicators to gauge the professional impact of a scientist, little is known about how scientific impact emerges and evolves in time. Here, we quantify the changes in impact and productivity throughout a career in science, finding that impact, as measured by influential publications, is distributed randomly within a scientist's sequence of publications. This random-impact rule allows us to formulate a stochastic model that uncouples the effects of productivity, individual ability, and luck and unveils the existence of universal patterns governing the emergence of scientific success. The model assigns a unique individual parameter Q to each scientist, which is stable during a career, and it accurately predicts the evolution of a scientist's impact, from the h-index to cumulative citations, and independent recognitions, such as prizes. Copyright © 2016, American Association for the Advancement of Science.
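
    In symbols, the random-impact rule and the Q parameter described above are commonly summarized by a multiplicative form (a hedged paraphrase of the abstract, not a full specification of the model): the impact c_iα of paper α by scientist i factorizes as

      \[
      c_{i\alpha} = Q_i \, p_{\alpha} ,
      \]

    where p_α is a luck factor drawn at random for each paper and Q_i is the scientist's stable individual parameter, so that a scientist's highest-impact work is equally likely to appear anywhere in the publication sequence.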

  17. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.

  18. Performance Models for the Spike Banded Linear System Solver

    DOE PAGES

    Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; ...

    2011-01-01

    With availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show excellent prediction capabilities of our model – based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated

  19. SOME IS NOT ENOUGH: QUANTIFIER COMPREHENSION IN CORTICOBASAL SYNDROME AND BEHAVIORAL VARIANT FRONTOTEMPORAL DEMENTIA

    PubMed Central

    Morgan, Brianna; Gross, Rachel; Clark, Robin; Dreyfuss, Michael; Boller, Ashley; Camp, Emily; Liang, Tsao-Wei; Avants, Brian; McMillan, Corey; Grossman, Murray

    2011-01-01

    Quantifiers are very common in everyday speech, but we know little about their cognitive basis or neural representation. The present study examined comprehension of three classes of quantifiers that depend on different cognitive components in patients with focal neurodegenerative diseases. Patients evaluated the truth-value of a sentence containing a quantifier relative to a picture illustrating a small number of familiar objects, and performance was related to MRI grey matter atrophy using voxel-based morphometry. We found that patients with corticobasal syndrome (CBS) and posterior cortical atrophy (PCA) are significantly impaired in their comprehension of Cardinal Quantifiers (e.g. “At least three birds are on the branch”), due in part to their deficit in quantity knowledge. MRI analyses related this deficit to temporal-parietal atrophy found in CBS/PCA. We also found that patients with behavioral variant frontotemporal dementia (bvFTD) are significantly impaired in their comprehension of Logical Quantifiers (e.g. “Some of the birds are on the branch”), associated with a simple form of perceptual logic, and this correlated with their deficit on executive measures. This deficit was related to disease in rostral prefrontal cortex in bvFTD. These patients were also impaired in their comprehension of Majority Quantifiers (e.g. “At least half of the birds are on the branch”), and this too was correlated with their deficit on executive measures. This was related to disease in the basal ganglia interrupting a frontal-striatal loop critical for executive functioning. These findings suggest that a large-scale frontal-parietal neural network plays a crucial role in quantifier comprehension, and that comprehension of specific classes of quantifiers may be selectively impaired in patients with focal neurodegenerative conditions in these areas. PMID:21930136

  20. Quantifiers are incrementally interpreted in context, more than less

    PubMed Central

    Urbach, Thomas P.; DeLong, Katherine A.; Kutas, Marta

    2015-01-01

    Language interpretation is often assumed to be incremental. However, our studies of quantifier expressions in isolated sentences found N400 event-related brain potential (ERP) evidence for partial but not full immediate quantifier interpretation (Urbach & Kutas, 2010). Here we tested similar quantifier expressions in pragmatically supporting discourse contexts (Alex was an unusual toddler. Most/Few kids prefer sweets/vegetables…) while participants made plausibility judgments (Experiment 1) or read for comprehension (Experiment 2). Control Experiments 3A (plausibility) and 3B (comprehension) removed the discourse contexts. Quantifiers always modulated typical and/or atypical word N400 amplitudes. However, only the real-time N400 effects in Experiment 2 mirrored the offline quantifier and typicality crossover interaction effects for plausibility ratings and cloze probabilities. We conclude that quantifier expressions can be interpreted fully and immediately, though pragmatic and task variables appear to impact the speed and/or depth of quantifier interpretation. PMID:26005285

  1. Review of Methods for Buildings Energy Performance Modelling

    NASA Astrophysics Data System (ADS)

    Krstić, Hrvoje; Teni, Mihaela

    2017-10-01

    Research presented in this paper gives a brief review of methods used for buildings energy performance modelling. The paper also gives a comprehensive review of the advantages and disadvantages of available methods as well as the input parameters used for modelling buildings energy performance. The European Directive EPBD obliges the implementation of an energy certification procedure, which gives an insight into buildings energy performance via existing energy certificate databases. Some of the methods for buildings energy performance modelling mentioned in this paper are developed by employing data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper, where the majority of buildings in the database have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper utilizes an energy certificates database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between buildings energy performance and variables from the database by using statistical dependence tests. Building energy performance in the database is represented by the building energy efficiency rate (from A+ to G), which is based on specific annual energy needs for heating for referential climatic data [kWh/(m2a)]. Independent variables in the database are the surfaces and volume of the conditioned part of the building, building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. Research results presented in this paper give an insight into the possibilities of methods used for buildings energy performance modelling. Furthermore, they give an analysis of dependencies between buildings energy performance as a dependent variable and independent variables from the database. The presented results could be used for development of new building energy performance predictive
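
    As a rough illustration of the statistical dependence tests mentioned above, the sketch below rank-correlates a specific annual heating energy need with a few candidate predictors. The data are synthetic stand-ins for the Croatian certificate database, and the variable names are hypothetical.

```python
# Minimal sketch of the kind of statistical dependence test described above:
# rank-correlating the specific annual energy need for heating with candidate
# predictors from an energy-certificate database. Synthetic data stand in for
# the real database; variable names are hypothetical.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 400
building_age = rng.uniform(0, 100, n)
shape_factor = rng.uniform(0.3, 1.2, n)
heated_area = rng.uniform(50, 2000, n)
# Heating need loosely increasing with age and shape factor (synthetic relation).
heating_need = 40 + 1.2 * building_age + 60 * shape_factor + rng.normal(0, 25, n)

for name, values in [("building_age", building_age),
                     ("shape_factor", shape_factor),
                     ("heated_area_m2", heated_area)]:
    rho, p = spearmanr(heating_need, values)
    print(f"{name:15s} rho = {rho:+.2f}  p = {p:.2g}")
```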

  2. Quantifying climatic controls on river network topology across scales

    NASA Astrophysics Data System (ADS)

    Ranjbar Moshfeghi, S.; Hooshyar, M.; Wang, D.; Singh, A.

    2017-12-01

    The branching structure of river networks is an important topologic and geomorphologic feature that depends on several factors (e.g., climate, tectonics). However, the mechanisms that produce these drainage patterns in river networks are poorly understood. In this study, we investigate the effects of varying climatic forcing on river network topology and geomorphology. For this, we select 20 catchments across the United States with different long-term climatic conditions quantified by the climate aridity index (AI), defined here as the ratio of mean annual potential evaporation (Ep) to precipitation (P), capturing variation in runoff and vegetation cover. The river networks of these catchments are extracted, using a curvature-based method, from high-resolution (1 m) digital elevation models, and several metrics such as drainage density, branching angle, and width functions are computed. We also use a multiscale-entropy-based approach to quantify the topologic irregularity and structural richness of these river networks. Our results reveal systematic impacts of climate forcing on the structure of river networks.
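
    Two of the simpler metrics named above can be written down directly; the sketch below computes the climate aridity index AI = Ep/P and a drainage density, with purely illustrative input values.

```python
# Minimal sketch of two of the metrics named above, under simple assumptions:
# the climate aridity index AI = Ep / P and drainage density Dd = total channel
# length / catchment area. Inputs are hypothetical per-catchment values.
def aridity_index(mean_annual_pet_mm, mean_annual_precip_mm):
    """AI > 1 indicates water-limited (arid) conditions, AI < 1 energy-limited."""
    return mean_annual_pet_mm / mean_annual_precip_mm

def drainage_density(total_channel_length_km, catchment_area_km2):
    """Drainage density in km of channel per square km of catchment."""
    return total_channel_length_km / catchment_area_km2

print(aridity_index(1400.0, 800.0))      # ~1.75: semi-arid catchment
print(drainage_density(245.0, 38.0))     # ~6.4 km/km^2
```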

  3. Digital Troposcatter Performance Model

    DTIC Science & Technology

    1983-12-01

    Distribution Statement A: Approved for public release; distribution unlimited. Communications, Control and Information Systems. ... for digital troposcatter communication system design is described. Propagation and modem performance are modeled. These include Path Loss and RSL ... designing digital troposcatter systems. A User's Manual Report discusses the use of the computer program TROPO. The description of the structure and logical

  4. Advanced flight design systems subsystem performance models. Sample model: Environmental analysis routine library

    NASA Technical Reports Server (NTRS)

    Parker, K. C.; Torian, J. G.

    1980-01-01

    A sample environmental control and life support model performance analysis using the environmental analysis routines library is presented. An example of a complete model set up and execution is provided. The particular model was synthesized to utilize all of the component performance routines and most of the program options.

  5. System cost performance analysis (study 2.3). Volume 1: Executive summary. [unmanned automated payload programs and program planning

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.

  6. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to compare dominating processes between reality and the model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected most of the time for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO). The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "finger print" consisting of a large set of performance measures. These finger prints were clustered into
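
    The second step (clustering per-time-step performance "finger prints") can be illustrated with a minimal sketch: moving-window error measures are stacked into a fingerprint for each time step and then clustered. The window length, the chosen measures and the synthetic series are assumptions; the FAST sensitivity step is not reproduced here.

```python
# Minimal sketch of per-time-step performance "finger prints" built from
# moving-window error measures and clustered into performance regimes. The
# window length, metrics and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

def fingerprints(obs, sim, window=30):
    """Stack simple performance measures (bias, RMSE, correlation) computed in
    a moving window centred on each time step."""
    rows = []
    half = window // 2
    for t in range(half, len(obs) - half):
        o, s = obs[t - half:t + half], sim[t - half:t + half]
        rows.append([np.mean(s - o),                       # bias
                     np.sqrt(np.mean((s - o) ** 2)),       # RMSE
                     np.corrcoef(o, s)[0, 1]])             # correlation
    return np.array(rows)

rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0, 20, 1000)) + 0.1 * rng.normal(size=1000)
sim = obs + 0.2 * rng.normal(size=1000)
F = fingerprints(obs, sim)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(F)
print(np.bincount(labels))   # how many time steps fall into each performance regime
```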

  7. On the Role of Water Models in Quantifying the Binding Free Energy of Highly Conserved Water Molecules in Proteins: The Case of Concanavalin A.

    PubMed

    Fadda, Elisa; Woods, Robert J

    2011-10-11

    The ability of ligands to displace conserved water molecules in protein binding sites is of significant interest in drug design and is particularly pertinent in the case of glycomimetic drugs. This concept was explored in previous work [Clarke et al., J. Am. Chem. Soc. 2001, 123, 12238-12247, and Kadirvelraj et al., J. Am. Chem. Soc. 2008, 130, 16933-16942] for a highly conserved water molecule located in the binding site of the prototypic carbohydrate-binding protein Concanavalin A (Con A). A synthetic ligand was designed with the aim of displacing such water. While the synthetic ligand bound to Con A in an analogous manner to that of the natural ligand, crystallographic analysis demonstrated that it did not displace the conserved water. In order to quantify the affinity of this particular water for the Con A surface, we report here the calculated standard binding free energy for this water in both ligand-bound and free Con A, employing three popular water models: TIP3P, TIP4P, and TIP5P. Although each model was developed to perform well in simulations of bulk-phase water, the computed binding energies for the isolated water molecule displayed a high sensitivity to the model. Both molecular dynamics simulation and free energy results indicate that the choice of water model may greatly influence the characterization of surface water molecules as conserved (TIP5P) or not (TIP3P) in protein binding sites, an observation of considerable significance to rational drug design. Structural and theoretical aspects at the basis of the different behaviors are identified and discussed.

  8. Quantifying temporal isolation: a modelling approach assessing the effect of flowering time differences on crop-to-weed pollen flow in sunflower

    PubMed Central

    Roumet, Marie; Cayre, Adeline; Latreille, Muriel; Muller, Marie-Hélène

    2015-01-01

    Flowering time divergence can be a crucial component of reproductive isolation between sympatric populations, but few studies have quantified its actual contribution to the reduction of gene flow. In this study, we aimed at estimating pollen-mediated gene flow between cultivated sunflower and a weedy conspecific sunflower population growing in the same field, and at quantifying how it is affected by the weeds' flowering time. For that purpose, we extended an existing mating model by including a temporal distance (i.e. flowering time difference between potential parents) effect on mating probabilities. Using phenological and genotypic data gathered on the crop and on a sample of the weedy population and its offspring, we estimated an average hybridization rate of approximately 10%. This rate varied strongly, from 30% on average for weeds flowering at the crop flowering peak to 0% when the crop finished flowering, and was affected by the local density of weeds. Our results also suggested the occurrence of other factors limiting crop-to-weed gene flow. This level of gene flow and its dependence on flowering time might influence the evolutionary fate of weedy sunflower populations sympatric to their crop relative. PMID:25667603

  9. Quantifying CO2 Emissions From Individual Power Plants From Space

    NASA Astrophysics Data System (ADS)

    Nassar, Ray; Hill, Timothy G.; McLinden, Chris A.; Wunch, Debra; Jones, Dylan B. A.; Crisp, David

    2017-10-01

    In order to better manage anthropogenic CO2 emissions, improved methods of quantifying emissions are needed at all spatial scales from the national level down to the facility level. Although the Orbiting Carbon Observatory 2 (OCO-2) satellite was not designed for monitoring power plant emissions, we show that in some cases, CO2 observations from OCO-2 can be used to quantify daily CO2 emissions from individual middle- to large-sized coal power plants by fitting the data to plume model simulations. Emission estimates for U.S. power plants are within 1-17% of reported daily emission values, enabling application of the approach to international sites that lack detailed emission information. This affirms that a constellation of future CO2 imaging satellites, optimized for point sources, could monitor emissions from individual power plants to support the implementation of climate policies.
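
    A much simpler stand-in for the plume-model fitting described above is a cross-sectional mass-balance estimate: wind speed times the integrated column enhancement across the plume. The sketch below uses hypothetical enhancement values, pixel width and wind speed.

```python
# Minimal sketch of a cross-sectional mass-balance estimate, a simpler stand-in
# for the plume-model fitting used in the paper: the emission rate is the wind
# speed times the across-plume integral of the CO2 column enhancement.
# All input values are hypothetical.
import numpy as np

def emission_rate_kg_s(col_enhancement_kg_m2, pixel_width_m, wind_speed_m_s):
    """Integrate the column enhancement (kg CO2 per m^2, background removed)
    across the plume transect and multiply by the advecting wind speed."""
    line_density_kg_m = np.sum(col_enhancement_kg_m2) * pixel_width_m   # kg per metre along wind
    return line_density_kg_m * wind_speed_m_s

enh = np.array([0.0, 0.002, 0.006, 0.009, 0.005, 0.001, 0.0])  # kg/m^2 across the transect
rate = emission_rate_kg_s(enh, pixel_width_m=2000.0, wind_speed_m_s=4.0)
print(f"{rate * 86400 / 1e6:.1f} kt CO2 per day")   # convert kg/s to kt/day
```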

  10. An Exploration of Cognitive Agility as Quantified by Attention Allocation in a Complex Environment

    DTIC Science & Technology

    2017-03-01

    Cognitive agility was quantified by eye-tracking data collected while subjects played a military-relevant cognitive agility computer game (Make Goal), to determine whether certain patterns are associated with effective performance. Reported comparisons include the Experimental Group versus the Control Group, and High versus Low Performers, on eye tracking and game performance measures.

  11. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  12. Practical Techniques for Modeling Gas Turbine Engine Performance

    NASA Technical Reports Server (NTRS)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
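
    The thermodynamic starting point mentioned above, the Brayton cycle, reduces to a few lines of arithmetic in its ideal, cold-air-standard form; component maps, losses and real-gas effects from the paper are deliberately omitted, and the station values below are illustrative.

```python
# Minimal sketch of the ideal Brayton cycle arithmetic that underlies the kind
# of engine model described above (component maps, losses and real-gas effects
# are deliberately omitted). Station temperatures and inputs are illustrative.
GAMMA = 1.4          # ratio of specific heats for air
CP = 1005.0          # J/(kg K)

def brayton(t1_k, pressure_ratio, t3_k):
    """Return compressor exit temperature, net specific work and thermal
    efficiency for an ideal (isentropic, cold-air-standard) Brayton cycle."""
    exp = (GAMMA - 1.0) / GAMMA
    t2 = t1_k * pressure_ratio ** exp            # isentropic compression
    t4 = t3_k / pressure_ratio ** exp            # isentropic expansion
    w_net = CP * ((t3_k - t4) - (t2 - t1_k))     # turbine work minus compressor work
    eta = 1.0 - pressure_ratio ** (-exp)         # ideal-cycle thermal efficiency
    return t2, w_net, eta

t2, w_net, eta = brayton(t1_k=288.0, pressure_ratio=12.0, t3_k=1400.0)
print(f"T2 = {t2:.0f} K, net work = {w_net/1e3:.0f} kJ/kg, efficiency = {eta:.1%}")
```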

  13. Quantifying Evaporation in a Permeable Pavement System

    EPA Science Inventory

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  14. Quantifying differences in land use emission estimates implied by definition discrepancies

    NASA Astrophysics Data System (ADS)

    Stocker, B. D.; Joos, F.

    2015-11-01

    The quantification of CO2 emissions from anthropogenic land use and land use change (eLUC) is essential to understand the drivers of the atmospheric CO2 increase and to inform climate change mitigation policy. Reported values in synthesis reports are commonly derived from different approaches (observation-driven bookkeeping and process-modelling) but recent work has emphasized that inconsistencies between methods may imply substantial differences in eLUC estimates. However, a consistent quantification is lacking and no concise modelling protocol for the separation of primary and secondary components of eLUC has been established. Here, we review differences of eLUC quantification methods and apply an Earth System Model (ESM) of Intermediate Complexity to quantify them. We find that the magnitude of effects due to merely conceptual differences between ESM and offline vegetation model-based quantifications is ~ 20 % for today. Under a future business-as-usual scenario, differences tend to increase further due to slowing land conversion rates and an increasing impact of altered environmental conditions on land-atmosphere fluxes. We establish how coupled Earth System Models may be applied to separate secondary component fluxes of eLUC arising from the replacement of potential C sinks/sources and the land use feedback and show that secondary fluxes derived from offline vegetation models are conceptually and quantitatively not identical to either, nor their sum. Therefore, we argue that synthesis studies should resort to the "least common denominator" of different methods, following the bookkeeping approach where only primary land use emissions are quantified under the assumption of constant environmental boundary conditions.

  15. Multi-Scale Multi-Domain Model | Transportation Research | NREL

    Science.gov Websites

    NREL's Multi-Scale Multi-Domain (MSMD) model quantifies the impacts of the electrical/thermal pathway. Macroscopic design factors and highly dynamic environmental conditions significantly influence the design of affordable, long-lasting, high-performing, and safe large battery systems. The MSMD framework

  16. Metrics to quantify the importance of mixing state for CCN activity

    DOE PAGES

    Ching, Joseph; Fast, Jerome; West, Matthew; ...

    2017-06-21

    It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75 %. For more externally mixed populations (χ below 20 %) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40 % to about 150 %, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.
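
    A minimal sketch of the mixing state index χ, as we understand the Riemer and West (2013) definition (χ = (D_α − 1)/(D_γ − 1), with D_α the mass-weighted average per-particle species diversity and D_γ the bulk diversity), is given below; the per-particle species masses are toy values.

```python
# Minimal sketch of the mixing state index chi of Riemer and West (2013) as we
# understand it: chi = (D_alpha - 1) / (D_gamma - 1), where D_alpha is the
# mass-weighted average per-particle species diversity and D_gamma is the bulk
# species diversity. `masses[i, a]` is the (toy) mass of species a in particle i.
import numpy as np

def mixing_state_index(masses):
    particle_mass = masses.sum(axis=1)
    p_i = particle_mass / particle_mass.sum()          # particle mass fractions
    p_ia = masses / particle_mass[:, None]             # per-particle species mass fractions
    with np.errstate(divide="ignore", invalid="ignore"):
        h_i = -np.nansum(np.where(p_ia > 0, p_ia * np.log(p_ia), 0.0), axis=1)
    d_alpha = np.exp(np.sum(p_i * h_i))                # average per-particle diversity
    p_a = masses.sum(axis=0) / masses.sum()            # bulk species mass fractions
    h_gamma = -np.sum(np.where(p_a > 0, p_a * np.log(p_a), 0.0))
    d_gamma = np.exp(h_gamma)
    return (d_alpha - 1.0) / (d_gamma - 1.0)

external = np.array([[1.0, 0.0], [0.0, 1.0]])          # two pure particles -> chi = 0
internal = np.array([[0.5, 0.5], [0.5, 0.5]])          # identical mixtures  -> chi = 1
print(mixing_state_index(external), mixing_state_index(internal))
```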

  17. Metrics to quantify the importance of mixing state for CCN activity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ching, Joseph; Fast, Jerome; West, Matthew

    It is commonly assumed that models are more prone to errors in predicted cloud condensation nuclei (CCN) concentrations when the aerosol populations are externally mixed. In this work we investigate this assumption by using the mixing state index (χ) proposed by Riemer and West (2013) to quantify the degree of external and internal mixing of aerosol populations. We combine this metric with particle-resolved model simulations to quantify error in CCN predictions when mixing state information is neglected, exploring a range of scenarios that cover different conditions of aerosol aging. We show that mixing state information does indeed become unimportant for more internally mixed populations, more precisely for populations with χ larger than 75 %. For more externally mixed populations (χ below 20 %) the relationship of χ and the error in CCN predictions is not unique and ranges from lower than -40 % to about 150 %, depending on the underlying aerosol population and the environmental supersaturation. We explain the reasons for this behavior with detailed process analyses.

  18. Advanced Performance Modeling with Combined Passive and Active Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dovrolis, Constantine; Sim, Alex

    2015-04-15

    To improve the efficiency of resource utilization and scheduling of scientific data transfers on high-speed networks, the "Advanced Performance Modeling with combined passive and active monitoring" (APM) project investigates and models a general-purpose, reusable and expandable network performance estimation framework. The predictive estimation model and the framework will be helpful in optimizing the performance and utilization of networks as well as sharing resources with predictable performance for scientific collaborations, especially in data intensive applications. Our prediction model utilizes historical network performance information from various network activity logs as well as live streaming measurements from network peering devices. Historical network performance information is used without putting extra load on the resources by active measurement collection. Performance measurements collected by active probing are used judiciously for improving the accuracy of predictions.

  19. Quantifying, Analysing and Modeling Rockfall Activity in two Different Alpine Catchments using Terrestrial Laserscanning

    NASA Astrophysics Data System (ADS)

    Haas, F.; Heckmann, T.; Wichmann, V.; Becht, M.

    2011-12-01

    Rockfall processes play a major role as a natural hazard, especially if the rock faces are located close to infrastructure. However, these processes also cause the retreat of steep rock faces by weathering and the growth of the corresponding talus cones by routing debris down onto them. This process therefore also plays an important role for the geomorphic system and the sediment budget of high mountain catchments. The presented investigation deals with the use of TLS for quantification and analysis of rockfall activity in two study areas located in the Alps. The rockfaces of both catchments and the corresponding talus cones were scanned twice a year from different distances. Figure 1 shows an example of the spatial distribution of surface changes at a rockface in the Northern Dolomites between 2008 and 2010. The measured surface changes at this location yield a mean rockwall retreat of 0.04 cm/a. High-resolution TLS data are not only applicable to quantifying rockfall activity; they can also be used to characterize the surface properties of the corresponding talus cones and the runout distances of larger boulders, and this can lead to a better process understanding. Therefore, the surface roughness of talus cones in both catchments was characterized from the TLS point clouds by a GIS approach. The resulting detailed maps of the surface conditions on the talus cones were used to improve an existing process model which is able to model runout distances on the talus cones using distributed friction parameters. Besides this, the investigations showed that the shape of the boulders also has an influence on the runout distance. The interrelationships between rock fragment morphology and runout distance of over 600 single boulders were therefore analysed at the site of a large rockfall event. The submitted poster will show the results of the quantification of the rockfall activity and additionally it will show the results of the analyses of the talus

  20. Quantifying the bending of bilayer temperature-sensitive hydrogels

    NASA Astrophysics Data System (ADS)

    Dong, Chenling; Chen, Bin

    2017-04-01

    Stimuli-responsive hydrogels can serve as manipulators, including grippers, sensors, etc., where structures can undergo significant bending. Here, a finite-deformation theory is developed to quantify the evolution of the curvature of bilayer temperature-sensitive hydrogels when subjected to a temperature change. Analysis of the theory indicates that there is an optimal thickness ratio to acquire the largest curvature in the bilayer and also suggests that the sign or the magnitude of the curvature can be significantly affected by pre-stretches or small pores in the bilayer. This study may provide important guidelines in fabricating temperature-responsive bilayers with desirable mechanical performance.

  1. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
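
    The attribution question above is what variance-based sensitivity analysis answers; the sketch below shows a generic Saltelli-style Monte Carlo estimator of first-order and total-effect Sobol' indices on a cheap toy function, since the real analysis requires the LISFLOOD-FP runs themselves.

```python
# Minimal sketch of Saltelli-style Monte Carlo estimation of first-order and
# total-effect Sobol' indices, illustrated on a cheap toy function rather than
# the LISFLOOD-FP runs described above (those require the actual hydraulic model).
import numpy as np

def toy_model(x):                       # stand-in for a model output, e.g. flooded area
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

def sobol_indices(model, d, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    s1, st = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # A with column i taken from B
        fABi = model(ABi)
        s1[i] = np.mean(fB * (fABi - fA)) / var          # first-order index
        st[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index
    return s1, st

s1, st = sobol_indices(toy_model, d=3)
print("first-order:", np.round(s1, 2), " total:", np.round(st, 2))
```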

  2. Quantifying chemical uncertainties in simulations of the ISM

    NASA Astrophysics Data System (ADS)

    Glover, Simon

    2018-06-01

    The ever-increasing power of large parallel computers now makes it possible to include increasingly sophisticated chemical models in three-dimensional simulations of the interstellar medium (ISM). This allows us to study the role that chemistry plays in the thermal balance of a realistically-structured, turbulent ISM, as well as enabling us to generate detailed synthetic observations of important atomic or molecular tracers. However, one major constraint on the accuracy of these models is the accuracy with which the input chemical rate coefficients are known. Uncertainties in these chemical rate coefficients inevitably introduce uncertainties into the model predictions. In this talk, I will review some of the methods we can use to quantify these uncertainties and to identify the key reactions where improved chemical data are most urgently required. I will also discuss a few examples, ranging from the local ISM to the high-redshift universe.

  3. Quantifying the propagation of distress and mental disorders in social networks.

    PubMed

    Scatà, Marialisa; Di Stefano, Alessandro; La Corte, Aurelio; Liò, Pietro

    2018-03-22

    The heterogeneity of human beings leads them to think and react differently to social phenomena. Awareness and homophily drive people to weigh interactions in social multiplex networks, influencing a potential contagion effect. To quantify the impact of heterogeneity on spreading dynamics, we propose a model of coevolution of social contagion and awareness, through the introduction of statistical estimators, in a weighted multiplex network. The multiplexity of networked individuals may trigger propagation enough to produce effects among vulnerable subjects experiencing distress or mental disorders, which represent some of the strongest predictors of suicidal behaviours. Exposure to suicide is emotionally harmful, since talking about it may give support or inadvertently promote it. To disclose the complex effect of overlapping awareness on the spread of suicidal ideation among disordered people, we also introduce a data-driven approach by integrating different types of data. Our modelling approach unveils the relationship between the propagation of distress and mental disorders and the spread of suicidal ideation, shedding light on the role of awareness in a social network for suicide prevention. The proposed model is able to quantify the impact of overlapping awareness on suicidal ideation spreading, and our findings demonstrate that it plays a dual role on contagion, either reinforcing or delaying the contagion outbreak.

  4. Characterization of autoregressive processes using entropic quantifiers

    NASA Astrophysics Data System (ADS)

    Traversaro, Francisco; Redelico, Francisco O.

    2018-01-01

    The aim of the contribution is to introduce a novel information plane, the causal-amplitude informational plane. As previous works seem to indicate, the Bandt and Pompe methodology for estimating entropy does not allow one to distinguish between probability distributions, which could be fundamental for simulation or probability analysis purposes. Once a time series is identified as stochastic by the causal complexity-entropy informational plane, the novel causal-amplitude plane gives a deeper understanding of the time series, quantifying both the autocorrelation strength and the probability distribution of the data extracted from the generating processes. Two examples are presented, one from a climate change model and the other from financial markets.
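
    One ingredient of the informational planes mentioned above is the Bandt and Pompe permutation entropy; a minimal sketch is given below (the paper's causal-amplitude plane itself is not reproduced, and the embedding order, delay and test series are illustrative).

```python
# Minimal sketch of the Bandt-Pompe permutation entropy that underlies the
# complexity-entropy planes mentioned above (the new causal-amplitude plane of
# the paper is not reproduced here). Embedding order and delay are the usual
# free parameters.
from collections import Counter
from math import factorial, log

import numpy as np

def permutation_entropy(series, order=3, delay=1):
    """Normalised Shannon entropy of ordinal patterns of length `order`."""
    counts = Counter()
    for t in range(len(series) - (order - 1) * delay):
        window = series[t : t + order * delay : delay]
        counts[tuple(np.argsort(window))] += 1
    total = sum(counts.values())
    h = -sum((c / total) * log(c / total) for c in counts.values())
    return h / log(factorial(order))   # normalise to [0, 1]

rng = np.random.default_rng(1)
noise = rng.normal(size=5000)                 # white noise: entropy close to 1
ar1 = np.zeros(5000)
for t in range(1, 5000):
    ar1[t] = 0.9 * ar1[t - 1] + rng.normal()  # autocorrelated series: lower entropy
print(round(permutation_entropy(noise), 3), round(permutation_entropy(ar1), 3))
```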

  5. Modeling Ni-Cd performance. Planned alterations to the Goddard battery model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1986-01-01

    The Goddard Space Flight Center (GSFC) currently has a preliminary computer model to simulate Nickel Cadmium (Ni-Cd) battery performance. The basic methodology of the model was described in the paper entitled Fundamental Algorithms of the Goddard Battery Model. At present, the model is undergoing alterations to increase its efficiency, accuracy, and generality. A review of the present battery model is given, and the planned changes to the model are described.

  6. Soil Methanotrophy Model (MeMo v1.0): a process-based model to quantify global uptake of atmospheric methane by soil

    NASA Astrophysics Data System (ADS)

    Murguia-Flores, Fabiola; Arndt, Sandra; Ganesan, Anita L.; Murray-Tortarolo, Guillermo; Hornibrook, Edward R. C.

    2018-06-01

    Soil bacteria known as methanotrophs are the sole biological sink for atmospheric methane (CH4), a potent greenhouse gas that is responsible for ˜ 20 % of the human-driven increase in radiative forcing since pre-industrial times. Soil methanotrophy is controlled by a plethora of factors, including temperature, soil texture, moisture and nitrogen content, resulting in spatially and temporally heterogeneous rates of soil methanotrophy. As a consequence, the exact magnitude of the global soil sink, as well as its temporal and spatial variability, remains poorly constrained. We developed a process-based model (Methanotrophy Model; MeMo v1.0) to simulate and quantify the uptake of atmospheric CH4 by soils at the global scale. MeMo builds on previous models by Ridgwell et al. (1999) and Curry (2007) by introducing several advances, including (1) a general analytical solution of the one-dimensional diffusion-reaction equation in porous media, (2) a refined representation of nitrogen inhibition on soil methanotrophy, (3) updated factors governing the influence of soil moisture and temperature on CH4 oxidation rates and (4) the ability to evaluate the impact of autochthonous soil CH4 sources on uptake of atmospheric CH4. We show that the improved structural and parametric representation of key drivers of soil methanotrophy in MeMo results in a better fit to observational data. A global simulation of soil methanotrophy for the period 1990-2009 using MeMo yielded an average annual sink of 33.5 ± 0.6 Tg CH4 yr-1. Warm and semi-arid regions (tropical deciduous forest and open shrubland) had the highest CH4 uptake rates of 602 and 518 mg CH4 m-2 yr-1, respectively. In these regions, favourable annual soil moisture content ( ˜ 20 % saturation) and low seasonal temperature variations (variations < ˜ 6 °C) provided optimal conditions for soil methanotrophy and soil-atmosphere gas exchange. In contrast to previous model analyses, but in agreement with recent observational data
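
    As a point of reference for the analytical treatment mentioned above, the textbook limiting case of the one-dimensional diffusion-reaction problem (steady state, first-order uptake, constant effective diffusivity, semi-infinite column) has a closed-form solution; MeMo's general solution is more elaborate than this.

```latex
% Textbook limiting case (not MeMo's more general solution): steady-state,
% first-order CH4 uptake with constant effective diffusivity D in a
% semi-infinite soil column, with C(0) = C_atm and C bounded at depth.
D \frac{d^2 C}{dz^2} - k\,C = 0
\quad\Longrightarrow\quad
C(z) = C_{\mathrm{atm}}\, e^{-z\sqrt{k/D}},
\qquad
F_{\mathrm{uptake}} = -D\left.\frac{dC}{dz}\right|_{z=0} = C_{\mathrm{atm}}\sqrt{kD}.
```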

  7. Quantifying 10 years of improved earthquake-monitoring performance in the Caribbean region

    USGS Publications Warehouse

    McNamara, Daniel E.; Hillebrandt-Andrade, Christa; Saurel, Jean-Marie; Huerfano-Moreno, V.; Lynch, Lloyd

    2015-01-01

    Over 75 tsunamis have been documented in the Caribbean and adjacent regions during the past 500 years. Since 1500, at least 4484 people are reported to have perished in these killer waves. Hundreds of thousands are currently threatened along the Caribbean coastlines. Were a great tsunamigenic earthquake to occur in the Caribbean region today, the effects would potentially be catastrophic due to an increasingly vulnerable region that has seen significant population increases in the past 40–50 years and currently hosts an estimated 500,000 daily beach visitors from North America and Europe, a majority of whom are not likely aware of tsunami and earthquake hazards. Following the magnitude 9.1 Sumatra–Andaman Islands earthquake of 26 December 2004, the United Nations Educational, Scientific and Cultural Organization (UNESCO) Intergovernmental Coordination Group (ICG) for the Tsunami and other Coastal Hazards Early Warning System for the Caribbean and Adjacent Regions (CARIBE‐EWS) was established and developed minimum performance standards for the detection and analysis of earthquakes. In this study, we model earthquake‐magnitude detection threshold and P‐wave detection time and demonstrate that the requirements established by the UNESCO ICG CARIBE‐EWS are met with 100% of the network operating. We demonstrate that earthquake‐monitoring performance in the Caribbean Sea region has improved significantly in the past decade as the number of real‐time seismic stations available to the National Oceanic and Atmospheric Administration tsunami warning centers have increased. We also identify weaknesses in the current international network and provide guidance for selecting the optimal distribution of seismic stations contributed from existing real‐time broadband national networks in the region.

  8. Quantifying polypeptide conformational space: sensitivity to conformation and ensemble definition.

    PubMed

    Sullivan, David C; Lim, Carmay

    2006-08-24

    Quantifying the density of conformations over phase space (the conformational distribution) is needed to model important macromolecular processes such as protein folding. In this work, we quantify the conformational distribution for a simple polypeptide (N-mer polyalanine) using the cumulative distribution function (CDF), which gives the probability that two randomly selected conformations are separated by less than a "conformational" distance and whose inverse gives conformation counts as a function of conformational radius. An important finding is that the conformation counts obtained by the CDF inverse depend critically on the assignment of a conformation's distance span and the ensemble (e.g., unfolded state model): varying ensemble and conformation definition (1 → 2 Å) varies the CDF-based conformation counts for Ala50 from 10^11 to 10^69. In particular, relatively short molecular dynamics (MD) relaxation of Ala50's random-walk ensemble reduces the number of conformers from 10^55 to 10^14 (using a 1 Å root-mean-square-deviation radius conformation definition), pointing to potential disconnections in comparing the results from simplified models of unfolded proteins with those from all-atom MD simulations. Explicit waters are found to roughen the landscape considerably. Under some common conformation definitions, the results herein provide (i) an upper limit to the number of accessible conformations that compose unfolded states of proteins, (ii) the optimal clustering radius/conformation radius for counting conformations for a given energy and solvent model, (iii) a means of comparing various studies, and (iv) an assessment of the applicability of random search in protein folding.
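
    The CDF construction described above can be sketched in a few lines: estimate the probability that two randomly chosen conformations lie within a radius r and take its inverse as a rough conformation count. A toy random ensemble stands in for the polyalanine conformations, and the radii are arbitrary.

```python
# Minimal sketch of the CDF idea described above: estimate the probability that
# two randomly chosen conformations lie within a distance r, and take 1/CDF(r)
# as a rough count of distinguishable conformations at that radius. A toy
# random ensemble stands in for the polyalanine conformations.
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(500, 30))           # 500 toy "conformations" in 30-D

def cdf_and_counts(conformations, radii):
    n = len(conformations)
    i, j = np.triu_indices(n, k=1)
    dists = np.linalg.norm(conformations[i] - conformations[j], axis=1)
    cdf = np.array([(dists <= r).mean() for r in radii])
    counts = np.where(cdf > 0, 1.0 / cdf, np.inf)   # inverse CDF as a conformation count
    return cdf, counts

radii = [4.0, 6.0, 8.0]
cdf, counts = cdf_and_counts(ensemble, radii)
for r, c, k in zip(radii, cdf, counts):
    print(f"r = {r:.1f}: CDF = {c:.4f}, ~{k:.0f} conformations")
```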

  9. Quantitative Rheological Model Selection

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2014-11-01

    The more parameters a rheological model has, the better it will reproduce available data, though this does not mean that it is necessarily a better justified model. Good fits are only part of model selection. We employ a Bayesian inference approach that quantifies model suitability by balancing closeness to data against both the number of model parameters and their a priori uncertainty. The penalty depends upon the prior-to-calibration expectation of the viable range of values that model parameters might take, which we discuss as an essential aspect of the selection criterion. Models that are physically grounded are usually accompanied by tighter physical constraints on their respective parameters. The analysis reflects a basic principle: models grounded in physics can be expected to enjoy greater generality and perform better away from where they are calibrated. In contrast, purely empirical models can provide comparable fits, but the model selection framework penalizes their a priori uncertainty. We demonstrate the approach by selecting the best-justified number of modes in a multi-mode Maxwell description of PVA-Borax. We also quantify the relative merits of the Maxwell model compared to power-law fits and purely empirical fits for PVA-Borax, a viscoelastic liquid, and gluten.
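
    To illustrate the fit-versus-complexity trade-off on the multi-mode Maxwell example, the sketch below scores fits with the Bayesian information criterion, a simpler stand-in for the full Bayesian evidence used in the paper; the storage-modulus data are synthetic, not the PVA-Borax measurements.

```python
# Minimal sketch of the fit-versus-complexity trade-off discussed above, using
# the Bayesian information criterion as a simpler stand-in for the full Bayesian
# evidence used in the paper. Synthetic storage-modulus data play the role of
# the real measurements.
import numpy as np
from scipy.optimize import curve_fit

def g_prime(omega, *params):
    """Storage modulus of a multi-mode Maxwell model; params = g1, tau1, g2, tau2, ..."""
    g = np.zeros_like(omega)
    for gk, tk in zip(params[0::2], params[1::2]):
        wt = omega * tk
        g += gk * wt ** 2 / (1.0 + wt ** 2)
    return g

rng = np.random.default_rng(0)
omega = np.logspace(-2, 2, 40)
data = g_prime(omega, 100.0, 1.0, 30.0, 0.05) * (1 + 0.05 * rng.normal(size=omega.size))

for modes in (1, 2, 3):
    p0 = [v for m in range(modes) for v in (50.0, 10.0 ** -m)]   # staggered initial relaxation times
    popt, _ = curve_fit(g_prime, omega, data, p0=p0, maxfev=20000)
    rss = np.sum((data - g_prime(omega, *popt)) ** 2)
    k, n = 2 * modes, omega.size
    bic = n * np.log(rss / n) + k * np.log(n)      # lower BIC = better-justified model
    print(f"{modes} mode(s): BIC = {bic:.1f}")
```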

  10. The Quantified Self: Fundamental Disruption in Big Data Science and Biological Discovery.

    PubMed

    Swan, Melanie

    2013-06-01

    A key contemporary trend emerging in big data science is the quantified self (QS)-individuals engaged in the self-tracking of any kind of biological, physical, behavioral, or environmental information as n=1 individuals or in groups. There are opportunities for big data scientists to develop new models to support QS data collection, integration, and analysis, and also to lead in defining open-access database resources and privacy standards for how personal data is used. Next-generation QS applications could include tools for rendering QS data meaningful in behavior change, establishing baselines and variability in objective metrics, applying new kinds of pattern recognition techniques, and aggregating multiple self-tracking data streams from wearable electronics, biosensors, mobile phones, genomic data, and cloud-based services. The long-term vision of QS activity is that of a systemic monitoring approach where an individual's continuous personal information climate provides real-time performance optimization suggestions. There are some potential limitations related to QS activity-barriers to widespread adoption and a critique regarding scientific soundness-but these may be overcome. One interesting aspect of QS activity is that it is fundamentally a quantitative and qualitative phenomenon since it includes both the collection of objective metrics data and the subjective experience of the impact of these data. Some of this dynamic is being explored as the quantified self is becoming the qualified self in two new ways: by applying QS methods to the tracking of qualitative phenomena such as mood, and by understanding that QS data collection is just the first step in creating qualitative feedback loops for behavior change. In the long-term future, the quantified self may become additionally transformed into the extended exoself as data quantification and self-tracking enable the development of new sense capabilities that are not possible with ordinary senses. The

  11. Research and development on performance models of thermal imaging systems

    NASA Astrophysics Data System (ADS)

    Wang, Ji-hui; Jin, Wei-qi; Wang, Xia; Cheng, Yi-nan

    2009-07-01

    Traditional ACQUIRE models predict the discrimination task performance of detection, orientation, recognition and identification for military targets based upon the minimum resolvable temperature difference (MRTD) and the Johnson criteria for thermal imaging systems (TIS). With the development of focal plane array (FPA) detectors and digital image processing technology, the Johnson criteria are generally pessimistic for performance prediction of sampled imagers. The triangle orientation discrimination threshold (TOD) model, the minimum temperature difference perceived (MTDP)/thermal range model (TRM3), and the target task performance (TTP) metric have been developed to predict the performance of sampled imagers; the TTP metric in particular provides better accuracy than the Johnson criteria. In this paper, the performance models above are described; channel width metrics are presented to describe overall performance, including modulation transfer function (MTF) channel width for high signal-to-noise ratio (SNR) optoelectronic imaging systems and MRTD channel width for low SNR TIS; open questions in the performance assessment of TIS are indicated; finally, the development directions of performance models for TIS are discussed.
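
    The classic ACQUIRE-style calculation referred to above can be sketched directly: resolvable cycles across the target feed an empirical target transfer probability function (TTPF). The N50 values, target size and sensor limiting frequency below are illustrative, and this is not the TOD or TTP metric itself.

```python
# Minimal sketch of the classic ACQUIRE-style calculation referred to above:
# resolvable cycles across a target and the empirical target transfer
# probability function (TTPF). N50 values and target parameters are
# illustrative, and this is not the TOD or TTP metric itself.
def resolvable_cycles(critical_dim_m, limiting_freq_cyc_per_mrad, range_km):
    """Cycles resolved across the target critical dimension at a given range."""
    angular_size_mrad = critical_dim_m / range_km      # small-angle approximation, m/km = mrad
    return limiting_freq_cyc_per_mrad * angular_size_mrad

def ttpf(n_cycles, n50):
    """Probability of accomplishing the discrimination task (Johnson-style TTPF)."""
    e = 2.7 + 0.7 * (n_cycles / n50)
    x = (n_cycles / n50) ** e
    return x / (1.0 + x)

N50 = {"detection": 0.75, "recognition": 3.0, "identification": 6.0}
for rng_km in (1.0, 3.0, 5.0):
    n = resolvable_cycles(critical_dim_m=2.3, limiting_freq_cyc_per_mrad=4.0, range_km=rng_km)
    probs = {task: round(ttpf(n, v), 2) for task, v in N50.items()}
    print(rng_km, "km:", probs)
```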

  12. Measurement-based reliability/performability models

    NASA Technical Reports Server (NTRS)

    Hsueh, Mei-Chen

    1987-01-01

    Measurement-based models based on real error-data collected on a multiprocessor system are described. Model development from the raw error-data to the estimation of cumulative reward is also described. A workload/reliability model is developed based on low-level error and resource usage data collected on an IBM 3081 system during its normal operation in order to evaluate the resource usage/error/recovery process in a large mainframe system. Thus, both normal and erroneous behavior of the system are modeled. The results provide an understanding of the different types of errors and recovery processes. The measured data show that the holding times in key operational and error states are not simple exponentials and that a semi-Markov process is necessary to model the system behavior. A sensitivity analysis is performed to investigate the significance of using a semi-Markov process, as opposed to a Markov process, to model the measured system.

  13. A calibrated Monte Carlo approach to quantify the impacts of misorientation and different driving forces on texture development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liangzhe Zhang; Anthony D. Rollett; Timothy Bartel

    2012-02-01

    A calibrated Monte Carlo (cMC) approach, which quantifies grain boundary kinetics within a generic setting, is presented. The influence of misorientation is captured by adding a scaling coefficient in the spin flipping probability equation, while the contribution of different driving forces is weighted using a partition function. The calibration process relies on the established parametric links between Monte Carlo (MC) and sharp-interface models. The cMC algorithm quantifies microstructural evolution under complex thermomechanical environments and remedies some of the difficulties associated with conventional MC models. After validation, the cMC approach is applied to quantify the texture development of polycrystalline materials with influences of misorientation and inhomogeneous bulk energy across grain boundaries. The results are in good agreement with theory and experiments.
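
    The scaled spin-flip rule described above can be illustrated with a generic 2-D Potts grain-growth step whose acceptance probability is multiplied by a misorientation-dependent mobility factor; the mobility function, temperature and lattice below are invented for illustration and are not the authors' calibrated cMC implementation.

```python
# Minimal sketch of a scaled spin-flip rule: a 2-D Potts grain-growth step whose
# acceptance probability is multiplied by a misorientation-dependent mobility
# factor. Generic illustration only; the mobility function is invented.
import numpy as np

rng = np.random.default_rng(0)
L, Q, KT = 64, 20, 0.5
spins = rng.integers(0, Q, size=(L, L))
orientations = rng.uniform(0.0, 62.8, size=Q)          # one scalar "orientation" per grain ID

def site_energy(s, i, j, q):
    nbrs = [s[(i + 1) % L, j], s[(i - 1) % L, j], s[i, (j + 1) % L], s[i, (j - 1) % L]]
    return sum(1 for nb in nbrs if nb != q)             # unlike-neighbour bond count

def mobility(q_old, q_new):
    """Hypothetical scaling coefficient: low-misorientation boundaries move slowly."""
    d_theta = abs(orientations[q_old] - orientations[q_new])
    return min(1.0, d_theta / 15.0)

def mc_step(s):
    i, j = rng.integers(L), rng.integers(L)
    q_old, q_new = s[i, j], rng.integers(Q)
    dE = site_energy(s, i, j, q_new) - site_energy(s, i, j, q_old)
    p = mobility(q_old, q_new) * (1.0 if dE <= 0 else np.exp(-dE / KT))
    if rng.random() < p:
        s[i, j] = q_new

for _ in range(10000):
    mc_step(spins)
print("distinct grain IDs remaining:", len(np.unique(spins)))
```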

  14. Information criteria for quantifying loss of reversibility in parallelized KMC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gourgoulias, Konstantinos, E-mail: gourgoul@math.umass.edu; Katsoulakis, Markos A., E-mail: markos@math.umass.edu; Rey-Bellet, Luc, E-mail: luc@math.umass.edu

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  15. Information criteria for quantifying loss of reversibility in parallelized KMC

    NASA Astrophysics Data System (ADS)

    Gourgoulias, Konstantinos; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2017-01-01

    Parallel Kinetic Monte Carlo (KMC) is a potent tool to simulate stochastic particle systems efficiently. However, despite literature on quantifying domain decomposition errors of the particle system for this class of algorithms in the short and in the long time regime, no study yet explores and quantifies the loss of time-reversibility in Parallel KMC. Inspired by concepts from non-equilibrium statistical mechanics, we propose the entropy production per unit time, or entropy production rate, given in terms of an observable and a corresponding estimator, as a metric that quantifies the loss of reversibility. Typically, this is a quantity that cannot be computed explicitly for Parallel KMC, which is why we develop a posteriori estimators that have good scaling properties with respect to the size of the system. Through these estimators, we can connect the different parameters of the scheme, such as the communication time step of the parallelization, the choice of the domain decomposition, and the computational schedule, with its performance in controlling the loss of reversibility. From this point of view, the entropy production rate can be seen both as an information criterion to compare the reversibility of different parallel schemes and as a tool to diagnose reversibility issues with a particular scheme. As a demonstration, we use Sandia Lab's SPPARKS software to compare different parallelization schemes and different domain (lattice) decompositions.

  16. Models for evaluating the performability of degradable computing systems

    NASA Technical Reports Server (NTRS)

    Wu, L. T.

    1982-01-01

    Recent advances in multiprocessor technology established the need for unified methods to evaluate computing systems performance and reliability. In response to this modeling need, a general modeling framework that permits the modeling, analysis and evaluation of degradable computing systems is considered. Within this framework, several user oriented performance variables are identified and shown to be proper generalizations of the traditional notions of system performance and reliability. Furthermore, a time varying version of the model is developed to generalize the traditional fault tree reliability evaluation methods of phased missions.

  17. Multitasking TORT under UNICOS: Parallel performance models and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  18. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  19. Do bioclimate variables improve performance of climate envelope models?

    USGS Publications Warehouse

    Watling, James I.; Romañach, Stephanie S.; Bucklin, David N.; Speroterra, Carolina; Brandt, Laura A.; Pearlstine, Leonard G.; Mazzotti, Frank J.

    2012-01-01

    Climate envelope models are widely used to forecast potential effects of climate change on species distributions. A key issue in climate envelope modeling is the selection of predictor variables that most directly influence species. To determine whether model performance and spatial predictions were related to the selection of predictor variables, we compared models using bioclimate variables with models constructed from monthly climate data for twelve terrestrial vertebrate species in the southeastern USA using two different algorithms (random forests or generalized linear models), and two model selection techniques (using uncorrelated predictors or a subset of user-defined biologically relevant predictor variables). There were no differences in performance between models created with bioclimate or monthly variables, but one metric of model performance was significantly greater using the random forest algorithm compared with generalized linear models. Spatial predictions between maps using bioclimate and monthly variables were very consistent using the random forest algorithm with uncorrelated predictors, whereas we observed greater variability in predictions using generalized linear models.

  20. A combined monitoring and modeling approach to quantify water and nitrate leaching using effective soil column hydraulic properties

    NASA Astrophysics Data System (ADS)

    Couvreur, V.; Kandelous, M. M.; Moradi, A. B.; Baram, S.; Mairesse, H.; Hopmans, J. W.

    2014-12-01

    There is growing worldwide concern about the contribution of agricultural lands to groundwater pollution. Nitrate contamination of groundwater across the Central Valley of California has been related to its diverse and intensive agricultural practices. However, no study has compared nitrate leaching from the individual agricultural fields within this complex and diversely managed area. A combined field monitoring and modeling approach was developed to quantify, from simple measurements, the leaching of water and nitrate below the root zone. The monitored state variables are soil water content at several depths within the root zone, soil matric potential at two depths below the root zone, and nitrate concentration in the soil solution. In the modeling part, unsaturated water flow and solute transport are simulated with the software HYDRUS in a soil profile fragmented into up to two soil hydraulic types, whose effective hydraulic properties are optimized with an inverse modeling method. The applicability of the method will first be demonstrated "in silico", with synthetic soil water dynamics data generated with HYDRUS, and considering the soil column as the layering of several soil types characterized in situ. The method will then be applied to actual soil water status data from various crops in California including tomato, citrus, almond, pistachio, and walnut. Eventually, improvements of irrigation and fertilization management practices (i.e. mainly questions of the quantity and frequency of application minimizing leaching under constraints of water and nutrient availability) will be investigated using coupled modeling and optimization tools.

  1. A fuzzy Bayesian network approach to quantify the human behaviour during an evacuation

    NASA Astrophysics Data System (ADS)

    Ramli, Nurulhuda; Ghani, Noraida Abdul; Ahmad, Nazihah

    2016-06-01

    A Bayesian Network (BN) has been regarded as a successful representation of the inter-relationships among factors affecting human behavior during an emergency. This paper extends earlier work on quantifying the variables involved in a BN model of human behavior during an evacuation using a well-known direct probability elicitation technique. To overcome judgment bias and reduce the experts' burden of providing precise probability values, a new elicitation approach is required. This study proposes a new fuzzy BN approach for quantifying human behavior during an evacuation. The methodology involves three major phases: 1) development of a qualitative model representing human factors during an evacuation, 2) quantification of the BN model using fuzzy probabilities, and 3) inference and interpretation of the BN results. A case study of three inter-dependent human evacuation factors, namely danger assessment ability, information about the threat, and stressful conditions, is used to illustrate the application of the proposed method. This approach can serve as an alternative to the conventional probability elicitation technique for understanding human behavior during an evacuation.
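
    One building block of such a fuzzy quantification, converting expert-elicited triangular fuzzy probabilities into crisp conditional-probability-table entries, might look like the sketch below; the factor names and numbers are hypothetical and do not come from the paper.

```python
# Minimal sketch: experts give triangular fuzzy probabilities (low, mode,
# high) instead of point values; these are defuzzified by the centroid and
# renormalized into one CPT row. All names and numbers are hypothetical.
import numpy as np

def centroid(tri):
    """Centroid of a triangular fuzzy number (low, mode, high)."""
    low, mode, high = tri
    return (low + mode + high) / 3.0

# Fuzzy P(stress level | danger assessment, threat information) -- invented
fuzzy_row = {"high stress": (0.55, 0.65, 0.80),
             "low stress":  (0.15, 0.25, 0.35)}

crisp = np.array([centroid(v) for v in fuzzy_row.values()])
crisp /= crisp.sum()                      # CPT rows must sum to one
for state, p in zip(fuzzy_row, crisp):
    print(f"P({state} | parents) = {p:.3f}")
```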

  2. Quantifying quantum coherence with quantum Fisher information.

    PubMed

    Feng, X N; Wei, L F

    2017-11-14

    Quantum coherence is a long-standing and important concept in quantum mechanics, and it is now regarded as a necessary resource for quantum information processing and quantum metrology. However, the question of how to quantify quantum coherence has received attention only recently (see, e.g., Baumgratz et al., Phys. Rev. Lett. 113, 140401 (2014)). In this paper we verify that the well-known quantum Fisher information (QFI) can be utilized to quantify quantum coherence, as it satisfies monotonicity under the typical incoherent operations and convexity under mixing of quantum states. Unlike most purely axiomatic methods, quantifying quantum coherence by the QFI is experimentally testable, as bounds on the QFI are practically measurable. The validity of our proposal is demonstrated for the typical phase-damping and depolarizing evolutions of a generic single-qubit state, and by comparison with other quantification methods proposed previously.
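
    A minimal numerical sketch of the idea, assuming the generator sigma_z/2 and a simple phase-damping map, is shown below using the standard eigen-decomposition formula for the QFI; it is an illustration, not a reproduction of the paper's derivations.

```python
# QFI of a single-qubit state via the standard eigen-decomposition formula:
# F = 2 * sum_{i,j} (l_i - l_j)^2 / (l_i + l_j) |<i|G|j>|^2 for generator G.
import numpy as np

def qfi(rho, generator):
    """QFI of rho for a unitary phase encoding generated by `generator`."""
    vals, vecs = np.linalg.eigh(rho)
    total = 0.0
    for i in range(len(vals)):
        for j in range(len(vals)):
            denom = vals[i] + vals[j]
            if denom > 1e-12:
                amp = vecs[:, i].conj() @ generator @ vecs[:, j]
                total += 2.0 * (vals[i] - vals[j]) ** 2 / denom * abs(amp) ** 2
    return float(total)

sz = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|

for gamma in (0.0, 0.3, 0.6, 0.9):           # phase-damping strength (assumed)
    rho = plus.copy()
    rho[0, 1] *= np.sqrt(1 - gamma)           # off-diagonal coherences decay
    rho[1, 0] *= np.sqrt(1 - gamma)
    print(f"gamma = {gamma:.1f}: QFI = {qfi(rho, sz / 2):.3f}")
```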

  3. KU-Band rendezvous radar performance computer simulation model

    NASA Technical Reports Server (NTRS)

    Griffin, J. W.

    1980-01-01

    The preparation of a real-time computer simulation model of the Ku-band rendezvous radar to be integrated into the shuttle mission simulator (SMS), the shuttle engineering simulator (SES), and the shuttle avionics integration laboratory (SAIL) simulator is described. To meet crew training requirements, a radar tracking performance model and a target modeling method were developed. The parent simulation/radar simulation interface requirements and the method selected to model target scattering properties, including an application of this method to the SPAS spacecraft, are described. The radar search and acquisition mode performance model and the radar track mode signal processor model are examined and analyzed. The angle, angle rate, range, and range rate tracking loops are also discussed.

  4. Quantifying construction and demolition waste: an analytical review.

    PubMed

    Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen

    2014-09-01

    Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.
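
    As a toy illustration of the waste generation rate method identified in the review, waste mass can be estimated as floor area times an empirical per-area rate; the rates and areas below are placeholders, since real rates are project- and region-specific.

```python
# Toy example of the waste generation rate method: waste mass is estimated as
# floor area times an empirical generation rate. Values are hypothetical.
GENERATION_RATE_KG_PER_M2 = {       # assumed, project- and region-specific
    "new construction": 50.0,
    "demolition": 1000.0,
}

def estimate_waste(activity: str, floor_area_m2: float) -> float:
    """Estimated C&D waste (kg) for a given activity and floor area."""
    return GENERATION_RATE_KG_PER_M2[activity] * floor_area_m2

print(estimate_waste("new construction", 2500))   # kg for a 2,500 m2 building
print(estimate_waste("demolition", 800))
```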

  5. Quantifying the effects of pesticide exposure on annual reproductive success of birds (presentation)

    EPA Science Inventory

    The Markov chain nest productivity model (MCnest) was developed for quantifying the effects of specific pesticide‐use scenarios on the annual reproductive success of simulated populations of birds. Each nesting attempt is divided into a series of discrete phases (e.g., egg ...
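
    A heavily simplified, phase-structured nesting simulation in the spirit of this approach is sketched below; it is not the actual MCnest model, and the phase durations, survival rates, and exposure effect are invented for illustration.

```python
# Highly simplified illustration of a phase-structured nesting simulation
# (not MCnest): each attempt passes through discrete phases with a daily
# nest-survival probability that an exposure scenario could reduce.
import numpy as np

PHASES = {"egg laying": 4, "incubation": 12, "nestling": 11}   # days, assumed

def attempt_success_prob(daily_survival=0.96):
    """Probability that one attempt survives all phases."""
    return daily_survival ** sum(PHASES.values())

def season_success(n_pairs=1000, max_attempts=3, daily_survival=0.96, seed=0):
    """Fraction of simulated pairs fledging at least one brood in a season."""
    rng = np.random.default_rng(seed)
    p = attempt_success_prob(daily_survival)
    success = rng.random((n_pairs, max_attempts)) < p
    return success.any(axis=1).mean()

print("baseline:", season_success(daily_survival=0.96))
print("exposed :", season_success(daily_survival=0.93))   # assumed effect
```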

  6. Performance Models for Split-execution Computing Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humble, Travis S; McCaskey, Alex; Schrock, Jonathan

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.
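
    The flavor of such a resource-usage timing model can be conveyed by a few additive terms for translation, quantum execution, and interface readback, as in the sketch below; the constants are invented and the decomposition is illustrative rather than the authors' calibrated model.

```python
# Sketch of an additive timing model for a split CPU/QPU workflow, showing how
# the classical-quantum interface can dominate. All constants are invented.
def split_execution_time(n_vars, n_reads=1000,
                         t_embed_per_var=2e-3,       # classical translation (s)
                         t_program=0.01,             # load problem onto QPU (s)
                         t_anneal=20e-6,             # per-read QPU time (s)
                         t_readback_per_read=1e-4):  # interface readback (s)
    translate = t_embed_per_var * n_vars + t_program
    quantum = t_anneal * n_reads
    interface = t_readback_per_read * n_reads
    return {"translate": translate, "quantum": quantum,
            "interface": interface,
            "total": translate + quantum + interface}

for n in (100, 500, 2000):
    timings = split_execution_time(n)
    print(n, {k: round(v, 4) for k, v in timings.items()})
```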

  7. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.

    2011-03-01

    are needed for repository modeling are severely lacking. In addition, most of the existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. Accessibility to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulty meeting these requirements. Based on the gap analysis results, we have made the following recommendations for code selection and development for the NEAMS waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.

  8. Dispersive Raman spectroscopy excited at 1064nm to classify the botanic origin of honeys from Calabria and quantify the sugar profile

    NASA Astrophysics Data System (ADS)

    Mignani, A. G.; Ciaccheri, L.; Mencaglia, A. A.; Di Sanzo, R.; Carabetta, S.; Russo, M. T.

    2005-05-01

    Raman spectroscopy performed using optical fibers, with excitation at 1064 nm and a dispersive detection scheme, was used to analyze a selection of unifloral honeys produced in the Italian region of Calabria. The honey samples had three different botanical origins: chestnut, citrus, and acacia. Multivariate processing of the spectroscopic data enabled us to distinguish their botanical origin and to build predictive models for quantifying their main sugars. This experiment indicates the excellent potential of Raman spectroscopy as an analytical tool for the nondestructive and rapid assessment of food-quality indicators.
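
    A common way to build this kind of multivariate prediction model is partial least squares regression; the sketch below applies it to synthetic Raman-like spectra and a single sugar concentration, and is not the chemometric pipeline actually used in the paper.

```python
# Minimal multivariate-calibration sketch on synthetic spectra: PLS regression
# predicting a sugar concentration from Raman-like intensity vectors.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 400
conc = rng.uniform(20, 45, n_samples)                  # e.g. fructose, g/100 g
peak = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 150) / 8.0) ** 2)
spectra = conc[:, None] * peak + rng.normal(0, 0.5, (n_samples, n_wavenumbers))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, conc, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(pls.score(X_te, y_te), 3))
```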

  9. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    ERIC Educational Resources Information Center

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows…

  10. Theoretical performance model for single image depth from defocus.

    PubMed

    Trouvé-Peloux, Pauline; Champagnat, Frédéric; Le Besnerais, Guy; Idier, Jérôme

    2014-12-01

    In this paper we present a performance model for depth estimation using single image depth from defocus (SIDFD). Our model is based on an original expression of the Cramér-Rao bound (CRB) in this context. We show that this model is consistent with the expected behavior of SIDFD. We then study the influence on the performance of the optical parameters of a conventional camera such as the focal length, the aperture, and the position of the in-focus plane (IFP). We derive an approximate analytical expression of the CRB away from the IFP, and we propose an interpretation of the SIDFD performance in this domain. Finally, we illustrate the predictive capacity of our performance model on experimental data comparing several settings of a consumer camera.

  11. Quantifying shape changes of silicone breast implants in a murine model using in vivo micro-CT.

    PubMed

    Anderson, Emily E; Perilli, Egon; Carati, Colin J; Reynolds, Karen J

    2017-08-01

    A major complication of silicone breast implants is the formation of a capsule around the implant, known as capsular contracture, which results in distortion of the implant. Recently, a mouse model for studying capsular contracture was examined using micro-computed tomography (micro-CT); however, only qualitative changes were reported. The aim of this study was to develop a quantitative method for comparing the shape changes of silicone implants using in vivo micro-CT. Mice were bilaterally implanted with silicone implants and underwent ionizing radiation to induce capsular contracture. On day 28 post-surgery, mice were examined in vivo using micro-CT. The reconstructed cross-section images were visually inspected to identify distortion. Measurements were taken in 2D and 3D to quantify the shape of the implants in the normal (n = 11) and distorted (n = 5) groups. The degree of anisotropy was significantly higher in the distorted implants in the transaxial view (0.99 vs. 1.19, p = 0.002) and the y-axis lengths were significantly shorter in the sagittal (9.27 mm vs. 8.55 mm, p = 0.015) and coronal (9.24 mm vs. 8.76 mm, p = 0.031) views, indicating a deviation from the circular cross-section and shortening of the long axis. The 3D analysis revealed a significantly lower average thickness (sphere-fitting method) in distorted implants (6.86 mm vs. 5.49 mm, p = 0.002), whereas the volume and surface area did not show significant changes. Statistically significant differences between normal and distorted implants were found in 2D and 3D using distance measurements performed via micro-CT. This objective analysis method can be useful for a range of studies involving deformable implants imaged with in vivo micro-CT. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 1447-1452, 2017.

  12. Quantifying the Role of Atmospheric Forcing in Ice Edge Retreat and Advance Including Wind-Wave Coupling

    DTIC Science & Technology

    2015-09-30

    Peter S. Guest (NPS Technical Contact), Naval... surface fluxes and ocean waves in coupled models in the Beaufort and Chukchi Seas. 2. Understand the physics of heat and mass transfer from the ocean to the atmosphere. 3. Improve forecasting of waves on the open ocean and in the marginal ice zone. OBJECTIVES: 1. Quantifying the open-ocean

  13. Performance of GeantV EM Physics Models

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2017-10-01

    The recent progress in parallel hardware architectures with deeper vector pipelines and many-core technologies brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable for identifying the factors limiting parallel execution. In this report, we present design considerations and preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.

  14. Modeling of video compression effects on target acquisition performance

    NASA Astrophysics Data System (ADS)

    Cha, Jae H.; Preece, Bradley; Espinola, Richard L.

    2009-05-01

    The effect of video compression on image quality was investigated from the perspective of target acquisition performance modeling. Human perception tests were conducted recently at the U.S. Army RDECOM CERDEC NVESD, measuring identification (ID) performance on simulated military vehicle targets at various ranges. These videos were compressed with different quality and/or quantization levels utilizing motion JPEG, motion JPEG2000, and MPEG-4 encoding. To model the degradation in task performance, the loss in image quality is fit to an equivalent Gaussian MTF scaled by the Structural Similarity Image Metric (SSIM). Residual compression artifacts are treated as 3-D spatio-temporal noise. This 3-D noise is found by taking the difference between the uncompressed frame, with the estimated equivalent blur applied, and the corresponding compressed frame. Results show good agreement between the experimental data and the model prediction. This method has led to a predictive performance model for video compression by correlating various compression levels with particular blur and noise input parameters for the NVESD target acquisition performance model suite.
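
    A rough sketch of the fitting idea, using synthetic frames and a scalar search for the equivalent Gaussian blur that best matches the compressed frame by SSIM, with the remainder treated as noise, is given below; it is an illustration, not the NVESD model suite.

```python
# Find the Gaussian blur whose SSIM against the compressed frame is highest,
# treat it as the equivalent MTF loss, and take the remaining difference as
# residual noise. Frames here are synthetic stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)
reference = rng.random((128, 128))                          # stand-in frame
compressed = gaussian_filter(reference, 1.2) + rng.normal(0, 0.02, (128, 128))

sigmas = np.linspace(0.1, 3.0, 30)
scores = [ssim(gaussian_filter(reference, s), compressed,
               data_range=compressed.max() - compressed.min())
          for s in sigmas]
best = sigmas[int(np.argmax(scores))]

noise = compressed - gaussian_filter(reference, best)       # residual "noise"
print(f"equivalent blur sigma ~ {best:.2f}, residual noise std {noise.std():.4f}")
```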

  15. Charge-coupled-device X-ray detector performance model

    NASA Technical Reports Server (NTRS)

    Bautz, M. W.; Berman, G. E.; Doty, J. P.; Ricker, G. R.

    1987-01-01

    A model that predicts the performance characteristics of CCD detectors being developed for use in X-ray imaging is presented. The model accounts for the interactions of both X-rays and charged particles with the CCD and simulates the transport and loss of charge in the detector. Predicted performance parameters include detective and net quantum efficiencies, split-event probability, and a parameter characterizing the effective thickness presented by the detector to cosmic-ray protons. The predicted performance of two CCDs of different epitaxial layer thicknesses is compared. The model predicts that in each device incomplete recovery of the charge liberated by a photon of energy between 0.1 and 10 keV is very likely to be accompanied by charge splitting between adjacent pixels. The implications of the model predictions for CCD data processing algorithms are briefly discussed.
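
    A bare-bones slab-absorption view of CCD quantum efficiency, in which photons must survive a dead layer and then be absorbed in the depletion region, is sketched below; the layer thicknesses and attenuation lengths are placeholders, and the full model in this record also tracks charge transport, loss, and event splitting.

```python
# Simple slab-absorption sketch of CCD quantum efficiency (not the full
# charge-transport model described above). All lengths are hypothetical.
import numpy as np

def quantum_efficiency(atten_len_um, dead_um=0.1, depletion_um=10.0):
    """QE for a given X-ray attenuation length in silicon (lengths in um)."""
    survive_dead = np.exp(-dead_um / atten_len_um)       # pass the dead layer
    absorbed = 1.0 - np.exp(-depletion_um / atten_len_um)  # stop in depletion
    return survive_dead * absorbed

for length in (1.0, 5.0, 30.0, 100.0):   # hypothetical attenuation lengths (um)
    print(f"attenuation length {length:6.1f} um -> "
          f"QE = {quantum_efficiency(length):.3f}")
```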

  16. Quantifying fossil fuel CO2 from continuous measurements of APO: a novel approach

    NASA Astrophysics Data System (ADS)

    Pickers, Penelope; Manning, Andrew C.; Forster, Grant L.; van der Laan, Sander; Wilson, Phil A.; Wenger, Angelina; Meijer, Harro A. J.; Oram, David E.; Sturges, William T.

    2016-04-01

    Using atmospheric measurements to accurately quantify CO2 emissions from fossil fuel sources requires the separation of biospheric and anthropogenic CO2 fluxes. The ability to quantify the fossil fuel component of CO2 (ffCO2) from atmospheric measurements enables more accurate 'top-down' verification of CO2 emissions inventories, which frequently have large uncertainty. Typically, ffCO2 is quantified (in ppm units) from discrete atmospheric measurements of Δ14CO2, combined with higher resolution atmospheric CO measurements, and with knowledge of CO:ffCO2 ratios. In the United Kingdom (UK), however, measurements of Δ14CO2 are often significantly biased by nuclear power plant influences, which limit the use of this approach. We present a novel approach for quantifying ffCO2 using measurements of APO (Atmospheric Potential Oxygen; a tracer derived from concurrent measurements of CO2 and O2) from two measurement sites in Norfolk, UK. Our approach is similar to that used for quantifying ffCO2 from CO measurements (ffCO2(CO)), whereby ffCO2(APO) = (APOmeas - APObg)/RAPO, where (APOmeas - APObg) is the APO deviation from the background, and RAPO is the APO:CO2 combustion ratio for fossil fuel. Time varying values of RAPO are calculated from the global gridded COFFEE (CO2 release and Oxygen uptake from Fossil Fuel Emission Estimate) dataset, combined with NAME (Numerical Atmospheric-dispersion Modelling Environment) transport model footprints. We compare our ffCO2(APO) results to results obtained using the ffCO2(CO) method, using CO:CO2 fossil fuel emission ratios (RCO) from the EDGAR (Emission Database for Global Atmospheric Research) database. We find that the APO ffCO2 quantification method is more precise than the CO method, owing primarily to a smaller range of possible APO:CO2 fossil fuel emission ratios, compared to the CO:CO2 emission ratio range. Using a long-term dataset of atmospheric O2, CO2, CO and Δ14CO2 from Lutjewad, The Netherlands, we examine the
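
    The quantification formula quoted above translates directly into code; in the sketch below the measured APO values, the background, and the fossil-fuel APO:CO2 ratio R_APO are all made-up illustrative numbers, not values from the COFFEE or EDGAR datasets.

```python
# Direct transcription of ffCO2(APO) = (APO_meas - APO_bg) / R_APO with
# invented numbers for the measurement, background, and combustion ratio.
import numpy as np

apo_meas = np.array([-15.0, -22.5, -30.0])   # per meg, hypothetical samples
apo_bg = -10.0                               # smoothed background, per meg
r_apo = -1.4                                 # per meg per ppm CO2 (assumed)

ffco2_ppm = (apo_meas - apo_bg) / r_apo
print("ffCO2 (ppm):", np.round(ffco2_ppm, 2))
```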

  17. Proof-of-Concept Demonstrations for Computation-Based Human Reliability Analysis. Modeling Operator Performance During Flooding Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joe, Jeffrey Clark; Boring, Ronald Laurids; Herberger, Sarah Elizabeth Marie

    The United States (U.S.) Department of Energy (DOE) Light Water Reactor Sustainability (LWRS) program has the overall objective to help sustain the existing commercial nuclear power plants (NPPs). To accomplish this program objective, there are multiple LWRS “pathways,” or research and development (R&D) focus areas. One LWRS focus area is called the Risk-Informed Safety Margin and Characterization (RISMC) pathway. Initial efforts under this pathway to combine probabilistic and plant multi-physics models to quantify safety margins and support business decisions also included HRA, but in a somewhat simplified manner. HRA experts at Idaho National Laboratory (INL) have been collaborating with other experts to develop a computational HRA approach, called the Human Unimodel for Nuclear Technology to Enhance Reliability (HUNTER), for inclusion into the RISMC framework. The basic premise of this research is to leverage applicable computational techniques, namely simulation and modeling, to develop and then, using RAVEN as a controller, seamlessly integrate virtual operator models (HUNTER) with 1) the dynamic computational MOOSE runtime environment that includes a full-scope plant model, and 2) the RISMC framework PRA models already in use. The HUNTER computational HRA approach is a hybrid approach that leverages past work from cognitive psychology, human performance modeling, and HRA, but it is also a significant departure from existing static and even dynamic HRA methods. This report is divided into five chapters that cover the development of an external flooding event test case and associated statistical modeling considerations.

  18. A Spectral Evaluation of Models Performances in Mediterranean Oak Woodlands

    NASA Astrophysics Data System (ADS)

    Vargas, R.; Baldocchi, D. D.; Abramowitz, G.; Carrara, A.; Correia, A.; Kobayashi, H.; Papale, D.; Pearson, D.; Pereira, J.; Piao, S.; Rambal, S.; Sonnentag, O.

    2009-12-01

    Ecosystem processes are influenced by climatic trends at multiple temporal scales, including diel patterns and other mid-term climatic modes such as interannual and seasonal variability. Because interactions between the biophysical components of ecosystem processes are complex, it is important to test how models perform in the frequency domain (e.g. hours, days, weeks, months, years) and the time domain (i.e. day of the year), in addition to traditional tests of annual or monthly sums. Here we present a spectral evaluation, using wavelet time series analysis, of model performance in seven Mediterranean oak woodlands that encompass three deciduous and four evergreen sites. We tested the performance of five models (CABLE, ORCHIDEE, BEPS, Biome-BGC, and JULES) against measured gross primary production (GPP) and evapotranspiration (ET). In general, model performance fails at intermediate periods (e.g. weeks to months), likely because these models do not represent the water pulse dynamics that influence GPP and ET in these Mediterranean systems. To improve the performance of a model it is critical to first identify where and when the model fails. Only by identifying where a model fails can we improve its performance, use these models as prognostic tools, and generate further hypotheses that can be tested by new experiments and measurements.
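
    The sketch below conveys the same diagnostic idea in a simplified way, using FFT frequency bands instead of wavelets on a synthetic half-hourly GPP series, to show how model-data mismatch can be attributed to diel, weekly-to-monthly, and seasonal scales; the series and band limits are assumptions.

```python
# Simplified frequency-domain check of model-data mismatch: how much residual
# power falls in diel, weeks-to-months, and seasonal bands. Data are synthetic.
import numpy as np

dt_days = 1.0 / 48.0                        # half-hourly sampling
t = np.arange(0, 365, dt_days)
rng = np.random.default_rng(0)

obs = (5 + 4 * np.sin(2 * np.pi * t)        # diel cycle
       + 2 * np.sin(2 * np.pi * t / 30)     # ~monthly "water pulse" signal
       + rng.normal(0, 0.5, t.size))
mod = 5 + 4 * np.sin(2 * np.pi * t)         # model misses the monthly pulses

resid = mod - obs
freq = np.fft.rfftfreq(t.size, d=dt_days)   # cycles per day
power = np.abs(np.fft.rfft(resid)) ** 2

bands = {"diel (<2 d)": freq > 0.5,
         "weeks-months (7-90 d)": (freq > 1 / 90) & (freq < 1 / 7),
         "seasonal (>90 d)": (freq > 0) & (freq < 1 / 90)}
total = power[freq > 0].sum()
for name, mask in bands.items():
    share = 100 * power[mask].sum() / total
    print(f"{name:22s}: {share:5.1f}% of residual power")
```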

  19. Quantifying uncertainty in partially specified biological models: how can optimal control theory help us?

    PubMed

    Adamson, M W; Morozov, A Y; Kuzenkov, O A

    2016-09-01

    Mathematical models in biology are highly simplified representations of a complex underlying reality, and there is always a high degree of uncertainty with regard to model function specification. This uncertainty becomes critical for models in which the use of different functions fitting the same dataset can yield substantially different predictions, a property known as structural sensitivity. Thus, even if the model is purely deterministic, the uncertainty in the model functions carries through into uncertainty in model predictions, and new frameworks are required to tackle this fundamental problem. Here, we consider a framework that uses partially specified models, in which some functions are not represented by a specific form. The main idea is to project the infinite-dimensional function space into a low-dimensional space, taking into account biological constraints. The key question of how to carry out this projection has so far remained a serious mathematical challenge and has hindered the use of partially specified models. Here, we propose and demonstrate a potentially powerful technique to perform such a projection by using optimal control theory to construct functions with the specified global properties. This approach opens up the prospect of a flexible and easy-to-use method for performing uncertainty analysis of biological models.

  20. MorphoGraphX: A platform for quantifying morphogenesis in 4D

    PubMed Central

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne HK; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-01-01

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth. DOI: http://dx.doi.org/10.7554/eLife.05864.001 PMID:25946108