Empirical agreement in model validation.
Jebeile, Julie; Barberousse, Anouk
2016-04-01
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Validating Computational Human Behavior Models: Consistency and Accuracy Issues
2004-06-01
includes a discussion of SME demographics, content, and organization of the datasets. This research generalizes data from two pilot studies and two base... meet requirements for validating the varied and complex behavioral models. Through a series of empirical studies, this research identifies subject...
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
DOT National Transportation Integrated Search
2014-11-01
The main objective of Part 3 was to locally calibrate and validate the mechanistic-empirical pavement design guide (Pavement-ME) performance models to Michigan conditions. The local calibration of the performance models in the Pavement-ME is a ch...
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short period dynamics of the F-16 is used for illustration.
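The max-min estimation strategy described in this abstract can be illustrated with a small optimization sketch. The following Python snippet is a minimal, hypothetical example, not the NASA implementation: the first-order dynamic model, the two synthetic data sets, and the admissible RMS error limits are all invented purely to show how one maximizes the smallest requirement-compliance margin.

```python
# Hedged sketch of max-min parameter estimation: pick the model parameters that
# maximize the smallest requirement-compliance margin. The dynamic model, data
# sets, and error limits below are hypothetical stand-ins, not the NASA setup.
import numpy as np
from scipy.optimize import differential_evolution

# Toy empirical model: first-order response y = K * (1 - exp(-t / tau)), p = (K, tau)
def predict(p, t):
    K, tau = p
    return K * (1.0 - np.exp(-t / tau))

# Two hypothetical validation data sets, each with an admissible RMS error limit
t = np.linspace(0, 5, 50)
datasets = [
    (t, predict((1.0, 1.0), t) + 0.02 * np.random.default_rng(0).standard_normal(t.size), 0.05),
    (t, predict((1.0, 1.2), t) + 0.02 * np.random.default_rng(1).standard_normal(t.size), 0.10),
]

def margins(p):
    # Margin of compliance for each requirement: admissible error minus achieved error
    return np.array([limit - np.sqrt(np.mean((predict(p, ti) - yi) ** 2))
                     for ti, yi, limit in datasets])

# Maximize the smallest margin (equivalently, minimize its negative)
result = differential_evolution(lambda p: -margins(p).min(),
                                bounds=[(0.1, 3.0), (0.1, 3.0)], seed=0)
print("estimate:", result.x, "worst-case margin:", margins(result.x).min())
```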
From Positive Youth Development to Youth's Engagement: The Dream Teens
ERIC Educational Resources Information Center
Gaspar de Matos, Margarida; Simões, Celeste
2016-01-01
In addition to the empirical validation of "health and happiness" determinants, theoretical models suggesting where to ground actions are necessary. In the beginning of the twentieth century, intervention models focused on evaluation and empirical validation were only concerned about overt behaviours (verbal and non-verbal) and covert…
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
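The non-random cross-validation idea mentioned above can be sketched in a few lines. The snippet below is an illustrative, assumption-laden example, not the study's code: the SWE relationship, the synthetic sites, and the "region" grouping are invented, and scikit-learn's GroupKFold stands in for the leave-region-out design used to probe transferability.

```python
# Hedged sketch of non-random, leave-region-out validation of a simple
# space-for-time regression (SWE as a function of winter temperature and
# precipitation). The data and the "region" grouping are synthetic, not SNOTEL.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(42)
n = 400
temp = rng.uniform(-10, 4, n)          # mean winter temperature (deg C)
precip = rng.uniform(200, 1500, n)     # cumulative winter precipitation (mm)
region = rng.integers(0, 8, n)         # hypothetical spatial grouping of sites
region_offset = rng.normal(0, 80, 8)[region]
swe = 0.6 * precip - 25.0 * temp + region_offset + rng.normal(0, 60, n)  # synthetic April 1 SWE

X = np.column_stack([temp, precip])
model = LinearRegression()

random_cv = cross_val_score(model, X, swe, cv=KFold(5, shuffle=True, random_state=0), scoring="r2")
spatial_cv = cross_val_score(model, X, swe, cv=GroupKFold(5), groups=region, scoring="r2")

# Grouped splits hold out whole regions, mimicking transfer to unsampled places;
# with real data they are typically more pessimistic than random splits.
print("random CV r2:", random_cv.mean(), " leave-region-out r2:", spatial_cv.mean())
```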
Empirical validation of an agent-based model of wood markets in Switzerland
Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver
2018-01-01
We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300
Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A
2016-01-01
The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.
Empirical Refinements of a Molecular Genetics Learning Progression: The Molecular Constructs
ERIC Educational Resources Information Center
Todd, Amber; Kenyon, Lisa
2016-01-01
This article describes revisions to four of the eight constructs of the Duncan molecular genetics learning progression [Duncan, Rogat, & Yarden, (2009)]. As learning progressions remain hypothetical models until validated by multiple rounds of empirical studies, these revisions are an important step toward validating the progression. Our…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boero, Riccardo; Edwards, Brian Keith
Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model's degree of accuracy and the assessed total damage caused by Hurricane Sandy.
Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation
NASA Technical Reports Server (NTRS)
He, Yuning; Lee, Herbert K. H.; Davies, Misty D.
2012-01-01
Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in the loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.
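As an illustration of the statistical-emulation idea in this abstract, the sketch below fits a Gaussian-process surrogate to a few runs of a stand-in "simulator" and uses the surrogate's predictive uncertainty to suggest where further testing would be most informative. The simulator function and envelope variable are hypothetical; the actual framework builds richer nonlinear models with modal behavior.

```python
# Hedged sketch of statistical emulation: fit a Gaussian-process surrogate to a
# handful of expensive simulation runs and use its predictive uncertainty to
# decide where in the envelope more testing is needed. The "simulator" below is
# a made-up function, not the adaptive flight control simulation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    # Placeholder for a costly simulation run at envelope condition x
    return np.sin(3 * x) + 0.3 * x ** 2

rng = np.random.default_rng(1)
X_train = rng.uniform(-2, 2, 12).reshape(-1, 1)     # a few sampled flight conditions
y_train = expensive_simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                              normalize_y=True).fit(X_train, y_train)

X_grid = np.linspace(-2, 2, 200).reshape(-1, 1)
mean, std = gp.predict(X_grid, return_std=True)

# Conditions where the emulator is most uncertain are candidates for the next runs
print("most uncertain condition:", X_grid[np.argmax(std), 0])
```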
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic and that they need to be tested and validated empirically. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
DOT National Transportation Integrated Search
2017-02-08
The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...
Space evolution model and empirical analysis of an urban public transport network
NASA Astrophysics Data System (ADS)
Sui, Yi; Shao, Feng-jing; Sun, Ren-cheng; Li, Shu-jing
2012-07-01
This study explores the space evolution of an urban public transport network, using empirical evidence and a simulation model validated on that data. Public transport patterns primarily depend on traffic spatial-distribution, demands of passengers and expected utility of investors. Evolution is an iterative process of satisfying the needs of passengers and investors based on a given traffic spatial-distribution. The temporal change of urban public transport network is evaluated both using topological measures and spatial ones. The simulation model is validated using empirical data from nine big cities in China. Statistical analyses on topological and spatial attributes suggest that an evolution network with traffic demands characterized by power-law numerical values which distribute in a mode of concentric circles tallies well with these nine cities.
Hindcasting of Equatorial Spread F Using Seasonal Empirical Models
NASA Astrophysics Data System (ADS)
Aswathy, R. P.; Manju, G.
2018-02-01
The role of gravity waves in modulating equatorial spread F (ESF) day-to-day variability is investigated using ionosonde data at Trivandrum (geographic coordinates, 8.5°N, 77°E; mean geomagnetic latitude -0.3°N) a magnetic equatorial location. A novel empirical model that incorporates the combined effects of electrodynamics and gravity waves in modulating ESF occurrence during autumnal equinox season was presented by Aswathy and Manju (2017). In the present study, the height variations of the requisite gravity wave seed perturbations for ESF are examined for the vernal equinoxes, summer solstices, and winter solstices of different years. Subsequently, the empirical model, incorporating the electrodynamical effects and the gravity wave modulation, valid for each of the seasons is developed. Accordingly, for each season, the threshold curve may be demarcated provided the solar flux index (F10.7) is known. The empirical models are validated using the data for high, moderate, and low solar activity years corresponding to each season. In the next stage, this model is to be fine tuned to facilitate the prediction of ESF well before its onset.
Validity of empirical models of exposure in asphalt paving
Burstyn, I; Boffetta, P; Burr, G; Cenni, A; Knecht, U; Sciarra, G; Kromhout, H
2002-01-01
Aims: To investigate the validity of empirical models of exposure to bitumen fume and benzo(a)pyrene, developed for a historical cohort study of asphalt paving in Western Europe. Methods: Validity was evaluated using data from the USA, Italy, and Germany not used to develop the original models. Correlation between observed and predicted exposures was examined. Bias and precision were estimated. Results: Models were imprecise. Furthermore, predicted bitumen fume exposures tended to be lower (-70%) than concentrations found during paving in the USA. This apparent bias might be attributed to differences between Western European and USA paving practices. Evaluation of the validity of the benzo(a)pyrene exposure model revealed a similar to expected effect of re-paving and a larger than expected effect of tar use. Overall, benzo(a)pyrene models underestimated exposures by 51%. Conclusions: Possible bias as a result of underestimation of the impact of coal tar on benzo(a)pyrene exposure levels must be explored in sensitivity analysis of the exposure–response relation. Validation of the models, albeit limited, increased our confidence in their applicability to exposure assessment in the historical cohort study of cancer risk among asphalt workers. PMID:12205236
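The validation quantities named in this abstract (correlation between observed and predicted exposures, bias, precision) can be computed with a few lines of code. The sketch below uses invented exposure values purely to show one plausible way of expressing these metrics; it is not the study's analysis.

```python
# Hedged sketch of an external-validation summary: correlate observed and
# model-predicted exposures and express bias as the mean relative difference.
# The exposure values are invented for illustration.
import numpy as np
from scipy.stats import spearmanr

observed = np.array([0.8, 1.5, 0.4, 2.1, 0.9, 1.2, 3.0, 0.6])   # measured fume, mg/m3 (hypothetical)
predicted = np.array([0.3, 0.6, 0.2, 0.9, 0.4, 0.5, 1.1, 0.2])  # model predictions (hypothetical)

rho, _ = spearmanr(observed, predicted)
relative_bias = np.mean((predicted - observed) / observed)   # negative = underestimation
precision = np.std(np.log(predicted / observed))             # spread of log ratios

print(f"Spearman rho = {rho:.2f}, relative bias = {relative_bias:+.0%}, precision (log SD) = {precision:.2f}")
```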
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Paul A.; Liao, Chang-hsien
2007-11-15
A passive flow disturbance has been proven to enhance the conversion of fuel in a methanol-steam reformer. This study presents a statistical validation of the experiment based on a standard 2^k factorial experiment design and the resulting empirical model of the enhanced hydrogen-producing process. A factorial experiment design was used to statistically analyze the effects and interactions of various input factors in the experiment. Three input factors, including the number of flow disturbers, catalyst size, and reactant flow rate, were investigated for their effects on the fuel conversion in the steam-reformation process. Based on the experimental results, an empirical model was developed and further evaluated with an uncertainty analysis and interior point data.
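A 2^k factorial analysis of this kind can be sketched with an ordinary least squares model containing all main effects and interactions. The snippet below is a hypothetical illustration: the three coded factors mirror those named in the abstract, but the conversion responses are invented, and a saturated 2^3 design estimates effects exactly, so judging significance would require replicated runs as in the actual study.

```python
# Hedged sketch of analyzing a 2^3 factorial experiment (number of flow
# disturbers, catalyst size, reactant flow rate) with an OLS model that
# includes all interactions. The response values are invented.
import itertools
import pandas as pd
import statsmodels.formula.api as smf

runs = pd.DataFrame(list(itertools.product([-1, 1], repeat=3)),
                    columns=["disturbers", "catalyst", "flow"])
# Hypothetical fuel-conversion responses (%) for the eight factorial runs
runs["conversion"] = [62, 71, 60, 69, 75, 88, 72, 84]

fit = smf.ols("conversion ~ disturbers * catalyst * flow", data=runs).fit()
# With a saturated design the coefficients (half effects, in coded units) are
# estimated exactly; significance testing would need replicated runs.
print(fit.params)
```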
Draft user's guide for UDOT mechanistic-empirical pavement design.
DOT National Transportation Integrated Search
2009-10-01
Validation of the new AASHTO Mechanistic-Empirical Pavement Design Guide's (MEPDG) nationally calibrated pavement distress and smoothness prediction models when applied under Utah conditions, and local calibration of the new hot-mix asphalt (HMA) p...
Empirical Performance of Cross-Validation With Oracle Methods in a Genomics Context.
Martinez, Josue G; Carroll, Raymond J; Müller, Samuel; Sampson, Joshua N; Chatterjee, Nilanjan
2011-11-01
When employing model selection methods with oracle properties such as the smoothly clipped absolute deviation (SCAD) and the Adaptive Lasso, it is typical to estimate the smoothing parameter by m-fold cross-validation, for example, m = 10. In problems where the true regression function is sparse and the signals large, such cross-validation typically works well. However, in regression modeling of genomic studies involving Single Nucleotide Polymorphisms (SNP), the true regression functions, while thought to be sparse, do not have large signals. We demonstrate empirically that in such problems, the number of selected variables using SCAD and the Adaptive Lasso, with 10-fold cross-validation, is a random variable that has considerable and surprising variation. Similar remarks apply to non-oracle methods such as the Lasso. Our study strongly questions the suitability of performing only a single run of m-fold cross-validation with any oracle method, and not just the SCAD and Adaptive Lasso.
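The instability the authors describe is easy to reproduce with the ordinary Lasso (scikit-learn provides neither SCAD nor the Adaptive Lasso, so the Lasso stands in here). The sketch below simulates a sparse, weak-signal regression and repeats 10-fold cross-validation with different fold assignments; the count of selected variables varies noticeably across repetitions.

```python
# Hedged sketch of the variability of m-fold cross-validation in a sparse,
# weak-signal setting. The simulated data are a stand-in for SNP-style studies,
# and the ordinary Lasso substitutes for SCAD / Adaptive Lasso.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p = 200, 300
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:10] = 0.25                      # sparse, small signals
y = X @ beta + rng.standard_normal(n)

selected_counts = []
for seed in range(10):
    cv = KFold(n_splits=10, shuffle=True, random_state=seed)
    lasso = LassoCV(cv=cv, max_iter=5000).fit(X, y)
    selected_counts.append(int(np.sum(lasso.coef_ != 0)))

# The number of selected variables varies considerably across CV repetitions
print("selected per repetition:", selected_counts)
```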
A Conceptual Model of Career Development to Enhance Academic Motivation
ERIC Educational Resources Information Center
Collins, Nancy Creighton
2010-01-01
The purpose of this study was to develop, refine, and validate a conceptual model of career development to enhance the academic motivation of community college students. To achieve this end, a straw model was built from the theoretical and empirical research literature. The model was then refined and validated through three rounds of a Delphi…
Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky
2012-01-01
We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.
Validation of pavement performance curves for the mechanistic-empirical pavement design guide.
DOT National Transportation Integrated Search
2009-02-01
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and if the desired accu...
Quantification of Neutral Wind Variability in the Upper Thermosphere
NASA Technical Reports Server (NTRS)
Richards, Philip G.
2000-01-01
The overall objectives of this grant were to: 1) quantify thermospheric neutral wind behavior in the ionosphere, to be achieved by developing an improved empirical wind model; 2) validate the procedure for obtaining winds from the height of the peak density; and 3) improve the model capabilities and make updated versions of the model available to other scientists. The approach is to use neutral winds derived from ionosonde measurements of the height of the peak electron density (h(sub m)F(sub 2)). One of the proposed first-year tasks was to perform some validation studies on the method. Substantial progress has been made with regard to both the empirical model and the validation study. Funding from this grant has also enabled a number of fruitful collaborations with other researchers, one of the stated aims of the proposal. Graduate student Mayra Martinez has developed the mathematical formulation for the empirical wind model as part of her dissertation. As proposed, the authors continued validation studies of the technique for determining winds from h(sub m)F(sub 2). They submitted a paper to the Journal of Geophysical Research in December 1996 entitled "Thermospheric neutral winds at southern mid-latitudes: comparison of optical and ionosonde h(sub m)F(sub 2) methods." A second paper entitled "Ionospheric behavior at a southern mid-latitude in March 1995" came out of the March 1995 data set and was published in the Journal of Geophysical Research. A new algorithm was developed, and the ionosphere has also been modeled.
Model improvements and validation of TerraSAR-X precise orbit determination
NASA Astrophysics Data System (ADS)
Hackel, S.; Montenbruck, O.; Steigenberger, P.; Balss, U.; Gisinger, C.; Eineder, M.
2017-05-01
The radar imaging satellite mission TerraSAR-X requires precisely determined satellite orbits for validating geodetic remote sensing techniques. Since the achieved quality of the operationally derived, reduced-dynamic (RD) orbit solutions limits the capabilities of the synthetic aperture radar (SAR) validation, an effort is made to improve the estimated orbit solutions. This paper discusses the benefits of refined dynamical models on orbit accuracy as well as estimated empirical accelerations and compares different dynamic models in a RD orbit determination. Modeling aspects discussed in the paper include the use of a macro-model for drag and radiation pressure computation, the use of high-quality atmospheric density and wind models as well as the benefit of high-fidelity gravity and ocean tide models. The Sun-synchronous dusk-dawn orbit geometry of TerraSAR-X results in a particular high correlation of solar radiation pressure modeling and estimated normal-direction positions. Furthermore, this mission offers a unique suite of independent sensors for orbit validation. Several parameters serve as quality indicators for the estimated satellite orbit solutions. These include the magnitude of the estimated empirical accelerations, satellite laser ranging (SLR) residuals, and SLR-based orbit corrections. Moreover, the radargrammetric distance measurements of the SAR instrument are selected for assessing the quality of the orbit solutions and compared to the SLR analysis. The use of high-fidelity satellite dynamics models in the RD approach is shown to clearly improve the orbit quality compared to simplified models and loosely constrained empirical accelerations. The estimated empirical accelerations are substantially reduced by 30% in tangential direction when working with the refined dynamical models. Likewise the SLR residuals are reduced from -3 ± 17 to 2 ± 13 mm, and the SLR-derived normal-direction position corrections are reduced from 15 to 6 mm, obtained from the 2012-2014 period. The radar range bias is reduced from -10.3 to -6.1 mm with the updated orbit solutions, which coincides with the reduced standard deviation of the SLR residuals. The improvements are mainly driven by the satellite macro-model for the purpose of solar radiation pressure modeling, improved atmospheric density models, and the use of state-of-the-art gravity field models.
Multisample cross-validation of a model of childhood posttraumatic stress disorder symptomatology.
Anthony, Jason L; Lonigan, Christopher J; Vernberg, Eric M; Greca, Annette M La; Silverman, Wendy K; Prinstein, Mitchell J
2005-12-01
This study is the latest advancement of our research aimed at best characterizing children's posttraumatic stress reactions. In a previous study, we compared existing nosologic and empirical models of PTSD dimensionality and determined the superior model was a hierarchical one with three symptom clusters (Intrusion/Active Avoidance, Numbing/Passive Avoidance, and Arousal; Anthony, Lonigan, & Hecht, 1999). In this study, we cross-validate this model in two populations. Participants were 396 fifth graders who were exposed to either Hurricane Andrew or Hurricane Hugo. Multisample confirmatory factor analysis demonstrated the model's factorial invariance across populations who experienced traumatic events that differed in severity. These results show the model's robustness to characterize children's posttraumatic stress reactions. Implications for diagnosis, classification criteria, and an empirically supported theory of PTSD are discussed.
NASA Astrophysics Data System (ADS)
Gholizadeh, H.; Robeson, S. M.
2015-12-01
Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials, and the parameters of these polynomials (i.e., coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, and so are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the standard empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely valid.
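A minimal sketch of the PLS alternative discussed above is given below. The band-ratio data are simulated stand-ins for NOMAD-style observations, and the polynomial terms are constructed to be collinear, which is exactly the situation partial least squares is meant to handle; the geographically weighted regression step is not shown.

```python
# Hedged sketch of replacing the fourth-order band-ratio polynomial with a
# partial least squares (PLS) fit, which tolerates the collinearity among the
# polynomial terms. Reflectance ratios and chlorophyll values are simulated.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n = 500
log_ratio = rng.uniform(-0.4, 0.6, n)                 # log10 of blue-green band ratio
# Polynomial terms are highly correlated by construction
X = np.column_stack([log_ratio ** k for k in range(1, 5)])
log_chl = 0.3 - 2.5 * log_ratio + 1.2 * log_ratio ** 2 + rng.normal(0, 0.1, n)

pls = PLSRegression(n_components=2).fit(X, log_chl)
print("R^2 on training data:", pls.score(X, log_chl))
```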
Empirical flow parameters - a tool for hydraulic model validity assessment : [summary].
DOT National Transportation Integrated Search
2013-10-01
Hydraulic modeling assembles models based on generalizations of parameter values from textbooks, professional literature, computer program documentation, and engineering experience. Actual measurements adjacent to the model location are seldom availa...
"La Clave Profesional": Validation of a Vocational Guidance Instrument
ERIC Educational Resources Information Center
Mudarra, Maria J.; Lázaro Martínez, Ángel
2014-01-01
Introduction: The current study demonstrates the empirical and cultural validity of "La Clave Profesional" (Spanish adaptation of the Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…
Entrepreneurial propensity in health care: models and propositions for empirical research.
Asoh, Derek A; Rivers, Patrick A; McCleary, Karl J; Sarvela, Paul
2005-01-01
We maintain that entrepreneurial propensity is a focal construct in entrepreneurial research. We synthesize the literature to develop models depicting the antecedents and consequents of entrepreneurial propensity in a network of other constructs and variables of interest in the health care industry. We advance propositions for empirical investigation and validation of competing research models associated with entrepreneurial propensity. We conclude with a discussion of directions of future research.
NASA Technical Reports Server (NTRS)
Bond, Barbara J.; Peterson, David L.
1999-01-01
This project was a collaborative effort by researchers at ARC, OSU and the University of Arizona. The goal was to use a dataset obtained from a previous study to "empirically validate a new canopy radiative-transfer model (SART) which incorporates a recently-developed leaf-level model (LEAFMOD)". The document includes a short research summary.
Modeling and validating the cost and clinical pathway of colorectal cancer.
Joranger, Paal; Nesbakken, Arild; Hoff, Geir; Sorbye, Halfdan; Oshaug, Arne; Aas, Eline
2015-02-01
Cancer is a major cause of morbidity and mortality, and colorectal cancer (CRC) is the third most common cancer in the world. The estimated costs of CRC treatment vary considerably, and if CRC costs in a model are based on empirically estimated total costs of stage I, II, III, or IV treatments, then they lack some flexibility to capture future changes in CRC treatment. The purpose was 1) to describe how to model CRC costs and survival and 2) to validate the model in a transparent and reproducible way. We applied a semi-Markov model with 70 health states and tracked age and time since specific health states (using tunnels and 3-dimensional data matrix). The model parameters are based on an observational study at Oslo University Hospital (2049 CRC patients), the National Patient Register, literature, and expert opinion. The target population was patients diagnosed with CRC. The model followed the patients diagnosed with CRC from the age of 70 until death or 100 years. The study focused on the perspective of health care payers. The model was validated for face validity, internal and external validity, and cross-validity. The validation showed a satisfactory match with other models and empirical estimates for both cost and survival time, without any preceding calibration of the model. The model can be used to 1) address a range of CRC-related themes (general model) like survival and evaluation of the cost of treatment and prevention measures; 2) make predictions from intermediate to final outcomes; 3) estimate changes in resource use and costs due to changing guidelines; and 4) adjust for future changes in treatment and trends over time. The model is adaptable to other populations. © The Author(s) 2014.
A Perspective on Computational Human Performance Models as Design Tools
NASA Technical Reports Server (NTRS)
Jones, Patricia M.
2010-01-01
The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.
Empirically Exploring Higher Education Cultures of Assessment
ERIC Educational Resources Information Center
Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.
2016-01-01
Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…
López, Diego M; Blobel, Bernd; Gonzalez, Carolina
2010-01-01
Requirement analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, the paper demonstrates that using HIS-DF and HL7 information models, semantic quality of HIS architecture can be improved, compared to architectures developed using traditional RUP process. Semantic quality of the architecture has been measured in terms of model's completeness and validity metrics. The experimental results demonstrated an increased completeness of 14.38% and an increased validity of 16.63% when using the HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development presumes an increased quality of final HIS systems, which supposes an indirect impact on patient care.
Flood loss modelling with FLF-IT: a new flood loss function for Italian residential structures
NASA Astrophysics Data System (ADS)
Hasanzadeh Nafari, Roozbeh; Amadio, Mattia; Ngo, Tuan; Mysiak, Jaroslav
2017-07-01
The damage triggered by different flood events costs the Italian economy millions of euros each year. This cost is likely to increase in the future due to climate variability and economic development. In order to avoid or reduce such significant financial losses, risk management requires tools which can provide a reliable estimate of potential flood impacts across the country. Flood loss functions are an internationally accepted method for estimating physical flood damage in urban areas. In this study, we derived a new flood loss function for Italian residential structures (FLF-IT), on the basis of empirical damage data collected from a recent flood event in the region of Emilia-Romagna. The function was developed based on a new Australian approach (FLFA), which represents the confidence limits that exist around the parameterized functional depth-damage relationship. After model calibration, the performance of the model was validated for the prediction of loss ratios and absolute damage values. It was also contrasted with an uncalibrated relative model with frequent usage in Europe. In this regard, a three-fold cross-validation procedure was carried out over the empirical sample to measure the range of uncertainty from the actual damage data. The predictive capability has also been studied for some sub-classes of water depth. The validation procedure shows that the newly derived function performs well (no bias and only 10 % mean absolute error), especially when the water depth is high. Results of these validation tests illustrate the importance of model calibration. The advantages of the FLF-IT model over other Italian models include calibration with empirical data, consideration of the epistemic uncertainty of data, and the ability to change parameters based on building practices across Italy.
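The cross-validation of a depth-damage function can be sketched as follows. The snippet uses a generic saturating loss-ratio form and synthetic (depth, loss ratio) pairs, not the FLF-IT parameterization or the Emilia-Romagna survey data, to show how a three-fold out-of-sample mean absolute error would be computed.

```python
# Hedged sketch of three-fold cross-validation of a depth-damage function: fit
# a saturating loss-ratio curve to (depth, loss ratio) pairs and report the
# out-of-fold mean absolute error. The damage records are synthetic.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.model_selection import KFold
from sklearn.metrics import mean_absolute_error

def loss_ratio(depth, a, b):
    # Simple saturating stage-damage form; the FLF-IT functional form differs
    return 1.0 - np.exp(-a * depth ** b)

rng = np.random.default_rng(7)
depth = rng.uniform(0.1, 3.0, 120)                            # water depth (m)
observed = np.clip(loss_ratio(depth, 0.5, 1.2) + rng.normal(0, 0.08, 120), 0, 1)

errors = []
for train, test in KFold(n_splits=3, shuffle=True, random_state=0).split(depth):
    params, _ = curve_fit(loss_ratio, depth[train], observed[train], p0=[0.5, 1.0])
    errors.append(mean_absolute_error(observed[test], loss_ratio(depth[test], *params)))

print("3-fold mean absolute error:", np.mean(errors))
```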
Modeling the risk of water pollution by pesticides from imbalanced data.
Trajanov, Aneta; Kuzmanovski, Vladimir; Real, Benoit; Perreau, Jonathan Marks; Džeroski, Sašo; Debeljak, Marko
2018-04-30
The pollution of ground and surface waters with pesticides is a serious ecological issue that requires adequate treatment. Most of the existing water pollution models are mechanistic mathematical models. While they have made a significant contribution to understanding the transfer processes, they face the problem of validation because of their complexity, the user subjectivity in their parameterization, and the lack of empirical data for validation. In addition, the data describing water pollution with pesticides are, in most cases, very imbalanced. This is due to strict regulations for pesticide applications, which lead to only a few pollution events. In this study, we propose the use of data mining to build models for assessing the risk of water pollution by pesticides in field-drained outflow water. Unlike the mechanistic models, the models generated by data mining are based on easily obtainable empirical data, while the parameterization of the models is not influenced by the subjectivity of ecological modelers. We used empirical data from field trials at the La Jaillière experimental site in France and applied the random forests algorithm to build predictive models that predict "risky" and "not-risky" pesticide application events. To address the problems of the imbalanced classes in the data, cost-sensitive learning and different measures of predictive performance were used. Despite the high imbalance between risky and not-risky application events, we managed to build predictive models that make reliable predictions. The proposed modeling approach can be easily applied to other ecological modeling problems where we encounter empirical data with highly imbalanced classes.
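A hedged sketch of cost-sensitive learning for such imbalanced "risky / not-risky" data follows. The three predictor variables and the rare-event rule are invented proxies for the field-trial variables; the point is only to show class weighting in a random forest and the use of imbalance-aware metrics.

```python
# Hedged sketch of cost-sensitive random-forest classification for a rare
# "risky application" class: weight classes inversely to their frequency and
# report metrics suited to imbalance. The features are made-up proxies.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

rng = np.random.default_rng(5)
n = 2000
X = np.column_stack([
    rng.uniform(0, 50, n),     # rainfall after application (mm), hypothetical
    rng.uniform(0, 30, n),     # days to first drainage event, hypothetical
    rng.uniform(0.1, 2.0, n),  # application rate (kg/ha), hypothetical
])
# Rare positive class (~5%): risk mostly when rain is high and drainage is quick
y = ((X[:, 0] > 40) & (X[:, 1] < 5) | (rng.random(n) < 0.02)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
print(confusion_matrix(y_te, pred))
```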
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background: Despite TMS's wide adoption, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled. Empirical validation of such models is limited and subject to several limitations. Methods: We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: a simple circular coil model; a coil with in-plane spiral winding turns; and finally one with stacked spiral winding turns. We assess the electric fields induced by all 3 coil models in the motor cortex using a computer FEM model. Biot-Savart models of discretized wires were used to approximate the 3 coil models of increasing complexity. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results: Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field minimally and does not influence the predicted activation. Conclusion: TMS coil models used in FEM simulations should include in-plane coil geometry in order to make reliable predictions of the incident field. Modeling the in-plane coil geometry is important to correctly simulate the induced electric field and to make reliable predictions of neuronal activation. PMID:28640923
2011-01-01
Background: Simulation models of influenza spread play an important role in pandemic preparedness. However, as the world has not faced a severe pandemic for decades, except the rather mild H1N1 one in 2009, pandemic influenza models are inherently hypothetical and validation is, thus, difficult. We aim at reconstructing a recent seasonal influenza epidemic that occurred in Switzerland and deem this to be a promising validation strategy for models of influenza spread. Methods: We present a spatially explicit, individual-based simulation model of influenza spread. The simulation model is based upon (i) simulated human travel data, (ii) data on human contact patterns, and (iii) empirical knowledge on the epidemiology of influenza. For model validation we compare the simulation outcomes with empirical knowledge regarding (i) the shape of the epidemic curve, overall infection rate and reproduction number, (ii) age-dependent infection rates and time of infection, and (iii) spatial patterns. Results: The simulation model is capable of reproducing the shape of the 2003/2004 H3N2 epidemic curve of Switzerland and generates an overall infection rate (14.9 percent) and reproduction numbers (between 1.2 and 1.3), which are realistic for seasonal influenza epidemics. Age and spatial patterns observed in empirical data are also reflected by the model: the highest infection rates are in children between 5 and 14, and the disease spreads along the main transport axes from west to east. Conclusions: We show that finding evidence for the validity of simulation models of influenza spread by challenging them with seasonal influenza outbreak data is possible and promising. Simulation models for pandemic spread gain more credibility if they are able to reproduce seasonal influenza outbreaks. For more robust modelling of seasonal influenza, serological data complementing sentinel information would be beneficial. PMID:21554680
Evaluating the intersection of a regional wildlife connectivity network with highways
Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth
2013-01-01
Reliable predictions of regional-scale population connectivity are needed to prioritize conservation actions. However, there have been few examples of regional connectivity models that are empirically derived and validated. The central goals of this paper were to (1) evaluate the effectiveness of factorial least cost path corridor mapping on an empirical...
Fairman, Kathleen A; Motheral, Brenda R
2003-01-01
Pharmacoeconomic models of Helicobacter (H) pylori eradication have been frequently cited but never validated. The objective was to examine retrospectively whether H pylori pharmacoeconomic models direct decision makers to cost-effective therapeutic choices. We first replicated and then validated 2 models, replacing model assumptions with empirical data from a multipayer claims database. Database subjects were 435 commercially insured U.S. patients treated with bismuth-metronidazole-tetracycline (BMT), proton pump inhibitor (PPI)-clarithromycin, or PPI-amoxicillin. Patients met >1 clinical requirement (ulcer disease, gastritis/duodenitis, stomach function disorder, abdominal pain, H pylori infection, endoscopy, or H pylori assay). Sensitivity analyses included only patients with ulcer diagnosis or gastrointestinal specialist care. Outcome measures were: (1) rates of eradication retreatment; (2) use of office visits, hospitalizations, endoscopies, and antisecretory medication; and (3) cost per effectively treated (nonretreated) patient. Model results overstated the cost-effectiveness of PPI-clarithromycin and underestimated the cost-effectiveness of BMT. Prior to empirical adjustment, costs per effectively treated patient were 1,001 US dollars, 980 US dollars, and 1,730 US dollars for BMT, PPI-clarithromycin, and PPI-amoxicillin, respectively. Estimates after adjustment were US dollars for BMT, 1,118 US dollars for PPI-clarithromycin, and 1,131 US dollars for PPI-amoxicillin. Key model assumptions that proved retrospectively incorrect were largely unsupported by either empirical evidence or systematic assessment of expert opinion. Organizations with access to medical and pharmacy claims databases should test key assumptions of influential models to determine their validity. Journal peer-review processes should pay particular attention to the basis of model assumptions.
Sojda, R.S.
2007-01-01
Decision support systems are often not empirically evaluated, especially the underlying modelling components. This can be attributed to such systems necessarily being designed to handle complex and poorly structured problems and decision making. Nonetheless, evaluation is critical and should be focused on empirical testing whenever possible. Verification and validation, in combination, comprise such evaluation. Verification is ensuring that the system is internally complete, coherent, and logical from a modelling and programming perspective. Validation is examining whether the system is realistic and useful to the user or decision maker, and should answer the question: “Was the system successful at addressing its intended purpose?” A rich literature exists on verification and validation of expert systems and other artificial intelligence methods; however, no single evaluation methodology has emerged as preeminent. At least five approaches to validation are feasible. First, under some conditions, decision support system performance can be tested against a preselected gold standard. Second, real-time and historic data sets can be used for comparison with simulated output. Third, panels of experts can be judiciously used, but often are not an option in some ecological domains. Fourth, sensitivity analysis of system outputs in relation to inputs can be informative. Fifth, when validation of a complete system is impossible, examining major components can be substituted, recognizing the potential pitfalls. I provide an example of evaluation of a decision support system for trumpeter swan (Cygnus buccinator) management that I developed using interacting intelligent agents, expert systems, and a queuing system. Predicted swan distributions over a 13-year period were assessed against observed numbers. Population survey numbers and banding (ringing) studies may provide long term data useful in empirical evaluation of decision support.
Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and such validation would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the purposes of the study. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.
Sebok, Angelia; Wickens, Christopher D
2017-03-01
The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.
Using Experiential Methods To Teach about Measurement Validity.
ERIC Educational Resources Information Center
Alderfer, Clayton P.
2003-01-01
Indirectly, instructor behavior provided two models for using a multitrait-multimethod matrix. Students who formulated their own concept, created empirical indicators, and assessed convergent and discriminant validity had better results than those who, influenced by classroom authority dynamics, followed a poorly formulated concept with a…
Zuthi, M F R; Ngo, H H; Guo, W S; Nghiem, L D; Hai, F I; Xia, S Q; Zhang, Z Q; Li, J X
2015-08-01
This study investigates the influence of key biomass parameters on specific oxygen uptake rate (SOUR) in a sponge submerged membrane bioreactor (SSMBR) to develop mathematical models of biomass viability. Extra-cellular polymeric substances (EPS) were considered as a lumped parameter of bound EPS (bEPS) and soluble microbial products (SMP). Statistical analyses of experimental results indicate that the bEPS, SMP, mixed liquor suspended solids and volatile suspended solids (MLSS and MLVSS) have functional relationships with SOUR and their relative influence on SOUR was in the order of EPS>bEPS>SMP>MLVSS/MLSS. Based on correlations among biomass parameters and SOUR, two independent empirical models of biomass viability were developed. The models were validated using results of the SSMBR. However, further validation of the models for different operating conditions is suggested. Copyright © 2015 Elsevier Ltd. All rights reserved.
Extended Empirical Roadside Shadowing model from ACTS mobile measurements
NASA Technical Reports Server (NTRS)
Goldhirsh, Julius; Vogel, Wolfhard
1995-01-01
Employing multiple data bases derived from land-mobile satellite measurements using the Advanced Communications Technology Satellite (ACTS) at 20 GHz, MARECS B-2 at 1.5 GHz, and helicopter measurements at 870 MHz and 1.5 GHz, the Empirical Road Side Shadowing Model (ERS) has been extended. The new model (Extended Empirical Roadside Shadowing Model, EERS) may now be employed at frequencies from UHF to 20 GHz, at elevation angles from 7 to 60 deg and at percentages from 1 to 80 percent (0 dB fade). The EERS distributions are validated against measured ones and fade deviations associated with the model are assessed. A model is also presented for estimating the effects of foliage (or non-foliage) on 20 GHz distributions, given distributions from deciduous trees devoid of leaves (or in full foliage).
Zeng, Liang; Proctor, Robert W; Salvendy, Gavriel
2011-06-01
This research is intended to empirically validate a general model of creative product and service development proposed in the literature. A current research gap inspired construction of a conceptual model to capture fundamental phases and pertinent facilitating metacognitive strategies in the creative design process. The model also depicts the mechanism by which design creativity affects consumer behavior. The validity and assets of this model have not yet been investigated. Four laboratory studies were conducted to demonstrate the value of the proposed cognitive phases and associated metacognitive strategies in the conceptual model. Realistic product and service design problems were used in creativity assessment to ensure ecological validity. Design creativity was enhanced by explicit problem analysis, whereby one formulates problems from different perspectives and at different levels of abstraction. Remote association in conceptual combination spawned more design creativity than did near association. Abstraction led to greater creativity in conducting conceptual expansion than did specificity, which induced mental fixation. Domain-specific knowledge and experience enhanced design creativity, indicating that design can be of a domain-specific nature. Design creativity added integrated value to products and services and positively influenced customer behavior. The validity and value of the proposed conceptual model is supported by empirical findings. The conceptual model of creative design could underpin future theory development. Propositions advanced in this article should provide insights and approaches to facilitate organizations pursuing product and service creativity to gain competitive advantage.
Statistical validity of using ratio variables in human kinetics research.
Liu, Yuanlong; Schutz, Robert W
2003-09-01
The purposes of this study were to investigate the validity of the simple ratio and three alternative deflation models and examine how the variation of the numerator and denominator variables affects the reliability of a ratio variable. A simple ratio and three alternative deflation models were fitted to four empirical data sets, and common criteria were applied to determine the best model for deflation. Intraclass correlation was used to examine the component effect on the reliability of a ratio variable. The results indicate that the validity of a deflation model depends on the statistical characteristics of the particular component variables used, and an optimal deflation model for all ratio variables may not exist. Therefore, it is recommended that different models be fitted to each empirical data set to determine the best deflation model. It was found that the reliability of a simple ratio is affected by the coefficients of variation and the within- and between-trial correlations between the numerator and denominator variables. It was also recommended that researchers compute the reliability of the derived ratio scores and not assume that strong reliabilities in the numerator and denominator measures automatically lead to high reliability in the ratio measures.
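A toy simulation can make the point about ratio reliability concrete: the test-retest correlation of a simple ratio is easily computed from two simulated trials. The sketch below uses arbitrary means, coefficients of variation, and sample size; it illustrates the computation, not the study's data or deflation models.

    # Simulate two trials of a numerator and denominator measure and compute
    # the test-retest correlation of the simple ratio score.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500                               # hypothetical sample size
    true_num = rng.normal(100, 15, n)     # stable person-level numerator
    true_den = rng.normal(70, 7, n)       # stable person-level denominator

    def trial(noise_cv):
        # Add multiplicative measurement noise with the given coefficient of variation.
        num = true_num * (1 + rng.normal(0, noise_cv, n))
        den = true_den * (1 + rng.normal(0, noise_cv, n))
        return num / den

    r = np.corrcoef(trial(0.05), trial(0.05))[0, 1]
    print("test-retest correlation of the ratio:", round(r, 3))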
Psychosocial interventions in bipolar disorder: a review.
Lolich, María; Vázquez, Gustavo H; Alvarez, Lina M; Tamayo, Jorge M
2012-01-01
Multiple psychosocial interventions for bipolar disorder have been proposed in recent years. Therefore, we consider that a critical review of empirically validated models would be useful. A review of the literature was conducted in Medline/PubMed for articles published during 2000-2010 that respond to the combination of "bipolar disorder" with the following key words: "psychosocial intervention", "psychoeducational intervention" and "psychotherapy". Cognitive-behavioral, psychoeducational, systematic care models, interpersonal and family therapy interventions were found to be empirically validated. All of them reported significant improvements in therapeutic adherence and in the patients' functionality. Although there are currently several validated psychosocial interventions for treating bipolar disorder, their efficacy needs to be specified in relation to more precise variables such as clinical type, comorbid disorders, stages or duration of the disease. Taking into account these clinical features would enable a proper selection of the most adequate intervention according to the patient's specific characteristics.
DOT National Transportation Integrated Search
2010-08-01
This study was intended to recommend future directions for the development of TxDOT's Mechanistic-Empirical (TexME) design system. For stress predictions, a multi-layer linear elastic system was evaluated and its validity was verified by compar...
Development and Validation of the Meaning of Work Inventory among French Workers
ERIC Educational Resources Information Center
Arnoux-Nicolas, Caroline; Sovet, Laurent; Lhotellier, Lin; Bernaud, Jean-Luc
2017-01-01
The purpose of this study was to validate a psychometric instrument among French workers for assessing the meaning of work. Following an empirical framework, a two-step procedure consisted of exploring and then validating the scale among distinctive samples. The consequent Meaning of Work Inventory is a 15-item scale based on a four-factor model,…
Electrostatics of cysteine residues in proteins: Parameterization and validation of a simple model
Salsbury, Freddie R.; Poole, Leslie B.; Fetrow, Jacquelyn S.
2013-01-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. PMID:22777874
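Semi-macroscopic methods such as MEAD ultimately turn an electrostatic free-energy difference into a pKa shift through the standard relation ΔpKa = ΔΔG_elec / (2.303 RT). The snippet below is a minimal numerical illustration of that conversion; the energy value is arbitrary and the calculation is generic, not MEAD's internal workflow.

    # Convert an electrostatic free-energy perturbation into a pKa shift.
    R = 1.987e-3      # gas constant, kcal / (mol K)
    T = 298.15        # temperature, K
    ddG_elec = 2.0    # kcal/mol, hypothetical perturbation of the deprotonation energy

    delta_pka = ddG_elec / (2.303 * R * T)
    print("pKa shift:", round(delta_pka, 2))   # about 1.5 pKa units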
The Development of an Empirical Model of Mental Health Stigma in Adolescents.
Silke, Charlotte; Swords, Lorraine; Heary, Caroline
2016-08-30
Research on mental health stigma in adolescents is hampered by a lack of empirical investigation into the theoretical conceptualisation of stigma, as well as by the lack of validated stigma measures. This research aims to develop a model of public stigma toward depression in adolescents and to use this model to empirically examine whether stigma is composed of three separate dimensions (Stereotypes, Prejudice and Discrimination), as is theoretically proposed. Adolescents completed self-report measures assessing their stigmatising responses toward a fictional peer with depression. An exploratory factor analysis (EFA; N=332) was carried out on 58 items, which proposed to measure aspects of stigma. A confirmatory factor analysis (CFA; N=236) was then carried out to evaluate the validity of the observed stigma model. Finally, higher-order CFAs were conducted in order to assess whether the observed model supported the tripartite conceptualisation of stigma. The EFA returned a seven-factor model of stigma. These factors were designated as Dangerousness, Warmth & Competency, Responsibility, Negative Attributes, Prejudice, Classroom Discrimination and Friendship Discrimination. The CFA supported the goodness-of-fit of this seven-factor model. The higher-order CFAs indicated that these seven factors represented the latent constructs of Stereotypes, Prejudice and Discrimination, which in turn represented Stigma. Overall, results support the tripartite conceptualisation of stigma and suggest that measurements of mental health stigma in adolescents should include assessments of all three dimensions. These results also highlight the importance of establishing valid and reliable measures for assessing stigma in adolescents. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Alladin, Assen; Sabatini, Linda; Amundson, Jon K
2007-04-01
This paper briefly surveys the trend of and controversy surrounding empirical validation in psychotherapy. Empirical validation of hypnotherapy has paralleled the practice of validation in psychotherapy and the professionalization of clinical psychology, in general. This evolution in determining what counts as evidence for bona fide clinical practice has gone from theory-driven clinical approaches in the 1960s and 1970s through critical attempts at categorization of empirically supported therapies in the 1990s on to the concept of evidence-based practice in 2006. Implications of this progression in professional psychology are discussed in the light of hypnosis's current quest for validation and empirical accreditation.
Fire risk in San Diego County, California: A weighted Bayesian model approach
Kolden, Crystal A.; Weigel, Timothy J.
2007-01-01
Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
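For readers unfamiliar with the weights-of-evidence formalism used above, the weights for a single binary evidence layer (for example, proximity to roads) follow directly from counts of ignition and non-ignition cells inside and outside the layer. The counts below are hypothetical; only the standard W+, W- and contrast formulas are shown.

    # Bayesian weights of evidence for one binary predictor layer.
    import math

    n_fire_in, n_fire_out = 120, 80          # ignition cells inside / outside the layer
    n_nofire_in, n_nofire_out = 4000, 16000  # non-ignition cells inside / outside

    # W+ = ln[P(layer | fire) / P(layer | no fire)]
    w_plus = math.log((n_fire_in / (n_fire_in + n_fire_out)) /
                      (n_nofire_in / (n_nofire_in + n_nofire_out)))
    # W- = ln[P(not layer | fire) / P(not layer | no fire)]
    w_minus = math.log((n_fire_out / (n_fire_in + n_fire_out)) /
                       (n_nofire_out / (n_nofire_in + n_nofire_out)))
    contrast = w_plus - w_minus               # overall strength of the evidence layer
    print(w_plus, w_minus, contrast)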
ERIC Educational Resources Information Center
Gwaltney, Kevin Dale
2012-01-01
This effort: 1) establishes an autonomy definition uniquely tailored for teaching, 2) validates a nationally generalizable teacher autonomy construct, 3) demonstrates that the model describes and explains the autonomy levels of particular teacher groups, and 4) verifies the construct can represent teacher autonomy in other empirical models. The…
VERIFICATION AND VALIDATION OF THE SPARC MODEL
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...
DEVELOPMENT OF THE VIRTUAL BEACH MODEL, PHASE 1: AN EMPIRICAL MODEL
With increasing attention focused on the use of multiple linear regression (MLR) modeling of beach fecal bacteria concentration, the validity of the entire statistical process should be carefully evaluated to assure satisfactory predictions. This work aims to identify pitfalls an...
NASA Astrophysics Data System (ADS)
Song, S. G.
2016-12-01
Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full 3-component waveforms can be produced and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts toward this goal, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). A critical element of simulation-based ground motion prediction is preparing a plausible range of scenario rupture models. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data on the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest version of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed.
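The omega-square spectral decay mentioned above refers to the standard source-spectrum form in which displacement amplitude falls off as frequency squared beyond the corner frequency. The sketch below evaluates that standard form; the spectral level and corner frequency are arbitrary placeholders, not values from the study.

    # Standard omega-square source spectrum: U(f) = Omega0 / (1 + (f/fc)^2).
    import numpy as np

    omega0 = 1.0      # low-frequency spectral level (arbitrary units)
    fc = 0.2          # corner frequency in Hz (hypothetical)
    f = np.logspace(-2, 1, 100)
    u = omega0 / (1.0 + (f / fc) ** 2)
    # Beyond fc the spectrum decays as f^-2, the "omega-square" behaviour.
    print(u[:5])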
IMPACT: a generic tool for modelling and simulating public health policy.
Ainsworth, J D; Carruthers, E; Couch, P; Green, N; O'Flaherty, M; Sperrin, M; Williams, R; Asghar, Z; Capewell, S; Buchan, I E
2011-01-01
Populations are under-served by local health policies and management of resources. This partly reflects a lack of realistically complex models to enable appraisal of a wide range of potential options. Rising computing power coupled with advances in machine learning and healthcare information now enables such models to be constructed and executed. However, such models are not generally accessible to public health practitioners who often lack the requisite technical knowledge or skills. To design and develop a system for creating, executing and analysing the results of simulated public health and healthcare policy interventions, in ways that are accessible and usable by modellers and policy-makers. The system requirements were captured and analysed in parallel with the statistical method development for the simulation engine. From the resulting software requirement specification the system architecture was designed, implemented and tested. A model for Coronary Heart Disease (CHD) was created and validated against empirical data. The system was successfully used to create and validate the CHD model. The initial validation results show concordance between the simulation results and the empirical data. We have demonstrated the ability to connect health policy-modellers and policy-makers in a unified system, thereby making population health models easier to share, maintain, reuse and deploy.
Baczyńska, Anna K.; Rowiński, Tomasz; Cybis, Natalia
2016-01-01
Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach’s alpha, ranged from 0.60 to 0.83 for six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit quite well to the data, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed. PMID:27014111
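The scale reliabilities reported above use the standard Cronbach's alpha formula, which can be reproduced from item-level data as sketched below. The item scores here are randomly generated placeholders, included only to make the function self-contained.

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array of respondents x items."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    scores = np.random.default_rng(1).integers(1, 6, size=(200, 5))  # hypothetical 5-item scale
    print(cronbach_alpha(scores))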
Soil Moisture Estimate Under Forest Using a Semi-Empirical Model at P-Band
NASA Technical Reports Server (NTRS)
Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak
2013-01-01
Here we present the results of a semi-empirical inversion model for soil moisture retrieval using the three backscattering coefficients σHH, σVV and σHV. In this paper we focus on the soil moisture estimate and use the biomass as an ancillary parameter, estimated automatically by the algorithm and used as a validation parameter. We first recall the model's analytical formulation, then show some results obtained with real SAR data and compare them to ground estimates.
E. Gregory McPherson; Paula J. Peper
2012-01-01
This paper describes three long-term tree growth studies conducted to evaluate tree performance because repeated measurements of the same trees produce critical data for growth model calibration and validation. Several empirical and process-based approaches to modeling tree growth are reviewed. Modeling is more advanced in the fields of forestry and...
Bobovská, Adela; Tvaroška, Igor; Kóňa, Juraj
2016-05-01
Human Golgi α-mannosidase II (GMII), a zinc ion co-factor dependent glycoside hydrolase (E.C.3.2.1.114), is a pharmaceutical target for the design of inhibitors with anti-cancer activity. The discovery of an effective inhibitor is complicated by the fact that all known potent inhibitors of GMII are involved in unwanted co-inhibition with lysosomal α-mannosidase (LMan, E.C.3.2.1.24), a close relative of GMII. Routine empirical QSAR models for both GMII and LMan did not achieve the required accuracy. Therefore, we have developed a fast computational protocol to build predictive models combining interaction energy descriptors from an empirical docking scoring function (Glide-Schrödinger), the Linear Interaction Energy (LIE) method, and quantum mechanical density functional theory (QM-DFT) calculations. The QSAR models were built and validated with a library of structurally diverse GMII and LMan inhibitors and non-active compounds. A critical role of QM-DFT descriptors for the more accurate prediction abilities of the models is demonstrated. The predictive ability of the models was significantly improved when going from the empirical docking scoring function to mixed empirical-QM-DFT QSAR models (Q² = 0.78-0.86 when cross-validation procedures were carried out; and R² = 0.81-0.83 for a testing set). The average error for the predicted ΔGbind decreased to 0.8-1.1 kcal mol⁻¹. Also, 76-80% of non-active compounds were successfully filtered out from GMII and LMan inhibitors. The QSAR models with the fragmented QM-DFT descriptors may find useful application in structure-based drug design where pure empirical and force field methods have reached their limits and where quantum mechanical effects are critical for ligand-receptor interactions. The optimized models will apply in lead optimization processes for GMII drug development. Copyright © 2016 Elsevier Inc. All rights reserved.
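The cross-validated Q² and external R² quoted above are standard QSAR statistics. The sketch below shows one common way of computing them for any regression model; the descriptors, binding energies, and the ridge regressor are placeholders, not the authors' mixed empirical-QM-DFT model.

    # Cross-validated Q^2 and external test-set R^2 for a QSAR-style regression model.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))                            # placeholder descriptors
    y = X @ rng.normal(size=10) + rng.normal(0, 0.5, 60)     # placeholder delta-G values

    model = Ridge(alpha=1.0)
    y_cv = cross_val_predict(model, X[:45], y[:45], cv=5)    # training-set cross-validation
    q2 = r2_score(y[:45], y_cv)                              # cross-validated Q^2
    r2_test = r2_score(y[45:], model.fit(X[:45], y[:45]).predict(X[45:]))
    print("Q2:", round(q2, 2), "test R2:", round(r2_test, 2))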
Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.
Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J
2013-04-01
We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive. Copyright © 2013 Wiley Periodicals, Inc.
AGENT-BASED MODELS IN EMPIRICAL SOCIAL RESEARCH*
Bruch, Elizabeth; Atwell, Jon
2014-01-01
Agent-based modeling has become increasingly popular in recent years, but there is still no codified set of recommendations or practices for how to use these models within a program of empirical research. This article provides ideas and practical guidelines drawn from sociology, biology, computer science, epidemiology, and statistics. We first discuss the motivations for using agent-based models in both basic science and policy-oriented social research. Next, we provide an overview of methods and strategies for incorporating data on behavior and populations into agent-based models, and review techniques for validating and testing the sensitivity of agent-based models. We close with suggested directions for future research. PMID:25983351
Validation of a new plasmapause model derived from CHAMP field-aligned current signatures
NASA Astrophysics Data System (ADS)
Heilig, Balázs; Darrouzet, Fabien; Vellante, Massimo; Lichtenberger, János; Lühr, Hermann
2014-05-01
Recently, a new model for the plasmapause location in the equatorial plane was introduced based on magnetic field observations made by the CHAMP satellite in the topside ionosphere (Heilig and Lühr, 2013). The related signals are medium-scale field-aligned currents (MSFAC) with scale sizes of some 10 km. An empirical model for the MSFAC boundary was developed as a function of Kp and MLT. The MSFAC model was then compared to in situ plasmapause observations from IMAGE RPI. By accounting for the systematic displacement found in this comparison, and for the diurnal variation and Kp-dependence of the residuals, an empirical model of the plasmapause location based on MSFAC measurements from CHAMP was constructed. As a first step toward validation of the new plasmapause model, we used in situ (Van Allen Probes/EMFISIS, Cluster/WHISPER) and ground-based (EMMA) plasma density observations. Preliminary results show generally good agreement between the model and observations. Some observed differences stem from the different definitions of the plasmapause. A more detailed validation of the method can take place as soon as SWARM and VAP data become available. Heilig, B., and H. Lühr (2013) New plasmapause model derived from CHAMP field-aligned current signatures, Ann. Geophys., 31, 529-539, doi:10.5194/angeo-31-529-2013
Validating an operational physical method to compute surface radiation from geostationary satellites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Manajit; Dhere, Neelkanth G.; Wohlgemuth, John H.
We developed models to compute global horizontal irradiance (GHI) and direct normal irradiance (DNI) over the last three decades. These models can be classified as empirical or physical based on the approach. Empirical models relate ground-based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the physics behind the radiation received at the satellite and create retrievals to estimate surface radiation. Furthermore, while empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is a physical model that computes DNI and GHI using the visible and infrared channel measurements from a weather satellite. GSIP uses a two-stage scheme that first retrieves cloud properties and uses those properties in a radiative transfer model to calculate GHI and DNI. Developed for polar orbiting satellites, GSIP has been adapted to NOAA's Geostationary Operational Environmental Satellite series and can run operationally at high spatial resolutions. Our method holds the possibility of creating high quality datasets of GHI and DNI for use by the solar energy industry. We present an outline of the methodology and results from running the model as well as a validation study using ground-based instruments.
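The two irradiance components above are tied together by the usual closure relation GHI = DNI·cos(θz) + DHI, which is often used when checking satellite retrievals against ground instruments. The sketch below is a generic illustration of that relation with hypothetical values; it is not the GSIP retrieval algorithm.

    # Closure relation between global, direct and diffuse irradiance components.
    import math

    dni = 800.0          # direct normal irradiance, W/m^2 (hypothetical)
    dhi = 120.0          # diffuse horizontal irradiance, W/m^2 (hypothetical)
    sza_deg = 35.0       # solar zenith angle in degrees

    ghi = dni * math.cos(math.radians(sza_deg)) + dhi
    print("GHI =", round(ghi, 1), "W/m^2")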
ERIC Educational Resources Information Center
Marchant, Michelle; Heath, Melissa Allen; Miramontes, Nancy Y.
2013-01-01
Criteria for evaluating behavior support programs are changing. Consumer-based educational and behavioral programs, such as School-Wide Positive Behavior Support (SWPBS), are particularly influenced by consumer opinion. Unfortunately, the need for and use of social validity measures have not received adequate attention in the empirical literature…
ERIC Educational Resources Information Center
Engdahl, Ryan M.; Elhai, Jon D.; Richardson, J. Don; Frueh, B. Christopher
2011-01-01
We tested two empirically validated 4-factor models of posttraumatic stress disorder (PTSD) symptoms using the PTSD Checklist: King, Leskin, King, and Weathers' (1998) model including reexperiencing, avoidance, emotional numbing, and hyperarousal factors, and Simms, Watson, and Doebbeling's (2002) model including reexperiencing, avoidance,…
DOT National Transportation Integrated Search
2014-08-01
Midwest States Accelerated Pavement Testing Pooled-Fund Program, financed by the highway departments of Kansas, Iowa, and Missouri, has supported an accelerated pavement testing (APT) project to validate several models incorporated in the NCHRP...
DOT National Transportation Integrated Search
2014-08-01
The Midwest States Accelerated Pavement Testing Pooled Fund Program, financed by the highway departments of Kansas, Iowa, and Missouri, has supported an accelerated pavement testing (APT) project to validate several models incorporated in the NCH...
Electrostatics of cysteine residues in proteins: parameterization and validation of a simple model.
Salsbury, Freddie R; Poole, Leslie B; Fetrow, Jacquelyn S
2012-11-01
One of the most popular and simple models for the calculation of pKas from a protein structure is the semi-macroscopic electrostatic model MEAD. This model requires empirical parameters for each residue to calculate pKas. Analysis of current, widely used empirical parameters for cysteine residues showed that they did not reproduce expected cysteine pKas; thus, we set out to identify parameters consistent with the CHARMM27 force field that capture both the behavior of typical cysteines in proteins and the behavior of cysteines which have perturbed pKas. The new parameters were validated in three ways: (1) calculation across a large set of typical cysteines in proteins (where the calculations are expected to reproduce expected ensemble behavior); (2) calculation across a set of perturbed cysteines in proteins (where the calculations are expected to reproduce the shifted ensemble behavior); and (3) comparison to experimentally determined pKa values (where the calculation should reproduce the pKa within experimental error). Both the general behavior of cysteines in proteins and the perturbed pKa in some proteins can be predicted reasonably well using the newly determined empirical parameters within the MEAD model for protein electrostatics. This study provides the first general analysis of the electrostatics of cysteines in proteins, with specific attention paid to capturing both the behavior of typical cysteines in a protein and the behavior of cysteines whose pKa should be shifted, and validation of force field parameters for cysteine residues. Copyright © 2012 Wiley Periodicals, Inc.
Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine
2017-10-01
Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Bröder, A
2000-09-01
The boundedly rational 'Take-The-Best' heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodical problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
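The lexicographic rule at the core of TTB is easy to state in code: cues are checked in order of validity, and the first cue that discriminates between the two options decides the choice. The sketch below uses hypothetical cue names and values purely for illustration.

    # Take-The-Best: choose between two options using cues ordered by validity.
    def take_the_best(cues_a, cues_b, cue_order):
        """cues_a/cues_b map cue names to 1 (positive), 0 (negative) or None (unknown)."""
        for cue in cue_order:                      # cue_order sorted by descending validity
            a, b = cues_a.get(cue), cues_b.get(cue)
            if a is not None and b is not None and a != b:
                return "A" if a > b else "B"       # first discriminating cue decides
        return "guess"                             # no cue discriminates

    order = ["recognition", "capital", "industry"]   # hypothetical validity ranking
    a = {"recognition": 1, "capital": 0, "industry": 1}
    b = {"recognition": 1, "capital": 1, "industry": 0}
    print(take_the_best(a, b, order))   # "B": decided by the 'capital' cue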
Measuring metacognitive ability based on science literacy in dynamic electricity topic
NASA Astrophysics Data System (ADS)
Warni; Sunyono; Rosidin
2018-01-01
This study aims to produce an instrument for assessing metacognition ability based on science literacy for the dynamic electricity topic that is both theoretically and empirically feasible. The feasibility of the assessment instrument covers theoretical validity with respect to material, construction, and language aspects, as well as empirical validity, reliability, difficulty, discrimination, and distractor indices. The development of the instrument follows the Dick and Carey development model, which includes a preliminary study stage, initial product development, validation and revision, and piloting. The instrument was tested on 32 class IX students at SMP Negeri 20 Bandar Lampung using a one-group pretest-posttest design. The results show that the science literacy-based metacognition assessment instrument is theoretically feasible, with a theoretical validity of 95.44%; empirical validity placed 43.75% of the items in the high category, 43.75% in the medium category, and 12.50% in the low category. The reliability of the instrument was 0.83 (high category). In terms of difficulty, about 31.25% of the items were difficult and 68.75% were of medium difficulty; 12.50% of the items showed very good discriminating power, 62.50% good, and 25.00% medium. For the multiple-choice items, 80.00% of the distractors functioned in the good category and 20.00% in the medium category.
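The difficulty and discriminating-power percentages above come from classical item analysis. The sketch below computes the usual indices (proportion correct, and the upper-minus-lower 27% discrimination index) on randomly generated 0/1 responses; it illustrates the standard formulas, not necessarily the authors' exact computation.

    # Classical item analysis: difficulty (p) and upper-lower discrimination index (D).
    import numpy as np

    rng = np.random.default_rng(2)
    responses = rng.integers(0, 2, size=(32, 10))      # 32 students x 10 items, hypothetical
    totals = responses.sum(axis=1)
    cut = max(1, int(0.27 * len(totals)))              # upper/lower 27% groups
    upper = responses[np.argsort(totals)[-cut:]]
    lower = responses[np.argsort(totals)[:cut]]

    difficulty = responses.mean(axis=0)                # proportion answering each item correctly
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    print(difficulty.round(2), discrimination.round(2))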
Verleker, Akshay Prabhu; Shaffer, Michael; Fang, Qianqian; Choi, Mi-Ran; Clare, Susan; Stantz, Keith M
2016-12-01
Three-dimensional photon dosimetry in tissues is critical for designing optical therapeutic protocols to trigger light-activated drug release. The objective of this study is to investigate the feasibility of a Monte Carlo-based optical therapy planning software by developing dosimetry tools to characterize and cross-validate the local photon fluence in brain tissue, as part of a long-term strategy to quantify the effects of photoactivated drug release in brain tumors. An existing GPU-based 3D Monte Carlo (MC) code was modified to simulate near-infrared photon transport with differing laser beam profiles within phantoms of skull bone (B), white matter (WM), and gray matter (GM). A novel titanium-based optical dosimetry probe with isotropic acceptance was used to validate the local photon fluence, and an empirical model of photon transport was developed to significantly decrease execution time for clinical application. Differences between the MC and the dosimetry probe measurements were on average 11.27%, 13.25%, and 11.81% along the illumination beam axis, and 9.4%, 12.06%, and 8.91% perpendicular to the beam axis for the WM, GM, and B phantoms, respectively. For a heterogeneous head phantom, the measured errors were 17.71% and 18.04% along and perpendicular to the beam axis. The empirical algorithm was validated by probe measurements and matched the MC results (R² ≈ 0.99), with average errors of 10.1%, 45.2%, and 22.1% relative to the probe measurements, and 22.6%, 35.8%, and 21.9% relative to the MC, for the WM, GM, and B phantoms, respectively. The simulation time for the empirical model was 6 s versus 8 h for the GPU-based Monte Carlo for a head phantom simulation. These tools provide the capability to develop and optimize treatment plans for optimal release of pharmaceuticals in the treatment of cancer. Future work will test and validate these novel delivery and release mechanisms in vivo.
Questionable Validity of Poisson Assumptions in a Combined Loglinear/MDS Mapping Model.
ERIC Educational Resources Information Center
Gleason, John M.
1993-01-01
This response to an earlier article on a combined log-linear/MDS model for mapping journals by citation analysis discusses the underlying assumptions of the Poisson model with respect to characteristics of the citation process. The importance of empirical data analysis is also addressed. (nine references) (LRW)
A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.
Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko
2015-10-30
Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.
A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems
Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko
2015-01-01
Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982
Zhao, Xueli; Arsenault, Andre; Lavoie, Kim L; Meloche, Bernard; Bacon, Simon L
2007-01-01
Forearm Endothelial Function (FEF) is a marker that has been shown to discriminate patients with cardiovascular disease (CVD). FEF has been assessed using several parameters: the Rate of Uptake Ratio (RUR), EWUR (Elbow-to-Wrist Uptake Ratio) and EWRUR (Elbow-to-Wrist Relative Uptake Ratio). However, the modeling of FEF requires more robust models. The present study was designed to compare an empirical method with quantitative modeling techniques to better estimate the physiological parameters and understand the complex dynamic processes. The fitted time-activity curves of the forearms, estimating blood and muscle components, were assessed using both an empirical method and a two-compartment model. Although correlational analyses suggested a good correlation between the methods for RUR (r = .90) and EWUR (r = .79), but not EWRUR (r = .34), Bland-Altman plots found poor agreement between the methods for all three parameters. These results indicate that there is a large discrepancy between the empirical and computational methods for FEF. Further work is needed to establish the physiological and mathematical validity of the two modeling methods.
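The Bland-Altman comparison referred to above assesses agreement between two methods through the mean difference (bias) and the limits of agreement. The sketch below computes those statistics on placeholder values standing in for the two FEF estimates; it illustrates the analysis, not the study's data.

    # Bland-Altman agreement statistics for two methods estimating the same parameter.
    import numpy as np

    rng = np.random.default_rng(3)
    empirical = rng.normal(3.0, 0.6, 40)                      # placeholder RUR estimates
    compartmental = empirical + rng.normal(0.2, 0.4, 40)      # placeholder model estimates

    diff = compartmental - empirical
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)                             # 95% limits of agreement
    print("bias:", round(bias, 2),
          "limits of agreement:", round(bias - loa, 2), "to", round(bias + loa, 2))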
Testing the validity of the International Atomic Energy Agency (IAEA) safety culture model.
López de Castro, Borja; Gracia, Francisco J; Peiró, José M; Pietrantoni, Luca; Hernández, Ana
2013-11-01
This paper takes the first steps to empirically validate the widely used model of safety culture of the International Atomic Energy Agency (IAEA), composed of five dimensions, further specified by 37 attributes. To do so, three independent and complementary studies are presented. First, 290 students serve to collect evidence about the face validity of the model. Second, 48 experts in organizational behavior judge its content validity. And third, 468 workers in a Spanish nuclear power plant help to reveal how closely the theoretical five-dimensional model can be replicated. Our findings suggest that several attributes of the model may not be related to their corresponding dimensions. According to our results, a one-dimensional structure fits the data better than the five dimensions proposed by the IAEA. Moreover, the IAEA model, as it stands, seems to have rather moderate content validity and low face validity. Practical implications for researchers and practitioners are included. Copyright © 2013 Elsevier Ltd. All rights reserved.
Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina
2016-01-01
A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.
A simple empirical model for the clarification-thickening process in wastewater treatment plants.
Zhang, Y K; Wang, H C; Qi, L; Liu, G H; He, Z J; Fan, H T
2015-01-01
In wastewater treatment plants (WWTPs), activated sludge is thickened in secondary settling tanks and recycled into the biological reactor to maintain enough biomass for wastewater treatment. Accurately estimating the activated sludge concentration in the lower portion of the secondary clarifiers is of great importance for evaluating and controlling the sludge recycled ratio, ensuring smooth and efficient operation of the WWTP. By dividing the overall activated sludge-thickening curve into a hindered zone and a compression zone, an empirical model describing activated sludge thickening in the compression zone was obtained by empirical regression. This empirical model was developed through experiments conducted using sludge from five WWTPs, and validated by the measured data from a sixth WWTP, which fit the model well (R² = 0.98, p < 0.001). The model requires application of only one parameter, the sludge volume index (SVI), which is readily incorporated into routine analysis. By combining this model with the conservation of mass equation, an empirical model for compression settling was also developed. Finally, the effects of denitrification and addition of a polymer were also analysed because of their effect on sludge thickening, which can be useful for WWTP operation, e.g., improving wastewater treatment or the proper use of the polymer.
Empirical measurement and model validation of infrared spectra of contaminated surfaces
NASA Astrophysics Data System (ADS)
Archer, Sean; Gartley, Michael; Kerekes, John; Cosofret, Bogdon; Giblin, Jay
2015-05-01
Liquid-contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) Model utilizes radiative transfer modeling to generate synthetic imagery. Within DIRSIG, a micro-scale surface property model (microDIRSIG) was used to calculate numerical bidirectional reflectance distribution functions (BRDF) of geometric surfaces with applied concentrations of liquid contamination. Simple cases where the liquid contamination was well described by optical constants on optically flat surfaces were first analytically evaluated by ray tracing and modeled within microDIRSIG. More complex combinations of surface geometry and contaminant application were then incorporated into the micro-scale model. The computed microDIRSIG BRDF outputs were used to describe surface material properties in the encompassing DIRSIG simulation. These DIRSIG-generated outputs were validated with empirical measurements obtained from a Design and Prototypes (D&P) Model 102 FTIR spectrometer. Infrared spectra from the synthetic imagery and the empirical measurements were iteratively compared to identify quantitative spectral similarity between the measured data and modeled outputs. Several spectral angles between the predicted and measured emissivities differed by less than 1 degree. Synthetic radiance spectra produced from the microDIRSIG/DIRSIG combination had an RMS error of 0.21-0.81 W/(m²·sr·μm) when compared to the D&P measurements. Results from this comparison will facilitate improved methods for identifying spectral features and detecting liquid contamination on a variety of natural surfaces.
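The spectral-angle comparison used here follows the standard spectral angle mapper formula, θ = arccos(a·b / (|a||b|)). The sketch below evaluates it on synthetic emissivity spectra; the spectra are fabricated for illustration and the code is a generic implementation of the metric, not DIRSIG's.

    # Spectral angle (in degrees) between a measured and a modeled spectrum.
    import numpy as np

    def spectral_angle_deg(a, b):
        cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

    wavelengths = np.linspace(8.0, 12.0, 50)                 # LWIR band, micrometres
    measured = 0.95 - 0.02 * np.sin(wavelengths)             # synthetic emissivity spectrum
    modeled = measured + np.random.default_rng(4).normal(0, 0.002, 50)
    print(spectral_angle_deg(measured, modeled))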
ERIC Educational Resources Information Center
Daigneault, Pierre-Marc; Jacob, Steve; Tremblay, Joel
2012-01-01
Background: Stakeholder participation is an important trend in the field of program evaluation. Although a few measurement instruments have been proposed, they either have not been empirically validated or do not cover the full content of the concept. Objectives: This study consists of a first empirical validation of a measurement instrument that…
Boer, H M T; Butler, S T; Stötzel, C; Te Pas, M F W; Veerkamp, R F; Woelders, H
2017-11-01
A recently developed mechanistic mathematical model of the bovine estrous cycle was parameterized to fit empirical data sets collected during one estrous cycle of 31 individual cows, with the main objective to further validate the model. The a priori criteria for validation were (1) the resulting model can simulate the measured data correctly (i.e. goodness of fit), and (2) this is achieved without needing extreme, probably non-physiological parameter values. We used a least squares optimization procedure to identify parameter configurations for the mathematical model to fit the empirical in vivo measurements of follicle and corpus luteum sizes, and the plasma concentrations of progesterone, estradiol, FSH and LH for each cow. The model was capable of accommodating normal variation in estrous cycle characteristics of individual cows. With the parameter sets estimated for the individual cows, the model behavior changed for 21 cows, with improved fit of the simulated output curves for 18 of these 21 cows. Moreover, the number of follicular waves was predicted correctly for 18 of the 25 two-wave and three-wave cows, without extreme parameter value changes. Estimation of specific parameters confirmed results of previous model simulations indicating that parameters involved in luteolytic signaling are very important for regulation of general estrous cycle characteristics, and are likely responsible for differences in estrous cycle characteristics between cows.
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil
2014-08-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods
Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil
2015-01-01
We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922
Model Identification in Time-Series Analysis: Some Empirical Results.
ERIC Educational Resources Information Center
Padia, William L.
Model identification of time-series data is essential to valid statistical tests of intervention effects. Model identification is, at best, inexact in the social and behavioral sciences where one is often confronted with small numbers of observations. These problems are discussed, and the results of independent identifications of 130 social and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larmat, Carene; Rougier, Esteban; Lei, Zhou
This project is in support of the Source Physics Experiment (SPE) (Snelson et al. 2013), which aims to develop new seismic source models of explosions. One priority of this program is first-principles numerical modeling to validate and extend current empirical models.
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge to fit empirical models based on such high dimensional data, which often contain correlated and noisy predictors. As sample sizes, to train and validate empirical models, seem not to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also through fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, by creating latent variables or by shrinking parameter coefficients. This paper is proposing a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra, to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network and tree-based regressions using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used, and too complex to make inferences about the underlying process.
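The cross-validation baseline against which NOIS is compared typically selects model complexity, for example the number of PLS components, by minimizing cross-validated error. The sketch below shows that generic selection loop on random placeholder spectra; it is not the NOIS method itself.

    # Choose the number of PLS components by k-fold cross-validation.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.normal(size=(80, 200))                                # placeholder hyperspectral bands
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.3, 80)    # placeholder trait values

    scores = []
    for n in range(1, 11):
        mse = -cross_val_score(PLSRegression(n_components=n), X, y,
                               cv=5, scoring="neg_mean_squared_error").mean()
        scores.append(mse)
    best = int(np.argmin(scores)) + 1
    print("selected number of components:", best)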
SPECTRAL data-based estimation of soil heat flux
Singh, Ramesh K.; Irmak, A.; Walter-Shea, Elizabeth; Verma, S.B.; Suyker, A.E.
2011-01-01
Numerous existing spectral-based soil heat flux (G) models have shown wide variation in performance for maize and soybean cropping systems in Nebraska, indicating the need for localized calibration and model development. The objectives of this article are to develop a semi-empirical model to estimate G from a normalized difference vegetation index (NDVI) and net radiation (Rn) for maize (Zea mays L.) and soybean (Glycine max L.) fields in the Great Plains, and present the suitability of the developed model to estimate G under similar and different soil and management conditions. Soil heat fluxes measured in both irrigated and rainfed fields in eastern and south-central Nebraska were used for model development and validation. An exponential model that uses NDVI and Rn was found to be the best to estimate G based on r2 values. The effect of geographic location, crop, and water management practices were used to develop semi-empirical models under four case studies. Each case study has the same exponential model structure but a different set of coefficients and exponents to represent the crop, soil, and management practices. Results showed that the semi-empirical models can be used effectively for G estimation for nearby fields with similar soil properties for independent years, regardless of differences in crop type, crop rotation, and irrigation practices, provided that the crop residue from the previous year is more than 4000 kg ha-1. The coefficients calibrated from particular fields can be used at nearby fields in order to capture temporal variation in G. However, there is a need for further investigation of the models to account for the interaction effects of crop rotation and irrigation. Validation at an independent site having different soil and crop management practices showed the limitation of the semi-empirical model in estimating G under different soil and environment conditions.
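An exponential G model of the kind described can be written as G/Rn = a·exp(b·NDVI) and fitted by nonlinear least squares. The sketch below uses invented observations; the functional form illustrates the class of models discussed, and the coefficients are not the calibrated values from the study.

    # Fit G/Rn = a * exp(b * NDVI) by nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    ndvi = np.array([0.2, 0.35, 0.5, 0.65, 0.8])          # hypothetical observations
    g_over_rn = np.array([0.30, 0.22, 0.15, 0.10, 0.07])

    def model(ndvi, a, b):
        return a * np.exp(b * ndvi)

    (a, b), _ = curve_fit(model, ndvi, g_over_rn, p0=(0.4, -2.0))
    print("a =", round(a, 3), "b =", round(b, 3))
    rn = 500.0                                             # W/m^2, hypothetical net radiation
    print("estimated G at NDVI = 0.6:", round(model(0.6, a, b) * rn, 1), "W/m^2")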
New Multiple-Choice Measures of Historical Thinking: An Investigation of Cognitive Validity
ERIC Educational Resources Information Center
Smith, Mark D.
2018-01-01
History education scholars have recognized the need for test validity research in recent years and have called for empirical studies that explore how to best measure historical thinking processes. The present study was designed to help answer this call and to provide a model that others can adapt to carry this line of research forward. It employed…
NREL: International Activities - Bhutan Resource Maps
modeling approach along with NREL's empirical validation methodology. The high-resolution (10-km) annual and time-specific solar mapping approach was developed at the State University of New York at Albany. Data…
An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.
Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D
2011-08-01
The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
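The semilogarithmic (exponential) interpolation that the Lambert W approach is compared against estimates the HVL from the two transmission measurements bracketing 50%. The sketch below shows that baseline calculation on hypothetical transmission data; it does not reproduce the authors' Lambert W model.

    # Estimate the half-value layer (HVL) by semilogarithmic interpolation between
    # the two attenuator thicknesses whose transmissions bracket 0.5.
    import math

    thickness_mm = [2.0, 3.0, 4.0, 5.0]            # hypothetical Al thicknesses
    transmission = [0.62, 0.54, 0.47, 0.41]        # hypothetical measured transmissions

    for (x1, t1), (x2, t2) in zip(zip(thickness_mm, transmission),
                                  zip(thickness_mm[1:], transmission[1:])):
        if t1 >= 0.5 >= t2:
            # Assume local exponential attenuation between the bracketing points.
            hvl = x1 + (x2 - x1) * math.log(0.5 / t1) / math.log(t2 / t1)
            print("HVL ~", round(hvl, 2), "mm Al")
            break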
Validation of Slosh Modeling Approach Using STAR-CCM+
NASA Technical Reports Server (NTRS)
Benson, David J.; Ng, Wanyi
2018-01-01
Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft or there may be an unexpected loss of science observation time due to higher slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimental, empirically, and analytically derived results. The geometries examined are a bare right cylinder tank and a right cylinder with a single ring baffle.
Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Rege, Soham; Chu, Chieh; Schmid, Christopher H; Alam, Nur H
2015-01-01
Introduction: Diarrhea remains one of the most common and most deadly conditions affecting children worldwide. Accurately assessing dehydration status is critical to determining treatment course, yet no clinical diagnostic models for dehydration have been empirically derived and validated for use in resource-limited settings. Methods: In the Dehydration: Assessing Kids Accurately (DHAKA) prospective cohort study, a random sample of children under 5 with acute diarrhea was enrolled between February and June 2014 in Bangladesh. Local nurses assessed children for clinical signs of dehydration on arrival, and then serial weights were obtained as subjects were rehydrated. For each child, the percent weight change with rehydration was used to classify subjects with severe dehydration (>9% weight change), some dehydration (3–9%), or no dehydration (<3%). Clinical variables were then entered into logistic regression and recursive partitioning models to develop the DHAKA Dehydration Score and DHAKA Dehydration Tree, respectively. Models were assessed for their accuracy using the area under their receiver operating characteristic curve (AUC) and for their reliability through repeat clinical exams. Bootstrapping was used to internally validate the models. Results: A total of 850 children were enrolled, with 771 included in the final analysis. Of the 771 children included in the analysis, 11% were classified with severe dehydration, 45% with some dehydration, and 44% with no dehydration. Both the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant AUCs of 0.79 (95% CI = 0.74, 0.84) and 0.76 (95% CI = 0.71, 0.80), respectively, for the diagnosis of severe dehydration. Additionally, the DHAKA Dehydration Score and DHAKA Dehydration Tree had significant positive likelihood ratios of 2.0 (95% CI = 1.8, 2.3) and 2.5 (95% CI = 2.1, 2.8), respectively, and significant negative likelihood ratios of 0.23 (95% CI = 0.13, 0.40) and 0.28 (95% CI = 0.18, 0.44), respectively, for the diagnosis of severe dehydration. Both models demonstrated 90% agreement between independent raters and good reproducibility using bootstrapping. Conclusion: This study is the first to empirically derive and internally validate accurate and reliable clinical diagnostic models for dehydration in a resource-limited setting. After external validation, frontline providers may use these new tools to better manage acute diarrhea in children. PMID:26374802
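The modeling workflow described (logistic regression and recursive partitioning, evaluated by AUC for the diagnosis of severe dehydration) can be sketched as follows; the predictors, data, and thresholds are hypothetical stand-ins, not the DHAKA study variables.

    # Sketch of the two modeling strategies (logistic regression and a decision
    # tree) scored by AUC; features and outcomes are simulated, not DHAKA data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(771, 4))     # e.g. skin pinch, eyes, general appearance, tears
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=771)) > 1.2   # "severe dehydration"

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3)):
        model.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(type(model).__name__, round(auc, 2))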
DOT National Transportation Integrated Search
2009-11-01
The development of the Mechanistic-Empirical Pavement Design Guide (MEPDG) under National Cooperative Highway Research Program (NCHRP) projects 1-37A and 1-40D has significantly improved the ability of pavement designers to model and simulate the eff...
Component-based model to predict aerodynamic noise from high-speed train pantographs
NASA Astrophysics Data System (ADS)
Latorre Iglesias, E.; Thompson, D. J.; Smith, M. G.
2017-04-01
At typical speeds of modern high-speed trains, the aerodynamic noise produced by the airflow over the pantograph is a significant source of noise. Although numerical models can be used to predict this, they are still very computationally intensive. A semi-empirical component-based prediction model is proposed to predict the aerodynamic noise from train pantographs. The pantograph is approximated as an assembly of cylinders and bars with particular cross-sections. An empirical database is used to obtain the coefficients of the model to account for various factors: incident flow speed, diameter, cross-sectional shape, yaw angle, rounded edges, length-to-width ratio, incoming turbulence and directivity. The overall noise from the pantograph is obtained as the incoherent sum of the predicted noise from the different pantograph struts. The model is validated using available wind tunnel noise measurements of two full-size pantographs. The results show the potential of the semi-empirical model to be used as a rapid tool to predict aerodynamic noise from train pantographs.
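The incoherent summation mentioned above is a simple energy sum of the per-component sound pressure levels; a minimal sketch, with placeholder per-strut levels, is:

    # Incoherent (energy) summation of predicted component sound pressure levels;
    # the per-strut levels below are placeholders, not model output.
    import numpy as np

    def incoherent_sum(levels_db):
        return 10 * np.log10(np.sum(10 ** (np.asarray(levels_db) / 10)))

    strut_levels = [78.0, 74.5, 71.0, 69.5]   # dB, hypothetical per-component predictions
    print(round(incoherent_sum(strut_levels), 1))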
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is significant for energy saving and for the reduction of environmental emissions. In this work an empirical model is developed to minimize the power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 Aluminum with coated tungsten inserts. The relationship between the power consumption and the machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the optimal machining parameters using the desirability function approach. The influence of the machining parameters on the energy consumption is determined using analysis of variance. The developed empirical model is validated through confirmation experiments. The results indicate that the developed model is effective and has the potential to be adopted by industry for minimum power consumption of machine tools.
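A minimal sketch of the response-surface step is given below: a second-order polynomial is fitted to power consumption as a function of cutting speed, feed, and depth of cut. The design points, responses, and resulting coefficients are illustrative only; a real study would use a designed experiment (e.g., a central composite design) with more runs.

    # Second-order response surface for power consumption vs. cutting parameters;
    # all data are hypothetical, not the cited experimental results.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    X = np.array([[90, 0.10, 0.5], [120, 0.15, 1.0], [150, 0.20, 1.5],
                  [90, 0.20, 1.0], [150, 0.10, 1.0], [120, 0.10, 1.5]])  # speed, feed, depth
    P = np.array([310., 420., 575., 390., 505., 445.])                   # power, W

    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, P)
    print(rsm.predict([[120, 0.12, 0.8]]))   # predicted power at a candidate setting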
Modeling the erythemal surface diffuse irradiance fraction for Badajoz, Spain
NASA Astrophysics Data System (ADS)
Sanchez, Guadalupe; Serrano, Antonio; Cancillo, María Luisa
2017-10-01
Despite its important role in human health and numerous biological processes, the diffuse component of the erythemal ultraviolet irradiance (UVER) is scarcely measured at standard radiometric stations and therefore needs to be estimated. This study proposes and compares 10 empirical models to estimate the UVER diffuse fraction. These models are inspired by mathematical expressions originally used to estimate the total diffuse fraction, but, in this study, they are applied to the UVER case and tested against experimental measurements. In addition to adapting to the UVER range the various independent variables involved in these models, the total ozone column has been added in order to account for its strong impact on the attenuation of ultraviolet radiation. The proposed models are fitted to experimental measurements and validated against an independent subset. The best-performing model (RAU3) is based on a model proposed by Ruiz-Arias et al. (2010) and shows an r2 of 0.91 and a relative root-mean-square error (rRMSE) of 6.1 %. The performance achieved by this entirely empirical model is better than that obtained by previous semi-empirical approaches, so no additional information from physically based models is needed. This study extends previous research to the ultraviolet range and provides reliable empirical models to accurately estimate the UVER diffuse fraction.
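A minimal sketch of fitting one such empirical expression is shown below, assuming a simple sigmoid in the clearness index with the total ozone column added as an extra predictor; the functional form, data, and starting values are illustrative assumptions and are not the RAU3 model.

    # Illustrative fit of a sigmoid-type diffuse-fraction model with an ozone term;
    # form and data are assumed for demonstration only.
    import numpy as np
    from scipy.optimize import curve_fit

    def diffuse_fraction(X, a, b, c):
        kt, toc = X                   # clearness index and total ozone column (DU)
        return 1.0 / (1.0 + np.exp(a + b * kt + c * toc))

    kt  = np.array([0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75])
    toc = np.array([280., 340., 300., 320., 290., 330., 310.])
    fd  = np.array([0.97, 0.90, 0.78, 0.62, 0.48, 0.36, 0.27])  # hypothetical UVER diffuse fractions

    popt, _ = curve_fit(diffuse_fraction, (kt, toc), fd, p0=(-3.5, 7.0, 0.0))
    print(popt)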
Mental workload prediction based on attentional resource allocation and information processing.
Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin
2015-01-01
Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M
2015-09-01
The aim of this study was to develop and validate a computational model of the automation complacency effect, as operators work on a robotic arm task, supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those are formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance and predicted the responses to these failures after complacency developed. However, the scanning models do not account for the entire attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.
ERIC Educational Resources Information Center
Guan, Jianmin; McBride, Ron; Xiang, Ping
2007-01-01
Although empirical research in academic areas provides support for both a 3-factor as well as a 4-factor achievement goal model, both models were proposed and tested with a collegiate sample. Little is known about the generalizability of either model with high school level samples. This study was designed to examine whether the 3-factor model…
Investigation of a Nonparametric Procedure for Assessing Goodness-of-Fit in Item Response Theory
ERIC Educational Resources Information Center
Wells, Craig S.; Bolt, Daniel M.
2008-01-01
Tests of model misfit are often performed to validate the use of a particular model in item response theory. Douglas and Cohen (2001) introduced a general nonparametric approach for detecting misfit under the two-parameter logistic model. However, the statistical properties of their approach, and empirical comparisons to other methods, have not…
Validation of a Global Hydrodynamic Flood Inundation Model
NASA Astrophysics Data System (ADS)
Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.
2014-12-01
In this work we present the first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankfull return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
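The regionalised flow estimation described above (an index flood estimated from catchment characteristics, scaled by an empirical growth curve) can be sketched as follows; the regression coefficients and growth factors are illustrative placeholders, not those derived from the global runoff data.

    # Index flood times growth factor; all coefficients are illustrative only.
    def index_flood(area_km2, mean_annual_precip_mm, a=0.05, b=0.8, c=1.2):
        # hypothetical power-law regional regression on catchment characteristics
        return a * area_km2**b * (mean_annual_precip_mm / 1000.0)**c

    growth_curve = {5: 1.0, 20: 1.35, 100: 1.8, 1000: 2.6}   # illustrative growth factors

    q_index = index_flood(2500.0, 900.0)
    for T, factor in growth_curve.items():
        print(f"{T}-yr flow ~ {q_index * factor:.0f} m3/s")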
Modeling, simulation, and estimation of optical turbulence
NASA Astrophysics Data System (ADS)
Formwalt, Byron Paul
This dissertation documents three new contributions to simulation and modeling of optical turbulence. The first contribution is the formalization, optimization, and validation of a modeling technique called successively conditioned rendering (SCR). The SCR technique is empirically validated by comparing the statistical error of random phase screens generated with the technique. The second contribution is the derivation of the covariance delineation theorem, which provides theoretical bounds on the error associated with SCR. It is shown empirically that the theoretical bound may be used to predict relative algorithm performance. Therefore, the covariance delineation theorem is a powerful tool for optimizing SCR algorithms. For the third contribution, we introduce a new method for passively estimating optical turbulence parameters, and demonstrate the method using experimental data. The technique was demonstrated experimentally, using a 100 m horizontal path at 1.25 m above sun-heated tarmac on a clear afternoon. For this experiment, we estimated Cn^2 ≈ 6.01 · 10^-9 m^-2/3, l0 ≈ 17.9 mm, and L0 ≈ 15.5 m.
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billman, L.; Keyser, D.
The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to the JEDI models, an analysis of the value and impact of the JEDI models, and an analysis of the validity of job estimates generated by the JEDI models through comparison to other modeled estimates and comparison to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.
Quality of College Life (QCL) of Students: Developing and Validating a Measure of Well-Being
ERIC Educational Resources Information Center
Sirgy, M. Joseph; Grzeskowiak, Stephan; Rahtz, Don
2007-01-01
This paper reports a study designed to develop and validate a measure of quality of college life (QCL) of students. Using a theoretical model based on a build-up approach to QCL, the authors provide an empirical examination of various hierarchical components and their properties. The method is executed in two stages. The first stage is used to…
Numerical Simulations of Flow Separation Control in Low-Pressure Turbines using Plasma Actuators
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.; Ashpis, D. E.
2007-01-01
A recently introduced phenomenological model to simulate flow control applications using plasma actuators has been further developed and improved in order to expand its use to complicated actuator geometries. The new modeling approach eliminates the requirement of an empirical charge density distribution shape by using the embedded electrode as a source for the charge density. The resulting model is validated against a flat plate experiment with quiescent environment. The modeling approach incorporates the effect of the plasma actuators on the external flow into Navier Stokes computations as a body force vector which is obtained as a product of the net charge density and the electric field. The model solves the Maxwell equation to obtain the electric field due to the applied AC voltage at the electrodes and an additional equation for the charge density distribution representing the plasma density. The new modeling approach solves the charge density equation in the computational domain assuming the embedded electrode as a source therefore automatically generating a charge density distribution on the surface exposed to the flow similar to that observed in the experiments without explicitly specifying an empirical distribution. The model is validated against a flat plate experiment with quiescent environment.
Base drag prediction on missile configurations
NASA Technical Reports Server (NTRS)
Moore, F. G.; Hymer, T.; Wilcox, F.
1993-01-01
New wind tunnel data have been taken, and a new empirical model has been developed for predicting base drag on missile configurations. The new wind tunnel data were taken at NASA-Langley in the Unitary Wind Tunnel at Mach numbers from 2.0 to 4.5, angles of attack to 16 deg, fin control deflections up to 20 deg, fin thickness/chord of 0.05 to 0.15, and fin locations from 'flush with the base' to two chord-lengths upstream of the base. The empirical model uses these data along with previous wind tunnel data, estimating base drag as a function of all these variables as well as boat-tail and power-on/power-off effects. The new model yields improved accuracy, compared to wind tunnel data. The new model also is more robust due to inclusion of additional variables. On the other hand, additional wind tunnel data are needed to validate or modify the current empirical model in areas where data are not available.
Markov modeling and discrete event simulation in health care: a systematic comparison.
Standfield, Lachlan; Comans, Tracy; Scuffham, Paul
2014-04-01
The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
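For readers less familiar with the MM side of the comparison, a minimal Markov cohort model is sketched below: a cohort is propagated through health states by a transition matrix while discounted costs and QALYs are accumulated. The states, probabilities, costs, utilities, and horizon are illustrative only.

    # Minimal Markov cohort model sketch; all values are hypothetical.
    import numpy as np

    P = np.array([[0.85, 0.10, 0.05],     # Well -> Well/Sick/Dead
                  [0.00, 0.70, 0.30],     # Sick -> Sick/Dead
                  [0.00, 0.00, 1.00]])    # Dead is absorbing
    cost    = np.array([500.0, 5000.0, 0.0])     # annual cost per state
    utility = np.array([0.95, 0.60, 0.0])        # annual QALYs per state

    cohort = np.array([1.0, 0.0, 0.0])
    total_cost = total_qaly = 0.0
    for year in range(20):                       # 20-year horizon, 3% discounting
        disc = 1.03 ** -year
        total_cost += disc * cohort @ cost
        total_qaly += disc * cohort @ utility
        cohort = cohort @ P

    print(round(total_cost), round(total_qaly, 2))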
Development and Validation of the Five-by-Five Resilience Scale.
DeSimone, Justin A; Harms, P D; Vanhove, Adam J; Herian, Mitchel N
2017-09-01
This article introduces a new measure of resilience and five related protective factors. The Five-by-Five Resilience Scale (5×5RS) is developed on the basis of theoretical and empirical considerations. Two samples ( N = 475 and N = 613) are used to assess the factor structure, reliability, convergent validity, and criterion-related validity of the 5×5RS. Confirmatory factor analysis supports a bifactor model. The 5×5RS demonstrates adequate internal consistency as evidenced by Cronbach's alpha and empirical reliability estimates. The 5×5RS correlates positively with the Connor-Davidson Resilience Scale (CD-RISC), a commonly used measure of resilience. The 5×5RS exhibits similar criterion-related validity to the CD-RISC as evidenced by positive correlations with satisfaction with life, meaning in life, and secure attachment style as well as negative correlations with rumination and anxious or avoidant attachment styles. 5×5RS scores are positively correlated with healthy behaviors such as exercise and negatively correlated with sleep difficulty and symptomology of anxiety and depression. The 5×5RS incrementally explains variance in some criteria above and beyond the CD-RISC. Item responses are modeled using the graded response model. Information estimates demonstrate the ability of the 5×5RS to assess individuals within at least one standard deviation of the mean on relevant latent traits.
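Cronbach's alpha, cited above as internal-consistency evidence, is straightforward to compute from an item response matrix; the sketch below uses randomly generated responses rather than 5x5RS data.

    # Cronbach's alpha from a respondents-by-items matrix; data are simulated.
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(475, 1))                               # shared trait
    responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(475, 5))), 1, 5)
    print(round(cronbach_alpha(responses), 2))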
2018-04-01
empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some
Stoltenberg, Scott F.; Nag, Parthasarathi
2010-01-01
Despite more than a decade of empirical work on the role of genetic polymorphisms in the serotonin system on behavior, the details across levels of analysis are not well understood. We describe a mathematical model of the genetic control of presynaptic serotonergic function that is based on control theory, implemented using systems of differential equations, and focused on better characterizing pathways from genes to behavior. We present the results of model validation tests that include the comparison of simulation outcomes with empirical data on genetic effects on brain response to affective stimuli and on impulsivity. Patterns of simulated neural firing were consistent with recent findings of additive effects of serotonin transporter and tryptophan hydroxylase-2 polymorphisms on brain activation. In addition, simulated levels of cerebral spinal fluid 5-hydroxyindoleacetic acid (CSF 5-HIAA) were negatively correlated with Barratt Impulsiveness Scale (Version 11) Total scores in college students (r = −.22, p = .002, N = 187), which is consistent with the well-established negative correlation between CSF 5-HIAA and impulsivity. The results of the validation tests suggest that the model captures important aspects of the genetic control of presynaptic serotonergic function and behavior via brain activation. The proposed model can be: (1) extended to include other system components, neurotransmitter systems, behaviors and environmental influences; (2) used to generate testable hypotheses. PMID:20111992
Aggregate Timber Supply: From the Forest to the Market
David N. Wear; Subhrendu K. Pattanayak
2003-01-01
Timber supply modeling is a means of formalizing the production behavior of heterogeneous landowners managing a wide variety of forest types and vintages within a region. The critical challenge of timber supply modeling is constructing theoretically valid and empirically practical aggregate descriptions of harvest behavior. Understanding timber supply is essential for...
ERIC Educational Resources Information Center
Hiver, Phil
2017-01-01
This article describes a validation study using Retrodictive Qualitative Modeling, a framework for conducting research from a dynamic and situated perspective, to establish an empirical foundation for a new phenomenological construct--language teacher immunity. Focus groups (N = 44) conducted with second language (L2) practitioners and teacher…
Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.
2012-08-01
Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using a similar approach as dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.
Construct validity of the Moral Development Scale for Professionals (MDSP).
Söderhamn, Olle; Bjørnestad, John Olav; Skisland, Anne; Cliffordson, Christina
2011-01-01
The aim of this study was to investigate the construct validity of the Moral Development Scale for Professionals (MDSP) using structural equation modeling. The MDSP is a 12-item self-report instrument, developed in the Scandinavian cultural context and based on Kohlberg's theory. A hypothesized simplex structure model underlying the MDSP was tested through structural equation modeling. Validity was also tested as the proportion of respondents older than 20 years that reached the highest moral level, which according to the theory should be small. A convenience sample of 339 nursing students with a mean age of 25.3 years participated. Results confirmed the simplex model structure, indicating that the MDSP reflects a moral construct empirically organized from low to high. A minority of respondents >20 years of age (13.5%) scored more than 80% on the highest moral level. The findings support the construct validity of the MDSP and the stages and levels in Kohlberg's theory.
Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. W.; Hood, Raleigh R.; Long, Wen
The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic-empirical approach, whereby real-time output from the coupled physical biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic-empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.
Quantitative evaluation of simulated functional brain networks in graph theoretical analysis.
Lee, Won Hee; Bullmore, Ed; Frangou, Sophia
2017-02-01
There is increasing interest in the potential of whole-brain computational models to provide mechanistic insights into resting-state brain networks. It is therefore important to determine the degree to which computational models reproduce the topological features of empirical functional brain networks. We used empirical connectivity data derived from diffusion spectrum and resting-state functional magnetic resonance imaging data from healthy individuals. Empirical and simulated functional networks, constrained by structural connectivity, were defined based on 66 brain anatomical regions (nodes). Simulated functional data were generated using the Kuramoto model in which each anatomical region acts as a phase oscillator. Network topology was studied using graph theory in the empirical and simulated data. The difference (relative error) between graph theory measures derived from empirical and simulated data was then estimated. We found that simulated data can be used with confidence to model graph measures of global network organization at different dynamic states and highlight the sensitive dependence of the solutions obtained in simulated data on the specified connection densities. This study provides a method for the quantitative evaluation and external validation of graph theory metrics derived from simulated data that can be used to inform future study designs. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
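A minimal sketch of the simulation step is given below: Kuramoto phase oscillators coupled through a structural connectivity matrix, with simulated functional connectivity summarized as pairwise phase-locking values. The connectivity matrix, coupling strength, natural frequencies, and integration settings are toy values, not those of the study.

    # Kuramoto oscillators on a toy structural matrix; simulated "functional
    # connectivity" is taken as the pairwise phase-locking value.
    import numpy as np

    rng = np.random.default_rng(2)
    n, dt, steps, K = 66, 0.001, 5000, 3.0
    C = rng.random((n, n)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)   # toy structural matrix
    omega = 2 * np.pi * rng.normal(60.0, 1.0, n)                        # ~60 Hz natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)

    Z = np.zeros((n, n), dtype=complex)
    for _ in range(steps):                                              # forward Euler integration
        coupling = (C * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + (K / n) * coupling)
        z = np.exp(1j * theta)
        Z += np.outer(z, z.conj())

    plv = np.abs(Z) / steps        # 66 x 66 simulated functional connectivity
    print(plv.shape, plv[:3, :3].round(2))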
The Social Cognitive Model of Job Satisfaction among Teachers: Testing and Validation
ERIC Educational Resources Information Center
Badri, Masood A.; Mohaidat, Jihad; Ferrandino, Vincent; El Mourad, Tarek
2013-01-01
The study empirically tests an integrative model of work satisfaction (0280, 0140, 0300 and 0255) in a sample of 5,022 teachers in Abu Dhabi in the United Arab Emirates. The study provided more support for the Lent and Brown (2006) model. Results revealed that this model was a strong fit for the data and accounted for 82% of the variance in work…
Multistate modelling extended by behavioural rules: An application to migration.
Klabunde, Anna; Zinn, Sabine; Willekens, Frans; Leuchter, Matthias
2017-10-01
We propose to extend demographic multistate models by adding a behavioural element: behavioural rules explain intentions and thus transitions. Our framework is inspired by the Theory of Planned Behaviour. We exemplify our approach with a model of migration from Senegal to France. Model parameters are determined using empirical data where available. Parameters for which no empirical correspondence exists are determined by calibration. Age- and period-specific migration rates are used for model validation. Our approach adds to the toolkit of demographic projection by allowing for shocks and social influence, which alter behaviour in non-linear ways, while sticking to the general framework of multistate modelling. Our simulations indicate that higher income growth in Senegal leads to higher emigration rates in the medium term, while a decrease in fertility yields lower emigration rates.
ERIC Educational Resources Information Center
Skinner, Ellen A.; Chi, Una
2012-01-01
Building on self-determination theory, this study presents a model of intrinsic motivation and engagement as "active ingredients" in garden-based education. The model was used to create reliable and valid measures of key constructs, and to guide the empirical exploration of motivational processes in garden-based learning. Teacher- and…
ERIC Educational Resources Information Center
Tao, Yu-Hui; Yeh, C. Rosa; Hung, Kung Chin
2015-01-01
Several theoretical models have been constructed to determine the effects of buisness simulation games (BSGs) on learning performance. Although these models agree on the concept of learning-cycle effect, no empirical evidence supports the claim that the use of learning cycle activities with BSGs produces an effect on incremental gains in knowledge…
NASA Astrophysics Data System (ADS)
Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Boykin, Timothy; Klimeck, Gerhard
2014-03-01
Semi-empirical Tight Binding (TB) is known to be a scalable and accurate atomistic representation of electron transport for realistically extended nano-scale semiconductor devices that might contain millions of atoms. In this paper, an environment-aware and transferable TB model suitable for electronic structure and transport simulations in technologically relevant metals, metallic alloys, metal nanostructures, and metallic interface systems is described. Part I of this paper describes the development and validation of the new TB model. The new model incorporates intra-atomic diagonal and off-diagonal elements for implicit self-consistency and greater transferability across bonding environments. The dependence of the on-site energies on strain has been obtained by appealing to the Moments Theorem that links closed electron paths in the system to energy moments of angular momentum resolved local density of states obtained ab initio. The model matches self-consistent density functional theory electronic structure results for bulk face centered cubic metals with and without strain, metallic alloys, metallic interfaces, and metallic nanostructures with high accuracy and can be used in predictive electronic structure and transport problems in metallic systems at realistically extended length scales.
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximations (GFA) for a sensor array for a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not considered in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
Zheng, Kai; Fear, Kathleen; Chaffee, Bruce W; Zimmerman, Christopher R; Karls, Edward M; Gatwood, Justin D; Stevenson, James G; Pearlman, Mark D
2011-12-01
To develop a theoretically informed and empirically validated survey instrument for assessing prescribers' perception of computerized drug-drug interaction (DDI) alerts. The survey is grounded in the unified theory of acceptance and use of technology and an adapted accident causation model. Development of the instrument was also informed by a review of the extant literature on prescribers' attitude toward computerized medication safety alerts and common prescriber-provided reasons for overriding. To refine and validate the survey, we conducted a two-stage empirical validation study consisting of a pretest with a panel of domain experts followed by a field test among all eligible prescribers at our institution. The resulting survey instrument contains 28 questionnaire items assessing six theoretical dimensions: performance expectancy, effort expectancy, social influence, facilitating conditions, perceived fatigue, and perceived use behavior. Satisfactory results were obtained from the field validation; however, a few potential issues were also identified. We analyzed these issues accordingly and the results led to the final survey instrument as well as usage recommendations. High override rates of computerized medication safety alerts have been a prevalent problem. They are usually caused by, or manifested in, issues of poor end user acceptance. However, standardized research tools for assessing and understanding end users' perception are currently lacking, which inhibits knowledge accumulation and consequently forgoes improvement opportunities. The survey instrument presented in this paper may help fill this methodological gap. We developed and empirically validated a survey instrument that may be useful for future research on DDI alerts and other types of computerized medication safety alerts more generally.
NASA Astrophysics Data System (ADS)
Kartalov, Emil P.; Scherer, Axel; Quake, Stephen R.; Taylor, Clive R.; Anderson, W. French
2007-03-01
A systematic experimental study and theoretical modeling of the device physics of polydimethylsiloxane "pushdown" microfluidic valves are presented. The phase space is charted by 1587 dimension combinations and encompasses 45-295 μm lateral dimensions, 16-39 μm membrane thickness, and 1-28 psi closing pressure. Three linear models are developed and tested against the empirical data, and then combined into a fourth-power-polynomial superposition. The experimentally validated final model offers a useful quantitative prediction for a valve's properties as a function of its dimensions. Typical valves (80-150 μm width) are shown to behave like thin springs.
The Waterfall Model in Large-Scale Development
NASA Astrophysics Data System (ADS)
Petersen, Kai; Wohlin, Claes; Baca, Dejan
Waterfall development is still a widely used way of working in software development companies. Many problems have been reported related to the model. Commonly accepted problems are for example to cope with change and that defects all too often are detected too late in the software development process. However, many of the problems mentioned in literature are based on beliefs and experiences, and not on empirical evidence. To address this research gap, we compare the problems in literature with the results of a case study at Ericsson AB in Sweden, investigating issues in the waterfall model. The case study aims at validating or contradicting the beliefs of what the problems are in waterfall development through empirical research.
An Empirical Analysis of the Default Rate of Informal Lending—Evidence from Yiwu, China
NASA Astrophysics Data System (ADS)
Lu, Wei; Yu, Xiaobo; Du, Juan; Ji, Feng
This study empirically analyzes the underlying factors contributing to the default rate of informal lending. This paper adopts snowball-sampling interviews to collect data and uses a logistic regression model to explore the specific factors. The results of these analyses validate the explanation of how informal lending differs from commercial loans. Factors that contribute to the default rate have particular attributes, while sharing some similarities with commercial bank or FICO credit-scoring indices. Finally, our concluding remarks draw some inferences from the empirical analysis and speculate as to what this may imply for the roles of the formal and informal financial sectors.
Analytical and Empirical Modeling of Wear and Forces of CBN Tool in Hard Turning - A Review
NASA Astrophysics Data System (ADS)
Patel, Vallabh Dahyabhai; Gandhi, Anishkumar Hasmukhlal
2017-08-01
Machining of steel material having hardness above 45 HRC (Hardness-Rockwell C) is referred to as hard turning. There are numerous models which should be scrutinized and implemented to gain optimum performance in hard turning. Various models for hard turning with a cubic boron nitride tool have been reviewed, in an attempt to utilize appropriate empirical and analytical models. Validation of the steady-state flank and crater wear model, Usui's wear model, forces from oblique cutting theory, the extended Lee and Shaffer force model, chip formation, and progressive flank wear is depicted in this review paper. Effort has been made to understand the relationship between tool wear and tool force under different cutting conditions and tool geometries so that an appropriate model can be used according to user requirements in hard turning.
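As an illustration of one of the reviewed models, the commonly cited form of Usui's wear-rate equation, dW/dt = C1·σt·Vs·exp(−C2/θ), can be evaluated as below; the constants and operating values are placeholders rather than calibrated values for CBN on hardened steel.

    # Usui-type wear-rate evaluation in its commonly cited form; the constants
    # C1 and C2 below are placeholders, not calibrated values.
    import math

    def usui_wear_rate(sigma_t_pa, v_s_m_per_s, theta_k, c1=1.0e-8, c2=2.5e3):
        """Wear rate = C1 * normal stress * sliding velocity * exp(-C2 / temperature)."""
        return c1 * sigma_t_pa * v_s_m_per_s * math.exp(-c2 / theta_k)

    print(usui_wear_rate(sigma_t_pa=1.2e9, v_s_m_per_s=2.0, theta_k=1100.0))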
Empirical flow parameters - a tool for hydraulic model validity assessment.
DOT National Transportation Integrated Search
2013-08-01
Data in Texas from the U.S. Geological Survey (USGS) physical stream flow and channel property measurements for gaging stations in the state of Texas were used to construct relations between observed stream flow, topographic slope, mean section veloc...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busuioc, A.; Storch, H. von; Schnur, R.
Empirical downscaling procedures relate large-scale atmospheric features with local features such as station rainfall in order to facilitate local scenarios of climate change. The purpose of the present paper is twofold: first, a downscaling technique is used as a diagnostic tool to verify the performance of climate models on the regional scale; second, a technique is proposed for verifying the validity of empirical downscaling procedures in climate change applications. The case considered is regional seasonal precipitation in Romania. The downscaling model is a regression based on canonical correlation analysis between observed station precipitation and European-scale sea level pressure (SLP). The climate models considered here are the T21 and T42 versions of the Hamburg ECHAM3 atmospheric GCM run in time-slice mode. The climate change scenario refers to the expected time of doubled carbon dioxide concentrations around the year 2050. Generally, applications of statistical downscaling to climate change scenarios have been based on the assumption that the empirical link between the large-scale and regional parameters remains valid under a changed climate. In this study, a rationale is proposed for this assumption by showing the consistency of the 2 x CO2 GCM scenarios in winter, derived directly from the gridpoint data, with the regional scenarios obtained through empirical downscaling. Since the skill of the GCMs in regional terms is already established, it is concluded that the downscaling technique is adequate for describing climatically changing regional and local conditions, at least for precipitation in Romania during winter.
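The downscaling step itself (a regression built on canonical correlation analysis between station precipitation and large-scale SLP) can be sketched as below with synthetic stand-in fields; the grid, stations, and GCM output are placeholders, not the Romanian data used in the study.

    # CCA-based statistical downscaling sketch with synthetic fields.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(3)
    n_seasons, n_gridpoints, n_stations = 40, 60, 14
    slp = rng.normal(size=(n_seasons, n_gridpoints))                   # observed winter SLP anomalies
    precip = slp[:, :n_stations] * 0.6 + rng.normal(size=(n_seasons, n_stations))

    cca = CCA(n_components=3).fit(slp, precip)
    gcm_slp = rng.normal(size=(10, n_gridpoints))                      # e.g. 2xCO2 time-slice SLP anomalies
    downscaled = cca.predict(gcm_slp)                                  # regional precipitation scenario
    print(downscaled.shape)                                            # (10, 14)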
Propagation and Directional Scattering of Ocean Waves in the Marginal Ice Zone and Neighboring Seas
2015-09-30
expected to be the average of the kernel for 10 s and 12 s. This means that we should be able to calculate empirical formulas for 2 the scattering kernel...floe packing. Thus, establish a way to incorporate what has been done by Squire and co-workers into the wave model paradigm (in which the phase of the...cases observed by Kohout et al. (2014) in Antarctica . vii. Validation: We are planning validation tests for wave-ice scattering / attenuation model by
Sørensen, Hans Eibe; Slater, Stanley F
2008-08-01
Atheoretical measure purification may lead to construct-deficient measures. The purpose of this paper is to provide a theoretically driven procedure for the development and empirical validation of symmetric component measures of multidimensional constructs. Particular emphasis is placed on establishing a formalized three-step procedure for achieving a posteriori content validity. Then the procedure is applied to the development and empirical validation of two symmetric component measures of market orientation: customer orientation and competitor orientation. Analysis suggests that average variance extracted is particularly critical to reliability in the respecification of multi-indicator measures. In relation to this, the results also identify possible deficiencies in using Cronbach's alpha for establishing reliable and valid measures.
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.; Schifer, Nicholas A.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.
Formulation, Implementation and Validation of a Two-Fluid model in a Fuel Cell CFD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jain, Kunal; Cole, J. Vernon; Kumar, Sanjiv
2008-12-01
Water management is one of the main challenges in PEM Fuel Cells. While water is essential for membrane electrical conductivity, excess liquid water leads to flooding of catalyst layers. Despite the fact that accurate prediction of two-phase transport is key for optimal water management, understanding of the two-phase transport in fuel cells is relatively poor. Wang et al. have studied the two-phase transport in the channel and diffusion layer separately using a multiphase mixture model. The model fails to accurately predict saturation values for high humidity inlet streams. Nguyen et al. developed a two-dimensional, two-phase, isothermal, isobaric, steady state model of the catalyst and gas diffusion layers. The model neglects any liquid in the channel. Djilali et al. developed a three-dimensional two-phase multicomponent model. The model is an improvement over previous models, but neglects drag between the liquid and the gas phases in the channel. In this work, we present a comprehensive two-fluid model relevant to fuel cells. Models for two-phase transport through the Channel, Gas Diffusion Layer (GDL), and Channel-GDL interface are discussed. In the channel, the gas and liquid pressures are assumed to be the same. The surface tension effects in the channel are incorporated using the continuum surface force (CSF) model. The force at the surface is expressed as a volumetric body force and added as a source to the momentum equation. In the GDL, the gas and liquid are assumed to be at different pressures. The difference in the pressures (capillary pressure) is calculated using empirical correlations. At the Channel-GDL interface, the wall adhesion effects need to be taken into account. SIMPLE-type methods recast the continuity equation into a pressure-correction equation, the solution of which then provides corrections for velocities and pressures. However, in the two-fluid model, the presence of two phasic continuity equations gives more freedom and more complications. A general approach would be to form a mixture continuity equation by linearly combining the phasic continuity equations using appropriate weighting factors. Analogous to the mixture equation for pressure correction, a difference equation is used for the volume/phase fraction by taking the difference between the phasic continuity equations. The relative advantages of the above mentioned algorithmic variants for computing pressure correction and volume fractions are discussed and quantitatively assessed. Preliminary model validation is done for each component of the fuel cell. The two-phase transport in the channel is validated using empirical correlations. Transport in the GDL is validated against results obtained from LBM and VOF simulation techniques. The Channel-GDL interface transport will be validated against experiments and empirical correlations of droplet detachment at the interface.
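The abstract states that the capillary pressure in the GDL is obtained from empirical correlations. A correlation commonly used for this purpose in the fuel-cell literature is the Leverett/Udell J-function; the sketch below uses that correlation as an assumption for illustration, not necessarily the one adopted by the authors, and the property values are placeholders.

    # Leverett/Udell-type capillary pressure correlation for a hydrophobic GDL;
    # the correlation choice and property values are illustrative assumptions.
    import numpy as np

    def capillary_pressure(s_liq, sigma=0.0625, theta_deg=110.0, eps=0.5, K=1e-12):
        """p_c from liquid saturation s_liq, surface tension sigma (N/m), contact
        angle, porosity eps, and absolute permeability K (m^2)."""
        s = np.clip(s_liq, 1e-6, 1 - 1e-6)
        J = 1.417 * s - 2.120 * s**2 + 1.263 * s**3     # hydrophobic branch (theta > 90 deg)
        return sigma * abs(np.cos(np.radians(theta_deg))) * np.sqrt(eps / K) * J

    print(capillary_pressure(np.array([0.1, 0.3, 0.5])))   # Pa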
Impact of AMS-02 Measurements on Reducing GCR Model Uncertainties
NASA Technical Reports Server (NTRS)
Slaba, T. C.; O'Neill, P. M.; Golge, S.; Norbury, J. W.
2015-01-01
For vehicle design, shield optimization, mission planning, and astronaut risk assessment, the exposure from galactic cosmic rays (GCR) poses a significant and complex problem both in low Earth orbit and in deep space. To address this problem, various computational tools have been developed to quantify the exposure and risk in a wide range of scenarios. Generally, the tool used to describe the ambient GCR environment provides the input into subsequent computational tools and is therefore a critical component of end-to-end procedures. Over the past few years, several researchers have independently and very carefully compared some of the widely used GCR models to more rigorously characterize model differences and quantify uncertainties. All of the GCR models studied rely heavily on calibrating to available near-Earth measurements of GCR particle energy spectra, typically over restricted energy regions and short time periods. In this work, we first review recent sensitivity studies quantifying the ions and energies in the ambient GCR environment of greatest importance to exposure quantities behind shielding. Currently available measurements used to calibrate and validate GCR models are also summarized within this context. It is shown that the AMS-02 measurements will fill a critically important gap in the measurement database. The emergence of AMS-02 measurements also provides a unique opportunity to validate existing models against measurements that were not used to calibrate free parameters in the empirical descriptions. Discussion is given regarding rigorous approaches to implement the independent validation efforts, followed by recalibration of empirical parameters.
NASA Technical Reports Server (NTRS)
Pliutau, Denis; Prasad, Narasimha S
2013-01-01
Studies were performed to carry out semi-empirical validation of a new measurement approach we propose for the determination of molecular mixing ratios. The approach is based on relative measurements in bands of O2 and other molecules and as such may be best described as cross-band relative absorption (CoBRA). The current validation studies rely upon well-verified and established theoretical and experimental databases, satellite data assimilations and modeling codes such as HITRAN, the line-by-line radiative transfer model (LBLRTM), and the modern-era retrospective analysis for research and applications (MERRA). The approach holds promise for atmospheric mixing ratio measurements of CO2 and a variety of other molecules currently under investigation for several future satellite lidar missions. One of the advantages of the method is a significant reduction of the temperature sensitivity uncertainties, which is illustrated with an application to the ASCENDS mission for the measurement of CO2 mixing ratios (XCO2). Additional advantages of the method include the possibility to closely match cross-band weighting function combinations, which is harder to achieve using conventional differential absorption techniques, and the potential for additional corrections for water vapor and other interferences without using data from numerical weather prediction (NWP) models.
Toward the Development and Validation of a Career Coach Competency Model
ERIC Educational Resources Information Center
Hatala, John-Paul; Hisey, Lee
2011-01-01
The career coaching profession is a dynamic field that has grown over the last decade. However, there exists a limitation to this field's development, as there is no universally accepted definition or empirically based competencies. There were three phases to the study. In the first phase, a conceptual model was developed that highlights four…
ERIC Educational Resources Information Center
Forbes, Cory T.; Zangori, Laura; Schwarz, Christina V.
2015-01-01
Water is a crucial topic that spans the K-12 science curriculum, including the elementary grades. Students should engage in the articulation, negotiation, and revision of model-based explanations about hydrologic phenomena. However, past research has shown that students, particularly early learners, often struggle to understand hydrologic…
Factors Affecting the Effectiveness and Use of Moodle: Students' Perception
ERIC Educational Resources Information Center
Damnjanovic, Vesna; Jednak, Sandra; Mijatovic, Ivana
2015-01-01
The purpose of this research paper is to identify the factors affecting the effectiveness of Moodle from the students' perspective. The research hypotheses derived from the suggested extended Seddon model have been empirically validated using the responses to a survey on e-learning usage among 255 users. We tested the model across higher education…
ERIC Educational Resources Information Center
Manolis, Chris; Burns, David J.; Assudani, Rashmi; Chinta, Ravi
2013-01-01
To understand experiential learning, many have reiterated the need to be able to identify students' learning styles. Kolb's Learning Style Model is the most widely accepted learning style model and has received a substantial amount of empirical support. Kolb's Learning Style Inventory (LSI), although one of the most widely utilized instruments to…
Three-Level Analysis of Single-Case Experimental Data: Empirical Validation
ERIC Educational Resources Information Center
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Beretvas, S. Natasha; Van den Noortgate, Wim
2014-01-01
One approach for combining single-case data involves use of multilevel modeling. In this article, the authors use a Monte Carlo simulation study to inform applied researchers under which realistic conditions the three-level model is appropriate. The authors vary the value of the immediate treatment effect and the treatment's effect on the time…
Andrew J. Shirk; Michael A. Schroeder; Leslie A. Robb; Samuel A. Cushman
2015-01-01
The ability of landscapes to impede species' movement or gene flow may be quantified by resistance models. Few studies have assessed the performance of resistance models parameterized by expert opinion. In addition, resistance models differ in terms of spatial and thematic resolution as well as their focus on the ecology of a particular species or more generally on the...
NASA Astrophysics Data System (ADS)
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-01
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ˜ 2°, than those from the three empirical models with averaged errors > ˜ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
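The comparison described in this abstract reduces, sample by sample, to the angle between a derived field direction and a model-predicted one. The short sketch below shows one straightforward way to compute that angular error; the example vectors are hypothetical placeholders, not LANL GEO data.

    import numpy as np

    def angular_error_deg(b_derived, b_model):
        # Angle (degrees) between two magnetic field direction vectors.
        u = np.asarray(b_derived, dtype=float)
        v = np.asarray(b_model, dtype=float)
        u /= np.linalg.norm(u)
        v /= np.linalg.norm(v)
        # Clip guards against round-off pushing the dot product outside [-1, 1].
        return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

    # Hypothetical example: a derived direction vs. an empirical-model prediction
    print(angular_error_deg([0.1, 0.2, 0.97], [0.05, 0.25, 0.96]))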
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-21
Our study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Furthermore, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ~2°, than those from the three empirical models with averaged errors > ~5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
Validation and calibration of structural models that combine information from multiple sources.
Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A
2017-02-01
Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.
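Where calibration is viewed as estimation, as argued above, selecting parameter values to improve agreement of model outputs with data can be cast as minimizing a weighted discrepancy. The following is a minimal sketch of that idea using nonlinear least squares; the model form, calibration targets, and weights are hypothetical rather than drawn from the review.

    import numpy as np
    from scipy.optimize import least_squares

    def model_outputs(params, inputs):
        # Hypothetical structural model: outputs as a function of two parameters.
        a, b = params
        return a * np.exp(-b * inputs)

    def residuals(params, inputs, targets, weights):
        # Weighted discrepancies between model outputs and calibration targets.
        return weights * (model_outputs(params, inputs) - targets)

    inputs = np.array([0.0, 1.0, 2.0, 4.0])
    targets = np.array([1.00, 0.62, 0.37, 0.14])          # e.g., observed outcome frequencies
    weights = 1.0 / np.array([0.05, 0.05, 0.04, 0.03])    # inverse standard errors (assumed)

    fit = least_squares(residuals, x0=[1.0, 0.5], args=(inputs, targets, weights))
    print("calibrated parameters:", fit.x)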
Construct Validity of the Autism Impact Measure (AIM).
Mazurek, Micah O; Carlson, Coleen; Baker-Ericzén, Mary; Butter, Eric; Norris, Megan; Kanne, Stephen
2018-01-17
The Autism Impact Measure (AIM) was designed to track incremental change in frequency and impact of core ASD symptoms. The current study examined the structural and convergent validity of the AIM in a large sample of children with ASD. The results of a series of exploratory and confirmatory factor analyses yielded a final model with five theoretically and empirically meaningful subdomains: Repetitive Behavior, Atypical Behavior, Communication, Social Reciprocity, and Peer Interaction. The final model showed very good fit both overall and for each of the five factors, indicating excellent structural validity. AIM subdomain scores were significantly correlated with measures of similar constructs across all five domains. The results provide further support for the psychometric properties of the AIM.
NASA Astrophysics Data System (ADS)
Gordeev, E.; Sergeev, V.; Honkonen, I.; Kuznetsova, M.; Rastätter, L.; Palmroth, M.; Janhunen, P.; Tóth, G.; Lyon, J.; Wiltberger, M.
2015-12-01
Global magnetohydrodynamic (MHD) modeling is a powerful tool in space weather research and predictions. There are several advanced and still developing global MHD (GMHD) models that are publicly available via the Community Coordinated Modeling Center's (CCMC) Run on Request system, which allows users to simulate the magnetospheric response to different solar wind conditions including extraordinary events, like geomagnetic storms. Systematic validation of GMHD models against observations continues to be a challenge, as does comparative benchmarking of different models against each other. In this paper we describe and test a new approach in which (i) a set of critical large-scale system parameters is explored/tested, which are produced by (ii) a specially designed set of computer runs to simulate realistic statistical distributions of critical solar wind parameters and are compared to (iii) observation-based empirical relationships for these parameters. Being tested in approximately similar conditions (similar inputs, comparable grid resolution, etc.), the four models publicly available at the CCMC predict rather well the absolute values and variations of those key parameters (magnetospheric size, magnetic field, and pressure) which are directly related to the large-scale magnetospheric equilibrium in the outer magnetosphere, for which the MHD is supposed to be a valid approach. At the same time, the models have systematic differences in other parameters, being especially different in predicting the global convection rate, total field-aligned current, and magnetic flux loading into the magnetotail after the north-south interplanetary magnetic field turning. According to validation results, none of the models emerges as an absolute leader. The new approach suggested for the evaluation of the models' performance against reality may be used by model users while planning their investigations, as well as by model developers and those interested in quantitatively evaluating progress in magnetospheric modeling.
Ahmed, Tamer; Filiatrault, Johanne; Yu, Hsiu-Ting; Zunzunegui, Maria Victoria
2017-01-01
Purpose: Active aging is a concept that lacks consensus. The WHO defines it as a holistic concept that encompasses the overall health, participation, and security of older adults. Fernández-Ballesteros and colleagues propose a similar concept but omit security and include mood and cognitive function. To date, researchers attempting to validate conceptual models of active aging have obtained mixed results. The goal of this study was to examine the validity of existing models of active aging with epidemiological data from Canada. Methods: The WHO model of active aging and the psychological model of active aging developed by Fernández-Ballesteros and colleagues were tested with confirmatory factor analysis. The data used included 799 community-dwelling older adults between 65 and 74 years old, recruited from the patient lists of family physicians in Saint-Hyacinthe, Quebec and Kingston, Ontario. Results: Neither model could be validated in the sample of Canadian older adults. Although a concept of healthy aging can be modeled adequately, social participation and security did not fit a latent factor model. A simple binary index indicated that 27% of older adults in the sample did not meet the active aging criteria proposed by the WHO. Implications: Our results suggest that active aging might represent a human rights policy orientation rather than an empirical measurement tool to guide research among older adult populations. Binary indexes of active aging may serve to highlight what remains to be improved about the health, participation, and security of growing populations of older adults. PMID:26350153
Tabung, Fred K.; Wang, Weike; Fung, Teresa T.; Hu, Frank B.; Smith-Warner, Stephanie A.; Chavarro, Jorge E.; Fuchs, Charles S.; Willett, Walter C.; Giovannucci, Edward L.
2017-01-01
The glycemic and insulin indices assess postprandial glycemic and insulin response to foods respectively, which may not reflect the long-term effects of diet on insulin response. We developed and evaluated the validity of four empirical indices to assess the insulinemic potential of usual diets and lifestyles, using dietary, lifestyle and biomarker data from the Nurses’ Health Study (NHS, n=5,812 for hyperinsulinemia, n=3,929 for insulin resistance). The four indices were: the empirical dietary index for hyperinsulinemia (EDIH) and empirical lifestyle index for hyperinsulinemia (ELIH); empirical dietary index for insulin resistance (EDIR) and empirical lifestyle index for insulin resistance (ELIR). We entered 39 food frequency questionnaire-derived food groups in stepwise linear regression models and defined indices as the patterns most predictive of fasting plasma C-peptide, for the hyperinsulinemia pathway (EDIH and ELIH); and of the triglyceride/high density lipoprotein-cholesterol (TG/HDL) ratio, for the insulin resistance pathway (EDIR and ELIR). We evaluated the validity of indices in two independent samples from NHS-II and Health Professionals Follow-up Study (HPFS) using multivariable-adjusted linear regression analyses to calculate relative concentrations of biomarkers. EDIH is comprised of 18 food groups; 13 were positively associated with C-peptide, five inversely. EDIR is comprised of 18 food groups; ten were positively associated with TG/HDL and eight inversely. Lifestyle indices had fewer dietary components, and included BMI and physical activity as components. In the validation samples, all indices significantly predicted biomarker concentrations, e.g., the relative concentrations (95%CI) of the corresponding biomarkers comparing extreme index quintiles in HPFS were: EDIH, 1.29(1.22, 1.37); ELIH, 1.78(1.68, 1.88); EDIR, 1.44(1.34, 1.55); ELIR, 2.03(1.89, 2.19); all P-trend<0.0001. The robust associations of these novel hypothesis-driven indices with insulin response biomarker concentrations suggests their usefulness in assessing the ability of whole diets and lifestyles to stimulate and/or sustain insulin secretion. PMID:27821188
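A minimal sketch of the general construction described above, stepwise linear regression that selects the food groups most predictive of a biomarker, is given below. The food-group names, synthetic data, and cross-validated selection criterion are illustrative assumptions, not the NHS data or the authors' exact stepwise procedure.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    def forward_select(X, y, max_terms=18):
        # Greedy forward selection of predictors by cross-validated R^2.
        selected, remaining = [], list(X.columns)
        best_score = -np.inf
        while remaining and len(selected) < max_terms:
            scores = {
                col: cross_val_score(LinearRegression(), X[selected + [col]], y,
                                     cv=5, scoring="r2").mean()
                for col in remaining
            }
            col, score = max(scores.items(), key=lambda kv: kv[1])
            if score <= best_score:
                break
            selected.append(col)
            remaining.remove(col)
            best_score = score
        return selected

    # Hypothetical food-group intakes and a log biomarker (e.g., C-peptide)
    rng = np.random.default_rng(0)
    X = pd.DataFrame(rng.normal(size=(500, 10)),
                     columns=[f"food_group_{i}" for i in range(10)])
    y = 0.4 * X["food_group_1"] - 0.3 * X["food_group_4"] + rng.normal(scale=0.5, size=500)
    print(forward_select(X, y))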
A computational continuum model of poroelastic beds
Zampogna, G. A.
2017-01-01
Despite the ubiquity of fluid flows interacting with porous and elastic materials, we lack a validated non-empirical macroscale method for characterizing the flow over and through a poroelastic medium. We propose a computational tool to describe such configurations by deriving and validating a continuum model for the poroelastic bed and its interface with the above free fluid. We show that, using stress continuity condition and slip velocity condition at the interface, the effective model captures the effects of small changes in the microstructure anisotropy correctly and predicts the overall behaviour in a physically consistent and controllable manner. Moreover, we show that the performance of the effective model is accurate by validating with fully microscopic resolved simulations. The proposed computational tool can be used in investigations in a wide range of fields, including mechanical engineering, bio-engineering and geophysics. PMID:28413355
A new simple local muscle recovery model and its theoretical and experimental validation.
Ma, Liang; Zhang, Wei; Wu, Su; Zhang, Zhanwu
2015-01-01
This study was conducted to provide theoretical and experimental validation of a local muscle recovery model. Muscle recovery has been modeled in different empirical and theoretical approaches to determine work-rest allowance for musculoskeletal disorder (MSD) prevention. However, time-related parameters and individual attributes have not been sufficiently considered in conventional approaches. A new muscle recovery model was proposed by integrating time-related task parameters and individual attributes. Theoretically, this muscle recovery model was compared to other theoretical models mathematically. Experimentally, a total of 20 subjects participated in the experimental validation. Hand grip force recovery and shoulder joint strength recovery were measured after a fatiguing operation. The recovery profile was fitted by using the recovery model, and individual recovery rates were calculated as well after fitting. Good fitting values (r² > .8) were found for all the subjects. Significant differences in recovery rates were found among different muscle groups (p < .05). The theoretical muscle recovery model was primarily validated by characterization of the recovery process after the fatiguing operation. The determined recovery rate may be useful to represent individual recovery attributes.
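One simple functional form consistent with the kind of recovery-profile fitting described above is an exponential return toward maximum strength, with the fitted exponent serving as the individual recovery rate. The sketch below fits such a curve to synthetic grip-strength data; the functional form, the fixed fatigued level, and the data are assumptions rather than the authors' exact model.

    import numpy as np
    from scipy.optimize import curve_fit

    def recovery(t, R):
        # Fraction of maximum strength recovered at time t (min), rate R (1/min).
        # Assumes exponential recovery from a fatigued level f0 toward 1.0
        # (maximum voluntary contraction); f0 is fixed here for simplicity.
        f0 = 0.6
        return 1.0 - (1.0 - f0) * np.exp(-R * t)

    # Synthetic grip-strength measurements after a fatiguing task
    t = np.array([0, 1, 2, 4, 8, 15], dtype=float)          # minutes
    f = np.array([0.61, 0.70, 0.77, 0.86, 0.94, 0.98])      # fraction of MVC

    (R_hat,), cov = curve_fit(recovery, t, f, p0=[0.2])
    ss_res = np.sum((f - recovery(t, R_hat)) ** 2)
    ss_tot = np.sum((f - f.mean()) ** 2)
    print(f"recovery rate = {R_hat:.3f} 1/min, r^2 = {1 - ss_res / ss_tot:.3f}")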
Body Topography Parcellates Human Sensory and Motor Cortex.
Kuehn, Esther; Dinse, Juliane; Jakobsen, Estrid; Long, Xiangyu; Schäfer, Andreas; Bazin, Pierre-Louis; Villringer, Arno; Sereno, Martin I; Margulies, Daniel S
2017-07-01
The cytoarchitectonic map as proposed by Brodmann currently dominates models of human sensorimotor cortical structure, function, and plasticity. According to this model, primary motor cortex, area 4, and primary somatosensory cortex, area 3b, are homogenous areas, with the major division lying between the two. Accumulating empirical and theoretical evidence, however, has begun to question the validity of the Brodmann map for various cortical areas. Here, we combined in vivo cortical myelin mapping with functional connectivity analyses and topographic mapping techniques to reassess the validity of the Brodmann map in human primary sensorimotor cortex. We provide empirical evidence that area 4 and area 3b are not homogenous, but are subdivided into distinct cortical fields, each representing a major body part (the hand and the face). Myelin reductions at the hand-face borders are cortical layer-specific, and coincide with intrinsic functional connectivity borders as defined using large-scale resting state analyses. Our data extend the Brodmann model in human sensorimotor cortex and suggest that body parts are an important organizing principle, similar to the distinction between sensory and motor processing. © The Author 2017. Published by Oxford University Press.
The influence of service quality and patients' emotions on satisfaction.
Vinagre, Maria Helena; Neves, José
2008-01-01
The purpose of this research is to develop and empirically test a model to examine the major factors affecting patients' satisfaction that depict and estimate the relationships between service quality, patients' emotions, expectations and involvement. The approach was tested using structural equation modeling, with a sample of 317 patients from six Portuguese public healthcare centres, using a revised SERVQUAL scale for service quality evaluation and an adapted DESII scale for assessing patient emotions. The scales used to evaluate service quality and emotional experience appear valid. The results support process complexity that leads to health service satisfaction, which involves diverse phenomena within the cognitive and emotional domain, revealing that all the predictors have a significant effect on satisfaction. The emotions inventory, although showing good internal consistency, might be enlarged to other typologies in the further research needed to confirm these findings. Patients' satisfaction mechanisms are important for improving service quality. The research provides empirical evidence of the effect of both patients' emotions and service quality on satisfaction with healthcare services. Findings also provide a model that includes valid and reliable measures.
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Saha, Sankalita; Goebel, Kai
2011-01-01
Accelerated aging methodologies for electrolytic components have been designed and accelerated aging experiments have been carried out. The methodology is based on imposing electrical and/or thermal overstresses via electrical power cycling in order to mimic the real world operation behavior. Data are collected in-situ and offline in order to periodically characterize the devices' electrical performance as it ages. The data generated through these experiments are meant to provide capability for the validation of prognostic algorithms (both model-based and data-driven). Furthermore, the data allow validation of physics-based and empirical based degradation models for this type of capacitor. A first set of models and algorithms has been designed and tested on the data.
Approaches to Validation of Models for Low Gravity Fluid Behavior
NASA Technical Reports Server (NTRS)
Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad
2005-01-01
This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that most of the data are described by empirical correlation rather than fundamental relation; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, while the zero-gravity time available has been only seconds.
Validation of Model Forecasts of the Ambient Solar Wind
NASA Technical Reports Server (NTRS)
Macneice, P. J.; Hesse, M.; Kuznetsova, M. M.; Rastaetter, L.; Taktakishvili, A.
2009-01-01
Independent and automated validation is a vital step in the progression of models from the research community into operational forecasting use. In this paper we describe a program in development at the CCMC to provide just such a comprehensive validation for models of the ambient solar wind in the inner heliosphere. We have built upon previous efforts published in the community, sharpened their definitions, and completed a baseline study. We also provide first results from this program of the comparative performance of the MHD models available at the CCMC against that of the Wang-Sheeley-Arge (WSA) model. An important goal of this effort is to provide a consistent validation to all available models. Clearly exposing the relative strengths and weaknesses of the different models will enable forecasters to craft more reliable ensemble forecasting strategies. Models of the ambient solar wind are developing rapidly as a result of improvements in data supply, numerical techniques, and computing resources. It is anticipated that in the next five to ten years, the MHD based models will supplant semi-empirical potential based models such as the WSA model, as the best available forecast models. We anticipate that this validation effort will track this evolution and so assist policy makers in gauging the value of past and future investment in modeling support.
Cross-validation to select Bayesian hierarchical models in phylogenetics.
Duchêne, Sebastián; Duchêne, David A; Di Giallonardo, Francesca; Eden, John-Sebastian; Geoghegan, Jemma L; Holt, Kathryn E; Ho, Simon Y W; Holmes, Edward C
2016-05-26
Recent developments in Bayesian phylogenetic models have increased the range of inferences that can be drawn from molecular sequence data. Accordingly, model selection has become an important component of phylogenetic analysis. Methods of model selection generally consider the likelihood of the data under the model in question. In the context of Bayesian phylogenetics, the most common approach involves estimating the marginal likelihood, which is typically done by integrating the likelihood across model parameters, weighted by the prior. Although this method is accurate, it is sensitive to the presence of improper priors. We explored an alternative approach based on cross-validation that is widely used in evolutionary analysis. This involves comparing models according to their predictive performance. We analysed simulated data and a range of viral and bacterial data sets using a cross-validation approach to compare a variety of molecular clock and demographic models. Our results show that cross-validation can be effective in distinguishing between strict- and relaxed-clock models and in identifying demographic models that allow growth in population size over time. In most of our empirical data analyses, the model selected using cross-validation was able to match that selected using marginal-likelihood estimation. The accuracy of cross-validation appears to improve with longer sequence data, particularly when distinguishing between relaxed-clock models. Cross-validation is a useful method for Bayesian phylogenetic model selection. This method can be readily implemented even when considering complex models where selecting an appropriate prior for all parameters may be difficult.
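The core of the cross-validation approach, scoring candidate models by predictive performance on held-out data rather than by marginal likelihood, can be sketched generically as below. The two placeholder models and the synthetic data stand in for the clock and demographic models actually compared; this is not a phylogenetic implementation.

    import numpy as np
    from scipy.stats import norm
    from sklearn.model_selection import KFold

    def heldout_loglik(y, fit_and_predict, k=5, seed=1):
        # Mean held-out log-likelihood under k-fold cross-validation.
        scores = []
        for train, test in KFold(n_splits=k, shuffle=True, random_state=seed).split(y):
            mu, sigma = fit_and_predict(y[train])
            scores.append(norm.logpdf(y[test], mu, sigma).mean())
        return float(np.mean(scores))

    # Two placeholder candidate models (stand-ins for competing model classes):
    def model_a(y_train):
        return y_train.mean(), y_train.std(ddof=1)

    def model_b(y_train):
        med = np.median(y_train)
        return med, 1.4826 * np.median(np.abs(y_train - med))

    rng = np.random.default_rng(2)
    y = rng.lognormal(mean=-2.0, sigma=0.3, size=200)   # hypothetical data

    for name, model in [("model_a", model_a), ("model_b", model_b)]:
        print(name, round(heldout_loglik(y, model), 3))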
Olsen, S O
2001-04-01
A theoretical model of involvement in consumption of food products was tested in a representative survey of Norwegian households for the particular case of consuming seafood as a common family meal. The empirical study is based on a structural equation approach to test the construct validity of measures and the empirical fit of the theoretical model. Attitudes, negative feelings, social norms and moral obligation proved to be important, reliable and distinct constructs and explained 63% of the variation in seafood involvement. Negative feelings and moral obligation were the most important antecedents of involvement. Both our proposed model and a modified model with seafood involvement as a mediator fitted the data well and supported our expectations in a promising way. Copyright 2001 Academic Press.
Positive Psychology versus the Medical Model?: Comment
ERIC Educational Resources Information Center
Joseph, Stephen; Linley, P. Alex
2006-01-01
Comments on "Positive psychology progress: Empirical validation of interventions" by Seligman, Steen, Park, and Peterson (see record 2005-08033-003). Seligman and colleagues provided a progress report on positive psychology, reviewing the impressive developments over the past five years. We wholeheartedly support the positive psychology movement…
Terrorism as a process: a critical review of Moghaddam's "Staircase to Terrorism".
Lygre, Ragnhild B; Eid, Jarle; Larsson, Gerry; Ranstorp, Magnus
2011-12-01
This study reviews empirical evidence for Moghaddam's model "Staircase to Terrorism," which portrays terrorism as a process of six consecutive steps culminating in terrorism. An extensive literature search, where 2,564 publications on terrorism were screened, resulted in 38 articles which were subject to further analysis. The results showed that while most of the theories and processes linked to Moghaddam's model are supported by empirical evidence, the proposed transitions between the different steps are not. These results may question the validity of a linear stepwise model and may suggest that a combination of mechanisms/factors could combine in different ways to produce terrorism. © 2011 The Authors. Scandinavian Journal of Psychology © 2011 The Scandinavian Psychological Associations.
The Gaussian copula model for the joint deficit index for droughts
NASA Astrophysics Data System (ADS)
Van de Vyver, H.; Van den Bergh, J.
2018-06-01
The characterization of droughts and their impacts is very dependent on the time scale that is involved. In order to obtain an overall drought assessment, the cumulative effects of water deficits over different times need to be examined together. For example, the recently developed joint deficit index (JDI) is based on multivariate probabilities of precipitation over various time scales from 1 to 12 months, and was constructed from empirical copulas. In this paper, we examine the Gaussian copula model for the JDI. We model the covariance across the temporal scales with a two-parameter function that is commonly used in the specific context of spatial statistics or geostatistics. The validity of the covariance models is demonstrated with long-term precipitation series. Bootstrap experiments indicate that the Gaussian copula model has advantages over the empirical copula method in the context of drought severity assessment: (i) it is able to quantify droughts outside the range of the empirical copula, (ii) it provides adequate drought quantification, and (iii) it provides a better understanding of the uncertainty in the estimation.
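A minimal sketch of the Gaussian copula construction described above: transform the marginal cumulative precipitation at each time scale to normal scores, then evaluate the joint non-exceedance probability under a multivariate normal whose covariance decays across time scales. The covariance form, its parameter values, the final standardization step, and the synthetic precipitation are illustrative assumptions, not the paper's fitted model.

    import numpy as np
    from scipy.stats import norm, multivariate_normal, rankdata

    def empirical_cdf(x):
        # Plotting-position empirical CDF, kept strictly inside (0, 1).
        return rankdata(x) / (len(x) + 1.0)

    def scale_covariance(n_scales, sill=1.0, decay=4.0, power=1.0):
        # Illustrative two-parameter covariance across the 1..n_scales time scales.
        h = np.abs(np.subtract.outer(np.arange(n_scales), np.arange(n_scales)))
        return sill * np.exp(-(h / decay) ** power)

    def joint_deficit_index(precip, cov):
        # precip: (years, scales) cumulative precipitation; Gaussian copula sketch.
        z = norm.ppf(np.column_stack(
            [empirical_cdf(precip[:, j]) for j in range(precip.shape[1])]))
        mvn = multivariate_normal(mean=np.zeros(cov.shape[0]), cov=cov)
        p_joint = np.array([mvn.cdf(row) for row in z])   # joint non-exceedance probability
        return norm.ppf(p_joint)                          # standardized index (assumed convention)

    rng = np.random.default_rng(3)
    precip = np.cumsum(rng.gamma(2.0, 40.0, size=(50, 12)), axis=1)  # synthetic data
    jdi = joint_deficit_index(precip, scale_covariance(12))
    print(jdi[:5].round(2))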
NASA Astrophysics Data System (ADS)
Wu, Qing; Luu, Quang-Hung; Tkalich, Pavel; Chen, Ge
2018-04-01
Having great impacts on human lives, global warming and associated sea level rise are believed to be strongly linked to anthropogenic causes. Statistical approach offers a simple and yet conceptually verifiable combination of remotely connected climate variables and indices, including sea level and surface temperature. We propose an improved statistical reconstruction model based on the empirical dynamic control system by taking into account the climate variability and deriving parameters from Monte Carlo cross-validation random experiments. For the historic data from 1880 to 2001, we yielded higher correlation results compared to those from other dynamic empirical models. The averaged root mean square errors are reduced in both reconstructed fields, namely, the global mean surface temperature (by 24-37%) and the global mean sea level (by 5-25%). Our model is also more robust as it notably diminished the unstable problem associated with varying initial values. Such results suggest that the model not only enhances significantly the global mean reconstructions of temperature and sea level but also may have a potential to improve future projections.
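The Monte Carlo cross-validation step mentioned above, repeated random train/test splits used to choose model parameters by out-of-sample error, can be sketched as follows. The ridge regression, predictors, and target series are placeholders, not the authors' empirical dynamic reconstruction model.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import ShuffleSplit
    from sklearn.metrics import mean_squared_error

    def mc_cv_rmse(X, y, alpha, n_splits=200, test_size=0.25, seed=4):
        # Average out-of-sample RMSE over repeated random train/test splits.
        splitter = ShuffleSplit(n_splits=n_splits, test_size=test_size, random_state=seed)
        rmses = []
        for train, test in splitter.split(X):
            model = Ridge(alpha=alpha).fit(X[train], y[train])
            rmses.append(mean_squared_error(y[test], model.predict(X[test])) ** 0.5)
        return np.mean(rmses)

    # Hypothetical predictors (e.g., climate indices) and a target series (e.g., GMSL)
    rng = np.random.default_rng(4)
    X = rng.normal(size=(120, 5))
    y = X @ np.array([1.0, 0.5, 0.0, -0.3, 0.2]) + rng.normal(scale=0.5, size=120)

    for alpha in (0.01, 0.1, 1.0, 10.0):
        print(alpha, round(mc_cv_rmse(X, y, alpha), 3))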
Malthusian dynamics in a diverging Europe: Northern Italy, 1650-1881.
Fernihough, Alan
2013-02-01
Recent empirical research questions the validity of using Malthusian theory in preindustrial England. Using real wage and vital rate data for the years 1650-1881, I provide empirical estimates for a different region: Northern Italy. The empirical methodology is theoretically underpinned by a simple Malthusian model, in which population, real wages, and vital rates are determined endogenously. My findings strongly support the existence of a Malthusian economy wherein population growth decreased living standards, which in turn influenced vital rates. However, these results also demonstrate how the system is best characterized as one of weak homeostasis. Furthermore, there is no evidence of Boserupian effects given that increases in population failed to spur any sustained technological progress.
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
NASA Astrophysics Data System (ADS)
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict it. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether the state is in a stable or chaotic condition and predicts its future condition. The central model, which we test, is that the growth and collapse of states are reflected by changes in their territories, populations and budgets. The model was simulated within the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches well with the historical events. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series for the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
2012-01-01
Background: Patient Safety Indicators (PSI) are being modestly used in Spain, somewhat due to concerns on their empirical properties. This paper provides evidence by answering three questions: a) Are PSI differences across hospitals systematic, rather than random?; b) Do PSI measure differences among hospital providers, as opposed to differences among patients?; and, c) Are measurements able to detect hospitals with a higher than "expected" number of cases? Methods: An empirical validation study on administrative data was carried out. All 2005 and 2006 publicly-funded hospital discharges were used to retrieve eligible cases of five PSI: Death in low-mortality DRGs (MLM); decubitus ulcer (DU); postoperative pulmonary embolism or deep-vein thrombosis (PE-DVT); catheter-related infections (CRI), and postoperative sepsis (PS). Empirical Bayes statistic (EB) was used to estimate whether the variation was systematic; logistic-multilevel modelling determined what proportion of the variation was explained by the hospital; and, shrunken residuals, as provided by multilevel modelling, were plotted to flag hospitals performing worse than expected. Results: Variation across hospitals was observed to be systematic in all indicators, with EB values ranging from 0.19 (CI95%:0.12 to 0.28) in PE-DVT to 0.34 (CI95%:0.25 to 0.45) in DU. A significant proportion of the variance was explained by the hospital, once patient case-mix was adjusted: from a 6% in MLM (CI95%:3% to 11%) to a 24% (CI95%:20% to 30%) in CRI. All PSI were able to flag hospitals with rates over the expected, although this capacity decreased when the largest hospitals were analysed. Conclusion: Five PSI showed reasonable empirical properties to screen healthcare performance in Spanish hospitals, particularly in the largest ones. PMID:22369291
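The flagging logic described above rests on shrinkage: hospital-specific rates are pulled toward the overall mean in proportion to their reliability before hospitals with more cases than expected are flagged. The sketch below is a deliberately simplified empirical Bayes illustration on the log(observed/expected) scale with hypothetical counts; it is not the multilevel logistic model used in the study.

    import numpy as np

    def eb_shrunken_log_oe(observed, expected):
        # Empirical Bayes shrinkage of hospital log(observed/expected) ratios.
        # Hospital-level noise is pulled toward 0 in proportion to its sampling
        # variance; a simple method-of-moments illustration only.
        log_oe = np.log((observed + 0.5) / (expected + 0.5))   # continuity-corrected
        sampling_var = 1.0 / (observed + 0.5)                  # approx. variance of log rate
        between_var = max(np.var(log_oe) - sampling_var.mean(), 1e-6)
        reliability = between_var / (between_var + sampling_var)
        return reliability * log_oe

    observed = np.array([3, 12, 7, 30, 1, 18])              # hypothetical PSI counts per hospital
    expected = np.array([4.0, 8.0, 7.5, 20.0, 2.0, 19.0])   # expected from patient case-mix
    for o, e, s in zip(observed, expected, eb_shrunken_log_oe(observed, expected)):
        print(f"O={o:2d}  E={e:5.1f}  shrunken log(O/E) = {s:+.2f}")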
ERIC Educational Resources Information Center
Rupp, Andre A.
2012-01-01
In the focus article of this issue, von Davier, Naemi, and Roberts essentially coupled: (1) a short methodological review of structural similarities of latent variable models with discrete and continuous latent variables; and (2) 2 short empirical case studies that show how these models can be applied to real, rather than simulated, large-scale…
ERIC Educational Resources Information Center
Lynam, Donald R.; Gaughan, Eric T.; Miller, Joshua D.; Miller, Drew J.; Mullins-Sweatt, Stephanie; Widiger, Thomas A.
2011-01-01
A new self-report assessment of the basic traits of psychopathy was developed with a general trait model of personality (five-factor model [FFM]) as a framework. Scales were written to assess maladaptive variants of the 18 FFM traits that are robustly related to psychopathy across a variety of perspectives including empirical correlations, expert…
The use of analytical models in human-computer interface design
NASA Technical Reports Server (NTRS)
Gugerty, Leo
1991-01-01
Some of the many analytical models in human-computer interface design that are currently being developed are described. The usefulness of analytical models for human-computer interface design is evaluated. Can the use of analytical models be recommended to interface designers? The answer, based on the empirical research summarized here, is: not at this time. There are too many unanswered questions concerning the validity of models and their ability to meet the practical needs of design organizations.
Predictive performance models and multiple task performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
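A minimal sketch of the empirical likelihood ratio construction described above: for each predictor, a univariate likelihood ratio is formed from its frequency distributions in landslide and stable cells, and the per-predictor ratios are multiplied under the conditional independence assumption. The synthetic slope and elevation grids and the smoothing choice are illustrative assumptions, not the Kansas data.

    import numpy as np

    def univariate_lr(values, landslide, bins=10):
        # Per-cell likelihood ratio P(x | landslide) / P(x | stable) for one predictor.
        edges = np.histogram_bin_edges(values, bins=bins)
        idx = np.clip(np.digitize(values, edges) - 1, 0, bins - 1)
        counts_slide, _ = np.histogram(values[landslide], bins=edges)
        counts_stable, _ = np.histogram(values[~landslide], bins=edges)
        p_slide = (counts_slide + 1) / (counts_slide.sum() + bins)    # Laplace-smoothed
        p_stable = (counts_stable + 1) / (counts_stable.sum() + bins)
        return (p_slide / p_stable)[idx]

    # Synthetic grids flattened to cells: slope angle and elevation
    rng = np.random.default_rng(5)
    n = 5000
    slope = rng.gamma(3.0, 5.0, n)
    elev = rng.normal(250.0, 40.0, n)
    landslide = rng.random(n) < 1.0 / (1.0 + np.exp(-0.15 * (slope - 20.0)))  # toy truth

    # Combine predictors multiplicatively (conditional-independence assumption)
    relative_hazard = univariate_lr(slope, landslide) * univariate_lr(elev, landslide)
    print("five largest relative-hazard values:", np.sort(relative_hazard)[-5:].round(2))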
Construct validity of the Moral Development Scale for Professionals (MDSP)
Söderhamn, Olle; Bjørnestad, John Olav; Skisland, Anne; Cliffordson, Christina
2011-01-01
The aim of this study was to investigate the construct validity of the Moral Development Scale for Professionals (MDSP) using structural equation modeling. The instrument is a 12-item self-report instrument, developed in the Scandinavian cultural context and based on Kohlberg’s theory. A hypothesized simplex structure model underlying the MDSP was tested through structural equation modeling. Validity was also tested as the proportion of respondents older than 20 years that reached the highest moral level, which according to the theory should be small. A convenience sample of 339 nursing students with a mean age of 25.3 years participated. Results confirmed the simplex model structure, indicating that MDSP reflects a moral construct empirically organized from low to high. A minority of respondents >20 years of age (13.5%) scored more than 80% on the highest moral level. The findings support the construct validity of the MDSP and the stages and levels in Kohlberg’s theory. PMID:21655343
Semi-Empirical Prediction of Aircraft Low-Speed Aerodynamic Characteristics
NASA Technical Reports Server (NTRS)
Olson, Erik D.
2015-01-01
This paper lays out a comprehensive methodology for computing a low-speed, high-lift polar, without requiring additional details about the aircraft design beyond what is typically available at the conceptual design stage. Introducing low-order, physics-based aerodynamic analyses allows the methodology to be more applicable to unconventional aircraft concepts than traditional, fully-empirical methods. The methodology uses empirical relationships for flap lift effectiveness, chord extension, drag-coefficient increment and maximum lift coefficient of various types of flap systems as a function of flap deflection, and combines these increments with the characteristics of the unflapped airfoils. Once the aerodynamic characteristics of the flapped sections are known, a vortex-lattice analysis calculates the three-dimensional lift, drag and moment coefficients of the whole aircraft configuration. This paper details the results of two validation cases: a supercritical airfoil model with several types of flaps; and a 12-foot, full-span aircraft model with slats and double-slotted flaps.
Limits of Predictability in Commuting Flows in the Absence of Data for Calibration
Yang, Yingxiang; Herrera, Carlos; Eagle, Nathan; González, Marta C.
2014-01-01
The estimation of commuting flows at different spatial scales is a fundamental problem for different areas of study. Many current methods rely on parameters requiring calibration from empirical trip volumes. Their values are often not generalizable to cases without calibration data. To solve this problem we develop a statistical expression to calculate commuting trips with a quantitative functional form to estimate the model parameter when empirical trip data is not available. We calculate commuting trip volumes at scales from within a city to an entire country, introducing a scaling parameter α to the recently proposed parameter free radiation model. The model requires only widely available population and facility density distributions. The parameter can be interpreted as the influence of the region scale and the degree of heterogeneity in the facility distribution. We explore in detail the scaling limitations of this problem, namely under which conditions the proposed model can be applied without trip data for calibration. On the other hand, when empirical trip data is available, we show that the proposed model's estimation accuracy is as good as other existing models. We validated the model in different regions in the U.S., then successfully applied it in three different countries. PMID:25012599
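For reference, the baseline parameter-free radiation model that the paper extends can be written down directly from widely available population quantities; the sketch below implements that baseline form with hypothetical numbers. The scaling parameter α introduced in the paper modifies this expression, and its exact functional form is not reproduced here.

    import numpy as np

    def radiation_flow(m, n, s, T_i):
        # Parameter-free radiation model flow from origin i to destination j.
        #   m   : population of the origin
        #   n   : population of the destination
        #   s   : population inside the circle of radius r_ij centred on the origin,
        #         excluding the origin and destination populations
        #   T_i : total number of commuters leaving the origin
        return T_i * (m * n) / ((m + s) * (m + n + s))

    # Hypothetical example: one origin, three candidate destinations
    m, T_i = 50_000, 15_000
    destinations = [(20_000, 5_000), (8_000, 60_000), (120_000, 200_000)]  # (n_j, s_ij)
    for n, s in destinations:
        print(f"n={n:>7,} s={s:>7,}  predicted trips = {radiation_flow(m, n, s, T_i):8.1f}")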
NASA Technical Reports Server (NTRS)
Mannino, Antonio
2008-01-01
Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs). Understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case II regimes (Carder et al. 2003). Validation in the context of ocean color remote sensing refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit is improving core IOP measurement capabilities (spectral, angular, spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates. Research cruises supported by this project provide for the collection and processing of seawater samples for biogeochemical (pigments, DOC and POC) and optical (CDOM and POM absorption coefficients) analyses to enhance our understanding of the linkages between in-water optical measurements (IOPs and AOPs) and biogeochemical constituents and to provide a more comprehensive suite of validation products.
van der Heijden, A A W A; Feenstra, T L; Hoogenveen, R T; Niessen, L W; de Bruijne, M C; Dekker, J M; Baan, C A; Nijpels, G
2015-12-01
To test a simulation model, the MICADO model, for estimating the long-term effects of interventions in people with and without diabetes. The MICADO model includes micro- and macrovascular diseases in relation to their risk factors. The strengths of this model are its population scope and the possibility to assess parameter uncertainty using probabilistic sensitivity analyses. Outcomes include incidence and prevalence of complications, quality of life, costs and cost-effectiveness. We externally validated MICADO's estimates of micro- and macrovascular complications in a Dutch cohort with diabetes (n = 498,400) by comparing these estimates with national and international empirical data. For the annual number of people undergoing amputations, MICADO's estimate was 592 (95% interquantile range 291-842), which compared well with the registered number of people with diabetes-related amputations in the Netherlands (728). The incidence of end-stage renal disease estimated using the MICADO model was 247 people (95% interquartile range 120-363), which was also similar to the registered incidence in the Netherlands (277 people). MICADO performed well in the validation of macrovascular outcomes of population-based cohorts, while it had more difficulty in reflecting a highly selected trial population. Validation by comparison with independent empirical data showed that the MICADO model simulates the natural course of diabetes and its micro- and macrovascular complications well. As a population-based model, MICADO can be applied for projections as well as scenario analyses to evaluate the long-term (cost-)effectiveness of population-level interventions targeting diabetes and its complications in the Netherlands or similar countries. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.
Validation of a 20-year forecast of US childhood lead poisoning: Updated prospects for 2010.
Jacobs, David E; Nevin, Rick
2006-11-01
We forecast childhood lead poisoning and residential lead paint hazard prevalence for 1990-2010, based on a previously unvalidated model that combines national blood lead data with three different housing data sets. The housing data sets, which describe trends in housing demolition, rehabilitation, window replacement, and lead paint, are the American Housing Survey, the Residential Energy Consumption Survey, and the National Lead Paint Survey. Blood lead data are principally from the National Health and Nutrition Examination Survey. New data now make it possible to validate the midpoint of the forecast time period. For the year 2000, the model predicted 23.3 million pre-1960 housing units with lead paint hazards, compared to an empirical HUD estimate of 20.6 million units. Further, the model predicted 498,000 children with elevated blood lead levels (EBL) in 2000, compared to a CDC empirical estimate of 434,000. The model predictions were well within 95% confidence intervals of empirical estimates for both residential lead paint hazard and blood lead outcome measures. The model shows that window replacement explains a large part of the dramatic reduction in lead poisoning that occurred from 1990 to 2000. Here, the construction of the model is described and updated through 2010 using new data. Further declines in childhood lead poisoning are achievable, but the goal of eliminating children's blood lead levels ≥10 µg/dL by 2010 is unlikely to be achieved without additional action. A window replacement policy will yield multiple benefits of lead poisoning prevention, increased home energy efficiency, decreased power plant emissions, improved housing affordability, and other previously unrecognized benefits. Finally, combining housing and health data could be applied to forecasting other housing-related diseases and injuries.
Jones, Rachael M; Simmons, Catherine; Boelter, Fred
2011-06-01
Drywall finishing is a dusty construction activity. We describe a mathematical model that predicts the time-weighted average concentration of respirable and total dusts in the personal breathing zone of the sander and in the area surrounding joint compound sanding activities. The model represents spatial variation in dust concentrations using two zones, and temporal variation using an exponential function. Interzone flux and the relationships between respirable and total dusts are described using empirical factors. For model evaluation, we measured dust concentrations in two field studies, including three workers from a commercial contracting crew and one unskilled worker. Data from the field studies confirm that the model assumptions and parameterization are reasonable and thus validate the modeling approach. Predicted dust C(twa) were in concordance with measured values for the contracting crew, but underestimated measured values for the unskilled worker. Further characterization of skill-related exposure factors is indicated.
An Empirical Human Controller Model for Preview Tracking Tasks.
van der El, Kasper; Pool, Daan M; Damveld, Herman J; van Paassen, Marinus Rene M; Mulder, Max
2016-11-01
Real-life tracking tasks often show preview information to the human controller about the future track to follow. The effect of preview on manual control behavior is still relatively unknown. This paper proposes a generic operator model for preview tracking, empirically derived from experimental measurements. Conditions included pursuit tracking, i.e., without preview information, and tracking with 1 s of preview. Controlled element dynamics varied between gain, single integrator, and double integrator. The model is derived in the frequency domain, after application of a black-box system identification method based on Fourier coefficients. Parameter estimates are obtained to assess the validity of the model in both the time domain and frequency domain. Measured behavior in all evaluated conditions can be captured with the commonly used quasi-linear operator model for compensatory tracking, extended with two viewpoints of the previewed target. The derived model provides new insights into how human operators use preview information in tracking tasks.
Multimethod latent class analysis
Nussbeck, Fridtjof W.; Eid, Michael
2015-01-01
Correct and, hence, valid classifications of individuals are of high importance in the social sciences as these classifications are the basis for diagnoses and/or the assignment to a treatment. The via regia to inspect the validity of psychological ratings is the multitrait-multimethod (MTMM) approach. First, a latent variable model for the analysis of rater agreement (latent rater agreement model) will be presented that allows for the analysis of convergent validity between different measurement approaches (e.g., raters). Models of rater agreement are transferred to the level of latent variables. Second, the latent rater agreement model will be extended to a more informative MTMM latent class model. This model allows for estimating (i) the convergence of ratings, (ii) method biases in terms of differential latent distributions of raters and differential associations of categorizations within raters (specific rater bias), and (iii) the distinguishability of categories indicating if categories are satisfyingly distinct from each other. Finally, an empirical application is presented to exemplify the interpretation of the MTMM latent class model. PMID:26441714
Are cross-cultural comparisons of norms on death anxiety valid?
Beshai, James A
2008-01-01
Cross-cultural comparisons of norms derived from research on death anxiety are valid as long as they provide existential validity. Existential validity is not empirically derived like construct validity. It is an understanding of being human unto death. It is the realization that death is imminent. It is the inner sense that provides a responder to death anxiety scales with a valid expression of his or her sense about the prospect of dying. It can be articulated in a life review by a disclosure of one's ontology. This article calls upon psychologists who develop death anxiety scales to disclose their presuppositions about death before administering a questionnaire. By disclosing his or her ontology a psychologist provides a means of disclosing his or her intentionality in responding to the items. This humanistic paradigm allows for an interactive participation between investigator and subject. Lester, Templer, and Abdel-Khalek (2006-2007) enriched psychology with significant empirical data on several correlates of death anxiety. But all scientists, especially psychologists, will always have alternative interpretations of the same empirical fact pattern. Empirical data are limited by the logical problem of affirming the consequent. A phenomenology of language and communication makes existential validity a necessary step for a broader understanding of the meaning of death anxiety.
NASA Astrophysics Data System (ADS)
Kiafar, Hamed; Babazadeh, Hosssien; Marti, Pau; Kisi, Ozgur; Landeras, Gorka; Karimi, Sepideh; Shiri, Jalal
2017-10-01
Evapotranspiration estimation is of crucial importance in arid and hyper-arid regions, which suffer from water shortage, increasing dryness and heat. A modeling study is reported here on cross-station assessment between hyper-arid and humid conditions. The derived equations estimate ET0 values based on temperature-, radiation-, and mass transfer-based configurations. Using data from two meteorological stations in a hyper-arid region of Iran and two meteorological stations in a humid region of Spain, different local and cross-station approaches are applied for developing and validating the derived equations. The comparison of the gene expression programming (GEP)-based derived equations with corresponding empirical and semi-empirical ET0 estimation equations reveals the superiority of the new formulas over the corresponding empirical equations. Therefore, the derived models can be successfully applied in these hyper-arid and humid regions as well as in similar climatic contexts, especially in data-scarce situations. The results also show that, when relying on proper input configurations, cross-station application might be a promising alternative to locally trained models for stations with data scarcity.
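For readers unfamiliar with the empirical equations that GEP-derived formulas are typically benchmarked against, the sketch below shows one widely used temperature-based ET0 equation (Hargreaves-Samani). It is offered only as a hedged illustration of the general form of such equations; the example inputs are placeholders, not station data or the derived formulas from this study.

```python
# Hedged illustration of a temperature-based empirical ET0 equation
# (Hargreaves-Samani). Input values below are placeholders, not data
# from the Iranian or Spanish stations discussed above.
def et0_hargreaves(t_mean, t_max, t_min, ra):
    """Reference evapotranspiration (mm/day).

    t_mean, t_max, t_min : daily air temperatures (deg C)
    ra : extraterrestrial radiation expressed as mm/day of evaporation
    """
    return 0.0023 * ra * (t_mean + 17.8) * (t_max - t_min) ** 0.5

# Example day in a hot, dry climate (assumed values).
print(et0_hargreaves(t_mean=32.0, t_max=41.0, t_min=24.0, ra=16.5))
```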
Task analysis exemplified: the process of resolving unfinished business.
Greenberg, L S; Foerster, F S
1996-06-01
The steps of a task-analytic research program designed to identify the in-session performances involved in resolving lingering bad feelings toward a significant other are described. A rational-empirical methodology of repeatedly cycling between rational conjecture and empirical observations is demonstrated as a method of developing an intervention manual and the components of client processes of resolution. A refined model of the change process developed by these procedures is validated by comparing 11 successful and 11 unsuccessful performances. Four performance components (intense expression of feeling, expression of need, shift in representation of other, and self-validation or understanding of the other) were found to discriminate between resolution and nonresolution performances. These components were measured on 4 process measures: the Structural Analysis of Social Behavior, the Experiencing Scale, the Client's Emotional Arousal Scale, and a need scale.
Bélanger, Emmanuelle; Ahmed, Tamer; Filiatrault, Johanne; Yu, Hsiu-Ting; Zunzunegui, Maria Victoria
2017-04-01
Active aging is a concept that lacks consensus. The WHO defines it as a holistic concept that encompasses the overall health, participation, and security of older adults. Fernández-Ballesteros and colleagues propose a similar concept but omit security and include mood and cognitive function. To date, researchers attempting to validate conceptual models of active aging have obtained mixed results. The goal of this study was to examine the validity of existing models of active aging with epidemiological data from Canada. The WHO model of active aging and the psychological model of active aging developed by Fernández-Ballesteros and colleagues were tested with confirmatory factor analysis. The data used included 799 community-dwelling older adults between 65 and 74 years old, recruited from the patient lists of family physicians in Saint-Hyacinthe, Quebec and Kingston, Ontario. Neither model could be validated in the sample of Canadian older adults. Although a concept of healthy aging can be modeled adequately, social participation and security did not fit a latent factor model. A simple binary index indicated that 27% of older adults in the sample did not meet the active aging criteria proposed by the WHO. Our results suggest that active aging might represent a human rights policy orientation rather than an empirical measurement tool to guide research among older adult populations. Binary indexes of active aging may serve to highlight what remains to be improved about the health, participation, and security of growing populations of older adults. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Study of the Performance of Aids to Navigation Systems - Phase 1, An Empirical Model Approach
1978-07-19
[Record text damaged in extraction. Recoverable information: authors include Pesch, Masakasy, Clark, and Atkins; the document is available to the U.S. public through the National Technical Information Service; index terms include piloting, fix, navigator, pilot, Monte Carlo model, and ship simulator; the table of contents lists sections on validation of the entire navigating and steering model, an overview of model capabilities and achieved goals, and a truncated heading beginning "Plan for".]
AlRawi, Sara N; Khidir, Amal; Elnashar, Maha S; Abdelrahim, Huda A; Killawi, Amal K; Hammoud, Maya M; Fetters, Michael D
2017-03-14
Evidence indicates traditional medicine is no longer used only for the healthcare of the poor; its prevalence is also increasing in countries where allopathic medicine is predominant in the healthcare system. While these healing practices have been utilized for thousands of years in the Arabian Gulf, only recently has a theoretical model been developed illustrating the linkages and components of such practices articulated as Traditional Arabic & Islamic Medicine (TAIM). Despite previous theoretical work presenting development of the TAIM model, empirical support has been lacking. The objective of this research is to provide empirical support for the TAIM model and illustrate real world applicability. Using an ethnographic approach, we recruited 84 individuals (43 women and 41 men) who were speakers of one of four common languages in Qatar: Arabic, English, Hindi, and Urdu. Through in-depth interviews, we sought confirming and disconfirming evidence of the model components, namely, health practices, beliefs and philosophy to treat, diagnose, and prevent illnesses and/or maintain well-being, as well as patterns of communication about their TAIM practices with their allopathic providers. Based on our analysis, we find empirical support for all elements of the TAIM model. Participants in this research, visitors to major healthcare centers, mentioned using all elements of the TAIM model: herbal medicines, spiritual therapies, dietary practices, mind-body methods, and manual techniques, applied singularly or in combination. Participants had varying levels of comfort sharing information about TAIM practices with allopathic practitioners. These findings confirm an empirical basis for the elements of the TAIM model. Three elements, namely, spiritual healing, herbal medicine, and dietary practices, were most commonly found. Future research should examine the prevalence of TAIM element use, how it differs among various populations, and its impact on health.
Multiple Roles: The Conflicted Realities of Community College Mission Statements
ERIC Educational Resources Information Center
Mrozinski, Mark D.
2010-01-01
Questions of efficacy have always plagued the use of the mission statement as a strategic planning tool. In most planning models, the mission statement serves to clarify goals and guide the formation of strategies. However, little empirical evidence exists validating that mission statements actually improve the performance of organizations, even…
Improving Quality in Education: Dynamic Approaches to School Improvement
ERIC Educational Resources Information Center
Creemers, Bert P. M.; Kyriakides, Leonidas
2011-01-01
This book explores an approach to school improvement that merges the traditions of educational effectiveness research and school improvement efforts. It displays how the dynamic model, which is theoretical and empirically validated, can be used in both traditions. Each chapter integrates evidence from international and national studies, showing…
A calibration hierarchy for risk models was defined: from utopia to empirical data.
Van Calster, Ben; Nieboer, Daan; Vergouwe, Yvonne; De Cock, Bavo; Pencina, Michael J; Steyerberg, Ewout W
2016-06-01
Calibrated risk models are vital for valid decision support. We define four levels of calibration and describe implications for model development and external validation of predictions. We present results based on simulated data sets. A common definition of calibration is "having an event rate of R% among patients with a predicted risk of R%," which we refer to as "moderate calibration." Weaker forms of calibration only require the average predicted risk (mean calibration) or the average prediction effects (weak calibration) to be correct. "Strong calibration" requires that the event rate equals the predicted risk for every covariate pattern. This implies that the model is fully correct for the validation setting. We argue that this is unrealistic: the model type may be incorrect, the linear predictor is only asymptotically unbiased, and all nonlinear and interaction effects should be correctly modeled. In addition, we prove that moderate calibration guarantees nonharmful decision making. Finally, results indicate that a flexible assessment of calibration in small validation data sets is problematic. Strong calibration is desirable for individualized decision support but unrealistic and counterproductive by stimulating the development of overly complex models. Model development and external validation should focus on moderate calibration. Copyright © 2016 Elsevier Inc. All rights reserved.
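The calibration levels defined above can be checked with fairly simple computations on a validation set. The Python sketch below is a hedged illustration, not code from the paper: the simulated predictions and outcomes are placeholders, and it shows one plausible way to examine mean calibration, weak calibration (logistic recalibration intercept and slope), and a grouped look at moderate calibration.

```python
# Minimal sketch (assumed data): checking mean, weak, and moderate calibration
# of a risk model on a validation set, with `p` holding predicted risks in
# (0, 1) and `y` holding observed 0/1 outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=2000)       # hypothetical predicted risks
y = rng.binomial(1, p * 0.8 + 0.05)          # outcomes from a miscalibrated "truth"

# Mean calibration: average predicted risk vs. observed event rate.
print("mean predicted:", p.mean(), "observed rate:", y.mean())

# Weak calibration: logistic recalibration on the logit of the predictions.
# An intercept near 0 and a slope near 1 indicate weak calibration.
logit_p = np.log(p / (1 - p)).reshape(-1, 1)
recal = LogisticRegression(C=1e6).fit(logit_p, y)   # large C approximates an unpenalized fit
print("calibration intercept:", recal.intercept_[0], "slope:", recal.coef_[0, 0])

# Moderate calibration (grouped check): observed rate within deciles of predicted risk.
deciles = np.quantile(p, np.linspace(0, 1, 11))
groups = np.digitize(p, deciles[1:-1])
for g in range(10):
    m = groups == g
    print(f"decile {g}: mean predicted={p[m].mean():.3f}  observed={y[m].mean():.3f}")
```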
A simplified conjoint recognition paradigm for the measurement of gist and verbatim memory.
Stahl, Christoph; Klauer, Karl Christoph
2008-05-01
The distinction between verbatim and gist memory traces has furthered the understanding of numerous phenomena in various fields, such as false memory research, research on reasoning and decision making, and cognitive development. To measure verbatim and gist memory empirically, an experimental paradigm and multinomial measurement model has been proposed but rarely applied. In the present article, a simplified conjoint recognition paradigm and multinomial model is introduced and validated as a measurement tool for the separate assessment of verbatim and gist memory processes. A Bayesian metacognitive framework is applied to validate guessing processes. Extensions of the model toward incorporating the processes of phantom recollection and erroneous recollection rejection are discussed.
NASA Astrophysics Data System (ADS)
Nafsiati Astuti, Rini
2018-04-01
Argumentation skill is the ability to compose and maintain arguments consisting of claims, supporting evidence, and strengthening reasons. Argumentation is an important skill students need to face the challenges of globalization in the 21st century. It is not an ability that develops by itself along with a person's physical development; it must be developed deliberately, by providing stimuli that require a person to argue. Therefore, teachers should develop students' argumentation skills in science learning in the classroom. The purpose of this study is to obtain an innovative learning model that is valid in terms of content and construct for improving the argumentation skills and concept understanding of junior high school students. The assessment of content validity and construct validity was done through a Focus Group Discussion (FGD), using the content and construct validation sheets, the model book, a learning video, and a set of learning aids for one meeting. Assessment results from three experts showed that the developed learning model was in the valid category. Content validity here means that the developed learning model meets student needs, reflects the state of the art, and rests on a strong theoretical and empirical foundation; construct validity means that the syntax stages and components of the learning model are coherently connected, so that the model can be applied in classroom activities.
ERIC Educational Resources Information Center
Steenbeek, Henderien; van Geert, Paul
2008-01-01
Studying short-term dynamic processes and change mechanisms in interaction yields important knowledge that contributes to understanding long-term social development of children. In order to get a grip on this short-term dynamics of interaction processes, the authors made a dynamic systems model of dyadic interaction of children during one play…
Exploring predictive performance: A reanalysis of the geospace model transition challenge
NASA Astrophysics Data System (ADS)
Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.
2017-01-01
The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al. (2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.
Empirical potential for molecular simulation of graphene nanoplatelets
NASA Astrophysics Data System (ADS)
Bourque, Alexander J.; Rutledge, Gregory C.
2018-04-01
A new empirical potential for layered graphitic materials is reported. Interatomic interactions within a single graphene sheet are modeled using a Stillinger-Weber potential. Interatomic interactions between atoms in different sheets of graphene in the nanoplatelet are modeled using a Lennard-Jones interaction potential. The potential is validated by comparing molecular dynamics simulations of tensile deformation with the reported elastic constants for graphite. The graphite is found to fracture into graphene nanoplatelets when subjected to ~15% tensile strain normal to the basal surface of the graphene stack, with an ultimate stress of 2.0 GPa and toughness of 0.33 GPa. This force field is useful to model molecular interactions in an important class of composite systems comprising 2D materials like graphene and multi-layer graphene nanoplatelets.
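As a concrete illustration of the interlayer term described above, the sketch below evaluates a Lennard-Jones pair potential and its force. The epsilon and sigma values are generic placeholder parameters for carbon-carbon interlayer pairs, not the fitted values of the reported force field.

```python
# Minimal sketch of a Lennard-Jones interlayer pair term. The epsilon/sigma
# values are illustrative placeholders, not the parameters fitted for the
# graphene nanoplatelet force field discussed above.
import numpy as np

EPSILON = 2.4e-3   # eV, assumed well depth for C-C interlayer pairs
SIGMA = 3.4        # Angstrom, assumed zero-crossing distance

def lj_energy_and_force(r):
    """Pair energy (eV) and force magnitude (eV/Angstrom) at separation r."""
    sr6 = (SIGMA / r) ** 6
    energy = 4.0 * EPSILON * (sr6 ** 2 - sr6)
    force = 24.0 * EPSILON * (2.0 * sr6 ** 2 - sr6) / r   # -dE/dr
    return energy, force

for ri in np.linspace(3.0, 8.0, 6):
    e, f = lj_energy_and_force(ri)
    print(f"r = {ri:4.2f} A   E = {e: .5f} eV   F = {f: .5f} eV/A")
```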
Koštrun, Sanja; Munic Kos, Vesna; Matanović Škugor, Maja; Palej Jakopović, Ivana; Malnar, Ivica; Dragojević, Snježana; Ralić, Jovica; Alihodžić, Sulejman
2017-06-16
The aim of this study was to investigate lipophilicity and cellular accumulation of rationally designed azithromycin and clarithromycin derivatives at the molecular level. The effect of substitution site and substituent properties on the global physico-chemical profile and cellular accumulation of the investigated compounds was studied using calculated structural parameters as well as experimentally determined lipophilicity. In silico models based on the 3D structure of the molecules were generated to investigate conformational effects on the studied properties and to enable prediction of lipophilicity and cellular accumulation for this class of molecules based on non-empirical parameters. The applicability of the developed models was explored on validation and test sets and compared with previously developed empirical models. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
Toward an epistemology of clinical psychoanalysis.
Ahumada, J L
1997-01-01
Epistemology emerges from the study of the ways knowledge is gained in the different fields of scientific endeavor. Current polemics on the nature of psychoanalytic knowledge involve counterposed misconceptions of the nature of mind. On one side, clinical psychoanalysis is under siege from philosophical "hard science" stalwarts who, upholding as the unitary model of scientific knowledge the Galilean model of science built around the "well-behaved" variables of mechanics and cosmology, argue that clinical psychoanalysis does not meet empirical criteria for the validation of its claims. On the other side, its empirical character is renounced by hermeneuticists who, agreeing with "hard science" advocates on what science is, dismiss the animal nature of human beings and hold that clinical psychoanalysis is not an empirical science but a "human" interpretive one. Taking Adolf Grünbaum's critique as its referent, this paper examines how, by ignoring the differences between "exact" and observational science, the "hard science" demand for well-behaved variables misconstrues the nature of events in the realm of mind. Criteria for an epistemology fit for the facts of clinical psychoanalysis as an empirical, observational science of mind are then proposed.
An Empirical Model for Vane-Type Vortex Generators in a Navier-Stokes Code
NASA Technical Reports Server (NTRS)
Dudek, Julianne C.
2005-01-01
An empirical model which simulates the effects of vane-type vortex generators in ducts was incorporated into the Wind-US Navier-Stokes computational fluid dynamics code. The model enables the effects of the vortex generators to be simulated without defining the details of the geometry within the grid, and makes it practical for researchers to evaluate multiple combinations of vortex generator arrangements. The model determines the strength of each vortex based on the generator geometry and the local flow conditions. Validation results are presented for flow in a straight pipe with a counter-rotating vortex generator arrangement, and the results are compared with experimental data and computational simulations using a gridded vane generator. Results are also presented for vortex generator arrays in two S-duct diffusers, along with accompanying experimental data. The effects of grid resolution and turbulence model are also examined.
Tricomi, Leonardo; Melchiori, Tommaso; Chiaramonti, David; Boulet, Micaël; Lavoie, Jean Michel
2017-01-01
Based upon the two fluid model (TFM) theory, a CFD model was implemented to investigate a cold multiphase fluidized bubbling bed reactor. The key variable used to characterize the fluid dynamics of the experimental system, and compare it to model predictions, was the time-pressure drop induced by the bubble motion across the bed. This time signal was then processed to obtain the power spectral density (PSD) distribution of pressure fluctuations. As an important aspect of this work, the effect of the sampling time scale on the empirical power spectral density (PSD) was investigated. A time scale of 40 s was found to be a good compromise ensuring both simulation performance and numerical validation consistency. The CFD model was first numerically verified by a mesh refinement process, after which it was used to investigate the sensitivity with regard to minimum fluidization velocity (as a calibration point for the drag law), restitution coefficient, and solid pressure term while assessing its accuracy in matching the empirical PSD. The 2D model provided a fair match with the empirical time-averaged pressure drop, the related fluctuation amplitude, and the signal's energy computed as the integral of the PSD. A 3D version of the TFM was also used and it improved the match with the empirical PSD in the very first part of the frequency spectrum. PMID:28695119
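The PSD-based comparison described above can be reproduced in outline with standard signal-processing tools. The sketch below is a hedged example: the synthetic pressure-drop signal, sampling rate, and Welch settings are assumptions chosen only to show how a 40 s record might be reduced to a PSD, its integral energy, and a dominant bubbling frequency.

```python
# Minimal sketch (assumed signal and sampling rate): estimating the power
# spectral density of a bed pressure-drop record and its integral "energy",
# in the spirit of the model/experiment comparison above.
import numpy as np
from scipy.signal import welch

fs = 100.0                          # Hz, assumed pressure-transducer sampling rate
t = np.arange(0.0, 40.0, 1.0 / fs)  # 40 s sampling window, as discussed above
# Placeholder signal: mean pressure drop plus bubble-induced fluctuation and noise.
dp = (1200.0
      + 80.0 * np.sin(2 * np.pi * 2.5 * t)
      + 20.0 * np.random.default_rng(1).normal(size=t.size))

f, psd = welch(dp - dp.mean(), fs=fs, nperseg=1024)
energy = np.trapz(psd, f)           # signal energy as the integral of the PSD

print("time-averaged pressure drop [Pa]:", dp.mean())
print("fluctuation energy (PSD integral) [Pa^2]:", energy)
print("dominant frequency [Hz]:", f[np.argmax(psd)])
```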
Sburlati, Elizabeth S; Lyneham, Heidi J; Mufson, Laura H; Schniering, Carolyn A
2012-06-01
In order to treat adolescent depression, a number of empirically supported treatments (ESTs) have been developed from both the cognitive behavioral therapy (CBT) and interpersonal psychotherapy (IPT-A) frameworks. Research has shown that in order for these treatments to be implemented in routine clinical practice (RCP), effective therapist training must be generated and provided. However, before such training can be developed, a good understanding of the therapist competencies needed to implement these ESTs is required. Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011) developed a model of therapist competencies for implementing CBT using the well-established Delphi technique. Given that IPT-A differs considerably to CBT, the current study aims to develop a model of therapist competencies for the implementation of IPT-A using a similar procedure as that applied in Sburlati et al. (Clin Child Fam Psychol Rev 14:89-109, 2011). This method involved: (1) identifying and reviewing an empirically supported IPT-A approach, (2) extracting therapist competencies required for the implementation of IPT-A, (3) consulting with a panel of IPT-A experts to generate an overall model of therapist competencies, and (4) validating the overall model with the IPT-A manual author. The resultant model offers an empirically derived set of competencies necessary for effectively treating adolescent depression using IPT-A and has wide implications for the development of therapist training, competence assessment measures, and evidence-based practice guidelines. This model, therefore, provides an empirical framework for the development of dissemination and implementation programs aimed at ensuring that adolescents with depression receive effective care in RCP settings. Key similarities and differences between CBT and IPT-A, and the therapist competencies required for implementing these treatments, are also highlighted throughout this article.
Kang, Xiaofeng; Dennison Himmelfarb, Cheryl R; Li, Zheng; Zhang, Jian; Lv, Rong; Guo, Jinyu
2015-01-01
The Self-care of Heart Failure Index (SCHFI) is an empirically tested instrument for measuring the self-care of patients with heart failure. The aim of this study was to develop a simplified Chinese version of the SCHFI and provide evidence for its construct validity. A total of 182 Chinese with heart failure were surveyed. A 2-step structural equation modeling procedure was applied to test construct validity. Factor analysis showed 3 factors explaining 43% of the variance. Structural equation model confirmed that self-care maintenance, self-care management, and self-care confidence are indeed indicators of self-care, and self-care confidence was a positive and equally strong predictor of self-care maintenance and self-care management. Moreover, self-care scores were correlated with the Partners in Health Scale, indicating satisfactory concurrent validity. The Chinese version of the SCHFI is a theory-based instrument for assessing self-care of Chinese patients with heart failure.
Cooling tower plume - model and experiment
NASA Astrophysics Data System (ADS)
Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri
The paper presents a simple model of the so-called steam plume, which in many cases forms during the operation of evaporative cooling systems of power plants or large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the designed model will be validated in subsequent work.
NASA Astrophysics Data System (ADS)
Tonitto, C.; Gurwick, N. P.
2012-12-01
Policy initiatives to reduce greenhouse gas emissions (GHG) have promoted the development of agricultural management protocols to increase SOC storage and reduce GHG emissions. We review approaches for quantifying N2O flux from agricultural landscapes. We summarize the temporal and spatial extent of observations across representative soil classes, climate zones, cropping systems, and management scenarios. We review applications of simulation and empirical modeling approaches and compare validation outcomes across modeling tools. Subsequently, we review current model application in agricultural management protocols. In particular, we compare approaches adapted for compliance with the California Global Warming Solutions Act, the Alberta Climate Change and Emissions Management Act, and by the American Carbon Registry. In the absence of regional data to drive model development, policies that require GHG quantification often use simple empirical models based on highly aggregated data of N2O flux as a function of applied N - Tier 1 models according to IPCC categorization. As participants in development of protocols that could be used in carbon offset markets, we observed that stakeholders outside of the biogeochemistry community favored outcomes from simulation modeling (Tier 3) rather than empirical modeling (Tier 2). In contrast, scientific advisors were more accepting of outcomes based on statistical approaches that rely on local observations, and their views sometimes swayed policy practitioners over the course of policy development. Both Tier 2 and Tier 3 approaches have been implemented in current policy development, and it is important that the strengths and limitations of both approaches, in the face of available data, be well-understood by those drafting and adopting policies and protocols. The reliability of all models is contingent on sufficient observations for model development and validation. Simulation models applied without site-calibration generally result in poor validation results, and this point particularly needs to be emphasized during policy development. For cases where sufficient calibration data are available, simulation models have demonstrated the ability to capture seasonal patterns of N2O flux. The reliability of statistical models likewise depends on data availability. Because soil moisture is a significant driver of N2O flux, the best outcomes occur when empirical models are applied to systems with relevant soil classification and climate. The structure of current carbon offset protocols is not well-aligned with a budgetary approach to GHG accounting. Current protocols credit field-scale reduction in N2O flux as a result of reduced fertilizer use. Protocols do not award farmers credit for reductions in CO2 emissions resulting from reduced production of synthetic N fertilizer. To achieve the greatest GHG emission reductions through reduced synthetic N production and reduced landscape N saturation requires a re-envisioning of the agricultural landscape to include cropping systems with legume and manure N sources. The current focus on on-farm GHG sources focuses credits on simple reductions of N applied in conventional systems rather than on developing cropping systems which promote higher recycling and retention of N.
Fisher's geometrical model emerges as a property of complex integrated phenotypic networks.
Martin, Guillaume
2014-05-01
Models relating phenotype space to fitness (phenotype-fitness landscapes) have seen important developments recently. They can roughly be divided into mechanistic models (e.g., metabolic networks) and more heuristic models like Fisher's geometrical model. Each has its own drawbacks, but both yield testable predictions on how the context (genomic background or environment) affects the distribution of mutation effects on fitness and thus adaptation. Both have received some empirical validation. This article aims at bridging the gap between these approaches. A derivation of the Fisher model "from first principles" is proposed, where the basic assumptions emerge from a more general model, inspired by mechanistic networks. I start from a general phenotypic network relating unspecified phenotypic traits and fitness. A limited set of qualitative assumptions is then imposed, mostly corresponding to known features of phenotypic networks: a large set of traits is pleiotropically affected by mutations and determines a much smaller set of traits under optimizing selection. Otherwise, the model remains fairly general regarding the phenotypic processes involved or the distribution of mutation effects affecting the network. A statistical treatment and a local approximation close to a fitness optimum yield a landscape that is effectively the isotropic Fisher model or its extension with a single dominant phenotypic direction. The fit of the resulting alternative distributions is illustrated in an empirical data set. These results bear implications on the validity of Fisher's model's assumptions and on which features of mutation fitness effects may vary (or not) across genomic or environmental contexts.
Factorial validity of the Movement Assessment Battery for Children-2 (age band 2).
Wagner, Matthias Oliver; Kastner, Julia; Petermann, Franz; Bös, Klaus
2011-01-01
The Movement Assessment Battery for Children-2 (M-ABC-2) is one of the most commonly used tests for the diagnosis of specific developmental disorders of motor function (F82). The M-ABC-2 comprises eight subtests per age band (AB) that are assigned to three dimensions: manual dexterity, aiming and catching, and balance. However, while previous exploratory findings suggested the correctness of the assumption of factorial validity, there is no empirical evidence that the M-ABC-2 subtests allow for a valid reproduction of the postulated factorial structure. The purpose of this study was to empirically confirm the factorial validity of the M-ABC-2. The German normative sample of AB2 (7-10 years; N=323) was used as the study sample for the empirical analyses. Confirmatory factor analysis was used to verify the factorial validity of the M-ABC-2 (AB2). The incremental fit indices (χ2=28.675; df=17; Bollen-Stine p value=0.318; RMSEA=0.046 [0.011-0.075]; SRMR=0.038; CFI=0.960) provided evidence for the factorial validity of the M-ABC-2 (AB2). However, because of a lack of empirical verification for convergent and discriminant validity, there is still no evidence that F82 can be diagnosed using M-ABC-2 (AB2). Copyright © 2010 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Irvin, Larry K.; Horner, Robert H.; Ingram, Kimberly; Todd, Anne W.; Sugai, George; Sampson, Nadia Katul; Boland, Joseph B.
2006-01-01
In this evaluation we used Messick's construct validity as a conceptual framework for an empirical study assessing the validity of use, utility, and impact of office discipline referral (ODR) measures for data-based decision making about student behavior in schools. The Messick approach provided a rubric for testing the fit of our theory of use of…
Nursing intellectual capital theory: operationalization and empirical validation of concepts.
Covell, Christine L; Sidani, Souraya
2013-08-01
To present the operationalization of concepts in the nursing intellectual capital theory and the results of a methodological study aimed at empirically validating the concepts. The nursing intellectual capital theory proposes that the stocks of nursing knowledge in an organization are embedded in two concepts, nursing human capital and nursing structural capital. The theory also proposes that two concepts in the work environment, nurse staffing and employer support for nursing continuing professional development, influence nursing human capital. A cross-sectional design. A systematic three-step process was used to operationalize the concepts of the theory. In 2008, data were collected for 147 inpatient units from administrative departments and unit managers in 6 Canadian hospitals. Exploratory factor analyses were conducted to determine if the indicator variables accurately reflect their respective concepts. The proposed indicator variables collectively measured the nurse staffing concept. Three indicators were retained to construct nursing human capital: clinical expertise and experience concept. The nursing structural capital and employer support for nursing continuing professional development concepts were not validated empirically. The nurse staffing and the nursing human capital: clinical expertise and experience concepts will be brought forward for further model testing. Refinement for some of the indicator variables of the concepts is indicated. Additional research is required with different sources of data to confirm the findings. © 2012 Blackwell Publishing Ltd.
Polarimetry noise in fiber-based optical coherence tomography instrumentation
Zhang, Ellen Ziyi; Vakoc, Benjamin J.
2011-01-01
High noise levels in fiber-based polarization-sensitive optical coherence tomography (PS-OCT) have broadly limited its clinical utility. In this study we investigate the contribution of polarization mode dispersion (PMD) to the polarimetry noise. We develop numerical models of the PS-OCT system including PMD and validate these models with empirical data. Using these models, we provide a framework for predicting noise levels, for processing signals to reduce noise, and for designing an optimized system. PMID:21935044
Adolescents' View of Family Functioning: A Validation of the RES.
ERIC Educational Resources Information Center
Chambliss, Catherine; And Others
The contextual model argues that people in a relationship must experience a sense of loyalty, fairness, and reciprocity in order to build commitment and trust and provide ongoing mutual care. The Relational Ethics Scale (RES), which assesses key relational variables, was developed for use in empirical research to test the theoretical framework of…
ERIC Educational Resources Information Center
Findik Coskuncay, Duygu; Ozkan, Sevgi
2013-01-01
Through the rapid expansion of information technologies, Learning Management Systems have become one of the most important innovations for delivering education. However, successful implementation and management of these systems are primarily based on the instructors' adoption. In this context, this study aims to understand behavioral intentions…
The Role of Conceptual and Linguistic Ontologies in Interpreting Spatial Discourse
ERIC Educational Resources Information Center
Bateman, John; Tenbrink, Thora; Farrar, Scott
2007-01-01
This article argues that a clear division between two sources of information--one oriented to world knowledge, the other to linguistic semantics--offers a framework within which mechanisms for modelling the highly flexible relation between language and interpretation necessary for natural discourse can be specified and empirically validated.…
NREL: International Activities - Pakistan Resource Maps
The high-resolution (1-km) annual wind power maps were developed using a numerical modeling approach along with NREL's empirical validation methodology. High-resolution (10-km) annual and seasonal maps and 40-km resolution annual maps are also available for download in low- and high-resolution formats.
ERIC Educational Resources Information Center
Goddard, Roger D.; LoGerfo, Laura F.
2007-01-01
This article presents a theoretical rationale and empirical evidence regarding the validity of scores obtained from two competing approaches to operationalizing scale items to measure emergent organizational properties. The authors consider whether items in scales intended to measure organizational properties should prompt survey takers to provide…
ERIC Educational Resources Information Center
Fraivillig, Judith L.
2018-01-01
Understanding place value is a critical and foundational competency for elementary mathematics. Classroom teachers who endeavor to promote place-value development adopt a variety of established practices to varying degrees of effectiveness. In parallel, researchers have validated models of how young children acquire place-value understanding.…
ERIC Educational Resources Information Center
Ardoin, Scott P.; Williams, Jessica C.; Christ, Theodore J.; Klubnik, Cynthia; Wellborn, Claire
2010-01-01
Beyond reliability and validity, measures used to model student growth must consist of multiple probes that are equivalent in level of difficulty to establish consistent measurement conditions across time. Although existing evidence supports the reliability of curriculum-based measurement in reading (CBMR), few studies have empirically evaluated…
A Comprehensive Inclusion Program for Kindergarten Children with Autism Spectrum Disorder
ERIC Educational Resources Information Center
Sainato, Diane M.; Morrison, Rebecca S.; Jung, Sunhwa; Axe, Judah; Nixon, Patricia A.
2015-01-01
To date, reports of empirically validated comprehensive intervention programs for children with autism spectrum disorder (ASD) have been limited to preschool-age children. We examined the effects of a model inclusive kindergarten program for children with ASD. Forty-one children received instruction in an inclusive kindergarten program with their…
Interaction between Task Oriented and Affective Information Processing in Cognitive Robotics
NASA Astrophysics Data System (ADS)
Haazebroek, Pascal; van Dantzig, Saskia; Hommel, Bernhard
There is an increasing interest in endowing robots with emotions. Robot control however is still often very task oriented. We present a cognitive architecture that allows the combination of and interaction between task representations and affective information processing. Our model is validated by comparing simulation results with empirical data from experimental psychology.
ERIC Educational Resources Information Center
Confer, Jacob Russell
2013-01-01
The symptoms, assessment, and treatments of Post Traumatic Stress Disorder (PTSD) have been empirically investigated to the extent that there is a breadth of valid and reliable instruments investigating this psychopathological syndrome. There, too, exists a substantial evidence base for various treatment models demonstrating effectiveness in…
The Subjective Well-Being Construct: A Test of Its Convergent, Discriminant, and Factorial Validity
ERIC Educational Resources Information Center
Arthaud-day, Marne L.; Rode, Joseph C.; Mooney, Christine H.; Near, Janet P.
2005-01-01
Using structural equation modeling, we found empirical support for the prevailing theory that subjective well-being consists of three domains: (1) cognitive evaluations of one's life (i.e., life satisfaction or happiness); (2) positive affect; and (3) negative affect. Multiple indicators of satisfaction/happiness were shown to have strong…
ERIC Educational Resources Information Center
Kaspar, Roman; Döring, Ottmar; Wittmann, Eveline; Hartig, Johannes; Weyland, Ulrike; Nauerth, Annette; Möllers, Michaela; Rechenbach, Simone; Simon, Julia; Worofka, Iberé
2016-01-01
Valid and reliable standardized assessment of nursing competencies is needed to monitor the quality of vocational education and training (VET) in nursing and evaluate learning outcomes for care work trainees with increasingly heterogeneous learning backgrounds. To date, however, the modeling of professional competencies has not yet evolved into…
ERIC Educational Resources Information Center
Poekert, Philip; Alexandrou, Alex; Shannon, Darbianne
2016-01-01
Teacher leadership is increasingly being touted as a practical response to guide teacher learning in school improvement and policy reform efforts. However, the field of research on teacher leadership in relation to post-compulsory educational development has been and remains largely atheoretical to date. This empirical study proposes a grounded…
Collective Trust: A Social Indicator of Instructional Capacity
ERIC Educational Resources Information Center
Adams, Curt M.
2013-01-01
Purpose: The purpose of this study is to test the validity of using collective trust as a social indicator of instructional capacity. Design/methodology/approach: A hypothesized model was advanced for the empirical investigation. Collective trust was specified as a latent construct with observable indicators being principal trust in faculty (PTF),…
Engineering applications of strong ground motion simulation
NASA Astrophysics Data System (ADS)
Somerville, Paul
1993-02-01
The formulation, validation and application of a procedure for simulating strong ground motions for use in engineering practice are described. The procedure uses empirical source functions (derived from near-source strong motion recordings of small earthquakes) to provide a realistic representation of effects such as source radiation that are difficult to model at high frequencies due to their partly stochastic behavior. Wave propagation effects are modeled using simplified Green's functions that are designed to transfer empirical source functions from their recording sites to those required for use in simulations at a specific site. The procedure has been validated against strong motion recordings of both crustal and subduction earthquakes. For the validation process we choose earthquakes whose source models (including a spatially heterogeneous distribution of the slip of the fault) are independently known and which have abundant strong motion recordings. A quantitative measurement of the fit between the simulated and recorded motion in this validation process is used to estimate the modeling and random uncertainty associated with the simulation procedure. This modeling and random uncertainty is one part of the overall uncertainty in estimates of ground motions of future earthquakes at a specific site derived using the simulation procedure. The other contribution to uncertainty is that due to uncertainty in the source parameters of future earthquakes that affect the site, which is estimated from a suite of simulations generated by varying the source parameters over their ranges of uncertainty. In this paper, we describe the validation of the simulation procedure for crustal earthquakes against strong motion recordings of the 1989 Loma Prieta, California, earthquake, and for subduction earthquakes against the 1985 Michoacán, Mexico, and Valparaiso, Chile, earthquakes. We then show examples of the application of the simulation procedure to the estimation of the design response spectra for crustal earthquakes at a power plant site in California and for subduction earthquakes in the Seattle-Portland region. We also demonstrate the use of simulation methods for modeling the attenuation of strong ground motion, and show evidence of the effect of critical reflections from the lower crust in causing the observed flattening of the attenuation of strong ground motion from the 1988 Saguenay, Quebec, and 1989 Loma Prieta earthquakes.
Seeking Empirical Validity in an Assurance of Learning System
ERIC Educational Resources Information Center
Avery, Sherry L.; McWhorter, Rochell R.; Lirely, Roger; Doty, H. Harold
2014-01-01
Business schools have established measurement tools to support their assurance of learning (AoL) systems and to assess student achievement of learning objectives. However, business schools have not required their tools to be empirically validated, thus ensuring that they measure what they are intended to measure. The authors propose confirmatory…
Prediction of early summer rainfall over South China by a physical-empirical model
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen
2014-10-01
In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and represents superbly the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34-year observations (1979-2012) reveals three physically consequential predictors. A plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical High and Okhotsk High in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than that of the four dynamical models' ensemble prediction for the 1979-2010 period (0.15). The results here suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room to improve.
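A leave-one-out cross-validated correlation skill of the kind quoted above can be computed as follows. The sketch is illustrative only: the three predictor indices and the rainfall series are synthetic placeholders standing in for the winter precursors and the SC rainfall index.

```python
# Minimal sketch of a leave-one-out cross-validated correlation skill for a
# physical-empirical (multiple linear regression) prediction model. The
# predictors and predictand are synthetic placeholders, not the SC rainfall data.
import numpy as np

rng = np.random.default_rng(42)
n_years = 34
X = rng.normal(size=(n_years, 3))                 # three winter precursor indices (assumed)
y = X @ np.array([0.6, -0.4, 0.3]) + rng.normal(scale=0.7, size=n_years)

predictions = np.empty(n_years)
for i in range(n_years):
    train = np.arange(n_years) != i               # leave year i out
    A = np.column_stack([np.ones(train.sum()), X[train]])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    predictions[i] = np.concatenate(([1.0], X[i])) @ coef

skill = np.corrcoef(predictions, y)[0, 1]
print("cross-validated correlation skill:", round(skill, 2))
```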
An empirically based model for knowledge management in health care organizations.
Sibbald, Shannon L; Wathen, C Nadine; Kothari, Anita
2016-01-01
Knowledge management (KM) encompasses strategies, processes, and practices that allow an organization to capture, share, store, access, and use knowledge. Ideal KM combines different sources of knowledge to support innovation and improve performance. Despite the importance of KM in health care organizations (HCOs), there has been very little empirical research to describe KM in this context. This study explores KM in HCOs, focusing on the status of current intraorganizational KM. The intention is to provide insight for future studies and model development for effective KM implementation in HCOs. A qualitative methods approach was used to create an empirically based model of KM in HCOs. Methods included (a) qualitative interviews (n = 24) with senior leadership to identify types of knowledge important in these roles plus current information-seeking behaviors/needs and (b) in-depth case study with leaders in new executive positions (n = 2). The data were collected from 10 HCOs. Our empirically based model for KM was assessed for face and content validity. The findings highlight the paucity of formal KM in our sample HCOs. Organizational culture, leadership, and resources are instrumental in supporting KM processes. An executive's knowledge needs are extensive, but knowledge assets are often limited or difficult to acquire as much of the available information is not in a usable format. We propose an empirically based model for KM to highlight the importance of context (internal and external), and knowledge seeking, synthesis, sharing, and organization. Participants who reviewed the model supported its basic components and processes, and potential for incorporating KM into organizational processes. Our results articulate ways to improve KM, increase organizational learning, and support evidence-informed decision-making. This research has implications for how to better integrate evidence and knowledge into organizations while considering context and the role of organizational processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
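The balancing step described above amounts to standard linear algebra once the empirical covariances are in hand. The sketch below is a generic square-root balanced-truncation calculation on placeholder symmetric positive definite matrices; it is not the paper's power-system implementation, and the dynamics matrix used for the Galerkin projection is likewise a stand-in.

```python
# Minimal sketch (assumed inputs): given empirical controllability and
# observability covariances Wc and Wo (here random SPD placeholders, not
# covariances computed from power-system trajectories), find the balancing
# transformation, keep the states with the largest Hankel-type singular
# values, and project the dynamics (Galerkin projection).
import numpy as np

rng = np.random.default_rng(7)
n, r = 10, 4                                  # full and reduced state dimensions
A = rng.normal(size=(n, n))
Wc = A @ A.T + n * np.eye(n)                  # placeholder SPD "controllability" covariance
B = rng.normal(size=(n, n))
Wo = B @ B.T + n * np.eye(n)                  # placeholder SPD "observability" covariance

Lc = np.linalg.cholesky(Wc)                   # Wc = Lc Lc^T
Lo = np.linalg.cholesky(Wo)                   # Wo = Lo Lo^T
U, s, Vt = np.linalg.svd(Lo.T @ Lc)           # Hankel-type singular values in s

T = Lc @ Vt.T @ np.diag(s ** -0.5)            # balancing transformation
Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
# In balanced coordinates both covariances become diag(s):
print(np.allclose(Tinv @ Wc @ Tinv.T, np.diag(s)), np.allclose(T.T @ Wo @ T, np.diag(s)))

Phi, Psi = T[:, :r], Tinv[:r, :]              # bases for the r dominant balanced states
F = rng.normal(size=(n, n))                   # placeholder (linearized) dynamics matrix
F_reduced = Psi @ F @ Phi                     # Galerkin-projected reduced model
print("reduced model size:", F_reduced.shape, "leading singular values:", np.round(s[:r], 2))
```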
Gartner, Joseph E.; Cannon, Susan H.; Santi, Paul M
2014-01-01
Debris flows and sediment-laden floods in the Transverse Ranges of southern California pose severe hazards to nearby communities and infrastructure. Frequent wildfires denude hillslopes and increase the likelihood of these hazardous events. Debris-retention basins protect communities and infrastructure from the impacts of debris flows and sediment-laden floods and also provide critical data for volumes of sediment deposited at watershed outlets. In this study, we supplement existing data for the volumes of sediment deposited at watershed outlets with newly acquired data to develop new empirical models for predicting volumes of sediment produced by watersheds located in the Transverse Ranges of southern California. The sediment volume data represent a broad sample of conditions found in Ventura, Los Angeles and San Bernardino Counties, California. The measured volumes of sediment, watershed morphology, distributions of burn severity within each watershed, the time since the most recent fire, triggering storm rainfall conditions, and engineering soil properties were analyzed using multiple linear regressions to develop two models. A “long-term model” was developed for predicting volumes of sediment deposited by both debris flows and floods at various times since the most recent fire from a database of volumes of sediment deposited by a combination of debris flows and sediment-laden floods with no time limit since the most recent fire (n = 344). A subset of this database was used to develop an “emergency assessment model” for predicting volumes of sediment deposited by debris flows within two years of a fire (n = 92). Prior to developing the models, 32 volumes of sediment, and related parameters for watershed morphology, burn severity and rainfall conditions were retained to independently validate the long-term model. Ten of these volumes of sediment were deposited by debris flows within two years of a fire and were used to validate the emergency assessment model. The models were validated by comparing predicted and measured volumes of sediment. These validations were also performed for previously developed models and identify that the models developed here best predict volumes of sediment for burned watersheds in comparison to previously developed models.
Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm.
Tizzoni, Michele; Bajardi, Paolo; Poletto, Chiara; Ramasco, José J; Balcan, Duygu; Gonçalves, Bruno; Perra, Nicola; Colizza, Vittoria; Vespignani, Alessandro
2012-12-13
Mathematical and computational models for infectious diseases are increasingly used to support public-health decisions; however, their reliability is currently under debate. Real-time forecasts of epidemic spread using data-driven models have been hindered by the technical challenges posed by parameter estimation and validation. Data gathered for the 2009 H1N1 influenza crisis represent an unprecedented opportunity to validate real-time model predictions and define the main success criteria for different approaches. We used the Global Epidemic and Mobility Model to generate stochastic simulations of epidemic spread worldwide, yielding (among other measures) the incidence and seeding events at a daily resolution for 3,362 subpopulations in 220 countries. Using a Monte Carlo Maximum Likelihood analysis, the model provided an estimate of the seasonal transmission potential during the early phase of the H1N1 pandemic and generated ensemble forecasts for the activity peaks in the northern hemisphere in the fall/winter wave. These results were validated against the real-life surveillance data collected in 48 countries, and their robustness assessed by focusing on 1) the peak timing of the pandemic; 2) the level of spatial resolution allowed by the model; and 3) the clinical attack rate and the effectiveness of the vaccine. In addition, we studied the effect of data incompleteness on the prediction reliability. Real-time predictions of the peak timing are found to be in good agreement with the empirical data, showing strong robustness to data that may not be accessible in real time (such as pre-exposure immunity and adherence to vaccination campaigns), but that affect the predictions for the attack rates. The timing and spatial unfolding of the pandemic are critically sensitive to the level of mobility data integrated into the model. Our results show that large-scale models can be used to provide valuable real-time forecasts of influenza spreading, but they require high-performance computing. The quality of the forecast depends on the level of data integration, thus stressing the need for high-quality data in population-based models, and of progressive updates of validated available empirical knowledge to inform these models.
A unifying kinetic framework for modeling oxidoreductase-catalyzed reactions.
Chang, Ivan; Baldi, Pierre
2013-05-15
Oxidoreductases are a fundamental class of enzymes responsible for the catalysis of oxidation-reduction reactions, crucial in most bioenergetic metabolic pathways. From their common root in the ancient prebiotic environment, oxidoreductases have evolved into diverse and elaborate protein structures with specific kinetic properties and mechanisms adapted to their individual functional roles and environmental conditions. While accurate kinetic modeling of oxidoreductases is thus important, current models suffer from limitations to the steady-state domain, lack empirical validation or are too specialized to a single system or set of conditions. To address these limitations, we introduce a novel unifying modeling framework for kinetic descriptions of oxidoreductases. The framework is based on a set of seven elementary reactions that (i) form the basis for 69 pairs of enzyme state transitions for encoding various specific microscopic intra-enzyme reaction networks (micro-models), and (ii) lead to various specific macroscopic steady-state kinetic equations (macro-models) via thermodynamic assumptions. Thus, a synergistic bridge between the micro and macro kinetics can be achieved, enabling us to extract unitary rate constants, simulate reaction variance and validate the micro-models using steady-state empirical data. To help facilitate the application of this framework, we make available RedoxMech: a Mathematica™ software package that automates the generation and customization of micro-models. The Mathematica™ source code for RedoxMech, the documentation and the experimental datasets are all available from: http://www.igb.uci.edu/tools/sb/metabolic-modeling. Contact: pfbaldi@ics.uci.edu. Supplementary data are available at Bioinformatics online.
Piloted Evaluation of a UH-60 Mixer Equivalent Turbulence Simulation Model
NASA Technical Reports Server (NTRS)
Lusardi, Jeff A.; Blanken, Chris L.; Tischler, Mark B.
2002-01-01
A simulation study of a recently developed hover/low speed Mixer Equivalent Turbulence Simulation (METS) model for the UH-60 Black Hawk helicopter was conducted in the NASA Ames Research Center Vertical Motion Simulator (VMS). The experiment was a continuation of previous work to develop a simple, but validated, turbulence model for hovering rotorcraft. To validate the METS model, two experienced test pilots replicated precision hover tasks that had been conducted in an instrumented UH-60 helicopter in turbulence. Objective simulation data were collected for comparison with flight test data, and subjective data were collected that included handling qualities ratings and pilot comments for increasing levels of turbulence. Analyses of the simulation results show good analytic agreement between the METS model and flight test data, with favorable pilot perception of the simulated turbulence. Precision hover tasks were also repeated using the more complex rotating-frame SORBET (Simulation Of Rotor Blade Element Turbulence) model to generate turbulence. Comparisons of the empirically derived METS model with the theoretical SORBET model show good agreement providing validation of the more complex blade element method of simulating turbulence.
Cross validation issues in multiobjective clustering
Brusco, Michael J.; Steinley, Douglas
2018-01-01
The implementation of multiobjective programming methods in combinatorial data analysis is an emergent area of study with a variety of pragmatic applications in the behavioural sciences. Most notably, multiobjective programming provides a tool for analysts to model trade offs among competing criteria in clustering, seriation, and unidimensional scaling tasks. Although multiobjective programming has considerable promise, the technique can produce numerically appealing results that lack empirical validity. With this issue in mind, the purpose of this paper is to briefly review viable areas of application for multiobjective programming and, more importantly, to outline the importance of cross-validation when using this method in cluster analysis. PMID:19055857
Empirical Model for Predicting Rockfall Trajectory Direction
NASA Astrophysics Data System (ADS)
Asteriou, Pavlos; Tsiambaos, George
2016-03-01
A methodology for the experimental investigation of rockfall in three-dimensional space is presented in this paper, aiming to assist on-going research of the complexity of a block's response to impact during a rockfall. An extended laboratory investigation was conducted, consisting of 590 tests with cubical and spherical blocks made of an artificial material. The effects of shape, slope angle and the deviation of the post-impact trajectory are examined as a function of the pre-impact trajectory direction. Additionally, an empirical model is proposed that estimates the deviation of the post-impact trajectory as a function of the pre-impact trajectory with respect to the slope surface and the slope angle. This empirical model is validated by 192 small-scale field tests, which are also presented in this paper. Some important aspects of the three-dimensional nature of rockfall phenomena are highlighted that have been hitherto neglected. The 3D space data provided in this study are suitable for the calibration and verification of rockfall analysis software that has become increasingly popular in design practice.
Economic modelling with low-cognition agents
NASA Astrophysics Data System (ADS)
Ormerod, Paul
2006-10-01
The standard socio-economic model (SSSM) postulates very considerable cognitive powers on the part of its agents. They are able to gather all relevant information in any given situation, and to take the optimal decision on the basis of it, given their tastes and preferences. This behavioural rule is postulated to be universal. The concept of bounded rationality relaxes this somewhat, by permitting agents to have access to only limited amounts of information. But agents still optimise subject to their information set and tastes. Empirical work in economics over the past 20 years or so has shown that in general these behavioural postulates lack empirical validity. Instead, agents appear to have limited ability to gather information, and use simple rules of thumb to process the information which they have in order to take decisions. Building theoretical models on these realistic foundations which give better accounts of empirical phenomena than does the SSSM is an important challenge to both economists and econophysicists. Considerable progress has already been made in a short space of time, and examples are given in this paper.
Hattori, Masasi
2016-12-01
This paper presents a new theory of syllogistic reasoning. The proposed model assumes there are probabilistic representations of given signature situations. Instead of conducting an exhaustive search, the model constructs an individual-based "logical" mental representation that expresses the most probable state of affairs, and derives a necessary conclusion that is not inconsistent with the model using heuristics based on informativeness. The model is a unification of previous influential models. Its descriptive validity has been evaluated against existing empirical data and two new experiments, and by qualitative analyses based on previous empirical findings, all of which supported the theory. The model's behavior is also consistent with findings in other areas, including working memory capacity. The results indicate that people assume the probabilities of all target events mentioned in a syllogism to be almost equal, which suggests links between syllogistic reasoning and other areas of cognition. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
New robust statistical procedures for the polytomous logistic regression models.
Castilla, Elena; Ghosh, Abhik; Martin, Nirian; Pardo, Leandro
2018-05-17
This article derives a new family of estimators, namely the minimum density power divergence estimators, as a robust generalization of the maximum likelihood estimator for the polytomous logistic regression model. Based on these estimators, a family of Wald-type test statistics for linear hypotheses is introduced. Robustness properties of both the proposed estimators and the test statistics are theoretically studied through the classical influence function analysis. Appropriate real life examples are presented to justify the requirement of suitable robust statistical procedures in place of the likelihood based inference for the polytomous logistic regression model. The validity of the theoretical results established in the article are further confirmed empirically through suitable simulation studies. Finally, an approach for the data-driven selection of the robustness tuning parameter is proposed with empirical justifications. © 2018, The International Biometric Society.
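A sketch of how a minimum density power divergence fit for a polytomous (multinomial) logistic model could be set up numerically; the objective below is the generic DPD loss for discrete outcomes with tuning parameter alpha, offered as an assumption-laden illustration rather than the authors' estimator, influence-function results, or Wald-type tests.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import softmax

def dpd_loss(beta_flat, X, y, n_classes, alpha=0.5):
    """Density power divergence objective for a multinomial logit with the last
    class as reference; alpha -> 0 approaches the usual (negative) log-likelihood."""
    beta = beta_flat.reshape(n_classes - 1, X.shape[1])
    logits = np.column_stack([X @ beta.T, np.zeros(X.shape[0])])
    p = softmax(logits, axis=1)
    return np.mean(np.sum(p**(1 + alpha), axis=1)
                   - (1 + 1 / alpha) * p[np.arange(len(y)), y]**alpha)

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])   # intercept + 2 covariates
y = rng.integers(0, 3, size=200)                                 # 3 response categories
res = minimize(dpd_loss, np.zeros(2 * X.shape[1]), args=(X, y, 3))
print("robust coefficient estimates:\n", res.x.reshape(2, X.shape[1]))
```

Larger alpha down-weights observations that the fitted model finds improbable, which is the source of the robustness to outliers.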
Sheehan, D V; Sheehan, K H
1982-08-01
The history of the classification of anxiety, hysterical, and hypochondriacal disorders is reviewed. Problems in the ability of current classification schemes to predict, control, and describe the relationship between the symptoms and other phenomena are outlined. Existing classification schemes failed the first test of a good classification model--that of providing categories that are mutually exclusive. The independence of these diagnostic categories from each other does not appear to hold up on empirical testing. In the absence of inherently mutually exclusive categories, further empirical investigation of these classes is obstructed since statistically valid analysis of the nominal data and any useful multivariate analysis would be difficult if not impossible. It is concluded that the existing classifications are unsatisfactory and require some fundamental reconceptualization.
Measuring Work Environment and Performance in Nursing Homes
Temkin-Greener, Helena; Zheng, Nan (Tracy); Katz, Paul; Zhao, Hongwei; Mukamel, Dana B.
2008-01-01
Background Qualitative studies of the nursing home work environment have long suggested that such attributes as leadership and communication may be related to nursing home performance, including residents' outcomes. However, empirical studies examining these relationships have been scant. Objectives This study is designed to: develop an instrument for measuring nursing home work environment and perceived work effectiveness; test the reliability and validity of the instrument; and identify individual and facility-level factors associated with better facility performance. Research Design and Methods The analysis was based on survey responses provided by managers (N=308) and direct care workers (N=7,418) employed in 162 facilities throughout New York State. Exploratory factor analysis, Cronbach's alphas, analysis of variance, and regression models were used to assess instrument reliability and validity. Multivariate regression models, with fixed facility effects, were used to examine factors associated with work effectiveness. Results The reliability and the validity of the survey instrument for measuring work environment and perceived work effectiveness have been demonstrated. Several individual (e.g. occupation, race) and facility characteristics (e.g. management style, workplace conditions, staffing) that are significant predictors of perceived work effectiveness were identified. Conclusions The organizational performance model used in this study recognizes the multidimensionality of the work environment in nursing homes. Our findings suggest that efforts at improving work effectiveness must also be multifaceted. Empirical findings from such a line of research may provide insights for improving the quality of the work environment and ultimately the quality of residents' care. PMID:19330892
NASA Technical Reports Server (NTRS)
Taber, William; Port, Dan
2014-01-01
At the Mission Design and Navigation Software Group at the Jet Propulsion Laboratory, we make use of finite-exponential-based defect models to aid in maintenance planning and management for our widely used critical systems. However, a number of pragmatic issues arise when applying defect models to a post-release system in continuous use. These include: how to use information from problem reports rather than testing to drive defect discovery and removal effort, practical model calibration, and alignment of model assumptions with our environment.
A Conversation Analysis-Informed Test of L2 Aural Pragmatic Comprehension
ERIC Educational Resources Information Center
Walters, F. Scott
2009-01-01
Speech act theory-based, second language pragmatics testing (SLPT) raises test-validation issues owing to a lack of correspondence with empirical conversational data. On the assumption that conversation analysis (CA) provides a more accurate account of language use, it is suggested that CA serve as a more empirically valid basis for SLPT…
An integrated conceptual framework for evaluating and improving 'understanding' in informed consent.
Bossert, Sabine; Strech, Daniel
2017-10-17
The development of understandable informed consent (IC) documents has proven to be one of the most important challenges in research with humans as well as in healthcare settings. Therefore, evaluating and improving understanding has been of increasing interest for empirical research on IC. However, several conceptual and practical challenges for the development of understandable IC documents remain unresolved. In this paper, we will outline and systematize some of these challenges. On the basis of our own experiences in empirical user testing of IC documents as well as the relevant literature on understanding in IC, we propose an integrated conceptual model for the development of understandable IC documents. The proposed conceptual model integrates different methods for the participatory improvement of written information, including IC, as well as quantitative methods for measuring understanding in IC. In most IC processes, understandable written information is an important prerequisite for valid IC. To improve the quality of IC documents, a conceptual model for participatory procedures of testing, revising, and retesting can be applied. However, the model presented in this paper needs further theoretical and empirical elaboration and clarification of several conceptual and practical challenges.
Density matrix Monte Carlo modeling of quantum cascade lasers
NASA Astrophysics Data System (ADS)
Jirauschek, Christian
2017-10-01
By including elements of the density matrix formalism, the semiclassical ensemble Monte Carlo method for carrier transport is extended to incorporate incoherent tunneling, known to play an important role in quantum cascade lasers (QCLs). In particular, this effect dominates electron transport across thick injection barriers, which are frequently used in terahertz QCL designs. A self-consistent model for quantum mechanical dephasing is implemented, eliminating the need for empirical simulation parameters. Our modeling approach is validated against available experimental data for different types of terahertz QCL designs.
Leach, Colin Wayne; van Zomeren, Martijn; Zebel, Sven; Vliek, Michael L W; Pennekamp, Sjoerd F; Doosje, Bertjan; Ouwerkerk, Jaap W; Spears, Russell
2008-07-01
Recent research shows individuals' identification with in-groups to be psychologically important and socially consequential. However, there is little agreement about how identification should be conceptualized or measured. On the basis of previous work, the authors identified 5 specific components of in-group identification and offered a hierarchical 2-dimensional model within which these components are organized. Studies 1 and 2 used confirmatory factor analysis to validate the proposed model of self-definition (individual self-stereotyping, in-group homogeneity) and self-investment (solidarity, satisfaction, and centrality) dimensions, across 3 different group identities. Studies 3 and 4 demonstrated the construct validity of the 5 components by examining their (concurrent) correlations with established measures of in-group identification. Studies 5-7 demonstrated the predictive and discriminant validity of the 5 components by examining their (prospective) prediction of individuals' orientation to, and emotions about, real intergroup relations. Together, these studies illustrate the conceptual and empirical value of a hierarchical multicomponent model of in-group identification.
Synthetic Teammates as Team Players: Coordination of Human and Synthetic Teammates
2016-05-31
distribution is unlimited 13. SUPPLEMENTARY NOTES 14. ABSTRACT This project is part of a larger effort that focuses on human-automation coordination in the...context of the development, integration, and validation of a computational cognitive model that acts as a full-fledged synthetic teammate on an...integrated the synthetic teammate model into the CERTT II (Cognitive Engineering Research on Team Tasks II) testbed in order to empirically address these
Empirical Measurement and Model Validation of Infrared Spectra of Contaminated Surfaces
NASA Astrophysics Data System (ADS)
Archer, Sean
The goal of this thesis was to validate predicted infrared spectra of liquid contaminated surfaces from a micro-scale bi-directional reflectance distribution function (BRDF) model through the use of empirical measurement. Liquid contaminated surfaces generally require more sophisticated radiometric modeling to numerically describe surface properties. The Digital Image and Remote Sensing Image Generation (DIRSIG) model utilizes radiative transfer modeling to generate synthetic imagery for a variety of applications. Aside from DIRSIG, a micro-scale model known as microDIRSIG has been developed as a rigorous ray tracing physics-based model that could predict the BRDF of geometric surfaces that are defined as micron to millimeter resolution facets. The model offers an extension from the conventional BRDF models by allowing contaminants to be added as geometric objects to a micro-facet surface. This model was validated through the use of Fourier transform infrared spectrometer measurements. A total of 18 different substrate and contaminant combinations were measured and compared against modeled outputs. The substrates used in this experiment were wood and aluminum that contained three different paint finishes. The paint finishes included no paint, Krylon ultra-flat black, and Krylon glossy black. A silicon based oil (SF96) was measured out and applied to each surface to create three different contamination cases for each surface. Radiance in the longwave infrared region of the electromagnetic spectrum was measured by a Design and Prototypes (D&P) Fourier transform infrared spectrometer and a Physical Sciences Inc. Adaptive Infrared Imaging Spectroradiometer (AIRIS). The model outputs were compared against the measurements quantitatively in both the emissivity and radiance domains. A temperature emissivity separation (TES) algorithm had to be applied to the measured radiance spectra for comparison with the microDIRSIG predicted emissivity spectra. The model predicted emissivity spectra was also forward modeled through a DIRSIG simulation for comparisons to the radiance measurements. The results showed a promising agreement for homogeneous surfaces with liquid contamination that could be well characterized geometrically. Limitations arose in substrates that were modeled as homogeneous surfaces, but had spatially varying artifacts due to uncertainties with contaminant and surface interactions. There is high desire for accurate physics based modeling of liquid contaminated surfaces and this validation framework may be extended to include a wider array of samples for more realistic natural surfaces that are often found in real world scenarios.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan
2016-10-01
Hard turning is increasingly employed in machining, lately, to replace time-consuming conventional turning followed by grinding process. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear model, but most of them developed it for a particular work-tool-environment combination. No aggregate model is developed that can be used to predict the amount of principal flank wear for specific machining time. An empirical model of principal flank wear (VB) has been developed for the different hardness of workpiece (HRC40, HRC48 and HRC56) while turning by coated carbide insert with different configurations (SNMM and SNMG) under both dry and high pressure coolant conditions. Unlike other developed model, this model includes the use of dummy variables along with the base empirical equation to entail the effect of any changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated adopting the Exponential Associate Function using the experimental results. The coefficient of dummy variable reflects the shifting of the response from one set of machining condition to another set of machining condition which is determined by simple linear regression. The independent cutting parameters (speed, rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel by coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for specific machining time. Since the predicted results exhibit good resemblance with experimental data and the average percentage error is <10 %, this model can be used to predict the principal flank wear for stated conditions.
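One way the two-step scheme described above could look in code, assuming the Exponential Associate form VB(t) = y0 + A1*(1 - exp(-t/t1)) + A2*(1 - exp(-t/t2)) for the base curve and a constant dummy-variable shift for a second work-tool-environment condition; all numbers are synthetic placeholders, not the paper's coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_assoc(t, y0, A1, t1, A2, t2):
    return y0 + A1 * (1 - np.exp(-t / t1)) + A2 * (1 - np.exp(-t / t2))

rng = np.random.default_rng(2)
t = np.linspace(1, 40, 25)                                              # machining time, min
vb_base = exp_assoc(t, 20, 80, 5, 120, 30) + rng.normal(0, 3, t.size)   # base condition, micrometres
popt, _ = curve_fit(exp_assoc, t, vb_base, p0=[10, 50, 5, 100, 20], maxfev=20000)

# dummy-variable step: estimate the shift of a second condition relative to the base curve
vb_other = exp_assoc(t, *popt) + 15 + rng.normal(0, 3, t.size)          # synthetic second condition
shift = np.mean(vb_other - exp_assoc(t, *popt))                         # simple regression estimate of the dummy coefficient
print("estimated dummy coefficient (micrometres):", round(shift, 1))
```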
2012-01-01
Background Previous research has addressed the relationship between customer satisfaction, perceived quality and customer loyalty intentions in consumer markets. In this study, we test and compare three theoretical models of the quality–satisfaction–loyalty relationship in the Chinese healthcare system. Methods This research focuses on hospital patients as participants in the process of healthcare procurement. Empirical data were obtained from six Chinese public hospitals in Shanghai. A total of 630 questionnaires were collected in two studies. Study 1 tested the research instruments, and Study 2 tested the three models. Confirmatory factor analysis was used to assess the scales’ construct validity by testing convergent and discriminant validity. A structural equation model (SEM) specified the distinctions between each construct. A comparison of the three theoretical models was conducted via AMOS analysis. Results The results of the SEM demonstrate that quality and satisfaction are distinct concepts and that the first model (satisfaction mediates quality and loyalty) is the most appropriate one in the context of the Chinese healthcare environment. Conclusions In this study, we test and compare three theoretical models of the quality–satisfaction–loyalty relationship in the Chinese healthcare system. Findings show that perceived quality improvement does not lead directly to customer loyalty. The strategy of using quality improvement to maintain patient loyalty depends on the level of patient satisfaction. This implies that the measurement of patient experiences should include topics of importance for patients’ satisfaction with health care services. PMID:23198824
Lei, Ping; Jolibert, Alain
2012-11-30
Previous research has addressed the relationship between customer satisfaction, perceived quality and customer loyalty intentions in consumer markets. In this study, we test and compare three theoretical models of the quality-satisfaction-loyalty relationship in the Chinese healthcare system. This research focuses on hospital patients as participants in the process of healthcare procurement. Empirical data were obtained from six Chinese public hospitals in Shanghai. A total of 630 questionnaires were collected in two studies. Study 1 tested the research instruments, and Study 2 tested the three models. Confirmatory factor analysis was used to assess the scales' construct validity by testing convergent and discriminant validity. A structural equation model (SEM) specified the distinctions between each construct. A comparison of the three theoretical models was conducted via AMOS analysis. The results of the SEM demonstrate that quality and satisfaction are distinct concepts and that the first model (satisfaction mediates quality and loyalty) is the most appropriate one in the context of the Chinese healthcare environment. In this study, we test and compare three theoretical models of the quality-satisfaction-loyalty relationship in the Chinese healthcare system. Findings show that perceived quality improvement does not lead directly to customer loyalty. The strategy of using quality improvement to maintain patient loyalty depends on the level of patient satisfaction. This implies that the measurement of patient experiences should include topics of importance for patients' satisfaction with health care services.
A Preliminary Study for a New Model of Sense of Community
ERIC Educational Resources Information Center
Tartaglia, Stefano
2006-01-01
Although Sense of Community (SOC) is usually defined as a multidimensional construct, most SOC scales are unidimensional. To reduce the split between theory and empirical research, the present work identifies a multifactor structure for the Italian Sense of Community Scale (ISCS) that has already been validated as a unitary index of SOC. This…
ERIC Educational Resources Information Center
Kuhl, Julius
1978-01-01
A formal elaboration of the original theory of achievement motivation (Atkinson, 1957; Atkinson & Feather, 1966) is proposed that includes personal standards as determinants of motivational tendencies. The results of an experiment are reported that examines the validity of some of the implications of the elaborated model proposed here. (Author/RK)
ERIC Educational Resources Information Center
Yeo, Lay See; Pfeiffer, Steven I.
2018-01-01
Gifted education (GE) in Singapore is entering its third decade. However, local research into the gifted is typically undertaken by graduate students and left as unpublished data. Internationally, there is also very little if any research on counseling models that have been empirically validated for use with gifted children irrespective of their…
Merk, Josef; Schlotz, Wolff; Falter, Thomas
2017-01-01
This study presents a new measure of value systems, the Motivational Value Systems Questionnaire (MVSQ), which is based on a theory of value systems by psychologist Clare W. Graves. The purpose of the instrument is to help people identify their personal hierarchies of value systems and thus become more aware of what motivates and demotivates them in work-related contexts. The MVSQ is a forced-choice (FC) measure, making it quicker to complete and more difficult to intentionally distort, but also more difficult to assess its psychometric properties due to ipsativity of FC data compared to rating scales. To overcome limitations of ipsative data, a Thurstonian IRT (TIRT) model was fitted to the questionnaire data, based on a broad sample of N = 1,217 professionals and students. Comparison of normative (IRT) scale scores and ipsative scores suggested that MVSQ IRT scores are largely freed from restrictions due to ipsativity and thus allow interindividual comparison of scale scores. Empirical reliability was estimated using a sample-based simulation approach which showed acceptable and good estimates and, on average, slightly higher test-retest reliabilities. Further, validation studies provided evidence on both construct validity and criterion-related validity. Scale score correlations and associations of scores with both age and gender were largely in line with theoretically- and empirically-based expectations, and results of a multitrait-multimethod analysis supports convergent and discriminant construct validity. Criterion validity was assessed by examining the relation of value system preferences to departmental affiliation which revealed significant relations in line with prior hypothesizing. These findings demonstrate the good psychometric properties of the MVSQ and support its application in the assessment of value systems in work-related contexts. PMID:28979228
Man-machine analysis of translation and work tasks of Skylab films
NASA Technical Reports Server (NTRS)
Hosler, W. W.; Boelter, J. G.; Morrow, J. R., Jr.; Jackson, J. T.
1979-01-01
An objective approach to determine the concurrent validity of computer-graphic models is real time film analysis. This technique was illustrated through the procedures and results obtained in an evaluation of translation of Skylab mission astronauts. The quantitative analysis was facilitated by the use of an electronic film analyzer, minicomputer, and specifically supportive software. The uses of this technique for human factors research are: (1) validation of theoretical operator models; (2) biokinetic analysis; (3) objective data evaluation; (4) dynamic anthropometry; (5) empirical time-line analysis; and (6) consideration of human variability. Computer assisted techniques for interface design and evaluation have the potential for improving the capability for human factors engineering.
NASA Astrophysics Data System (ADS)
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electron properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. Then, we treated the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were done for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to design the relationships between molecular descriptor and the activity of imidazole derivatives. Results validate the derived QSAR model.
Revision of empirical electric field modeling in the inner magnetosphere using Cluster data
NASA Astrophysics Data System (ADS)
Matsui, H.; Torbert, R. B.; Spence, H. E.; Khotyaintsev, Yu. V.; Lindqvist, P.-A.
2013-07-01
Using Cluster data from the Electron Drift (EDI) and the Electric Field and Wave (EFW) instruments, we revise our empirically-based, inner-magnetospheric electric field (UNH-IMEF) model at 2
NASA Astrophysics Data System (ADS)
Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg
2010-06-01
Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
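A compressed sketch of the ESD chain described above: EOF (principal component) reduction of a reanalysis predictor field followed by multiple regression onto a station series, with a simple cross-validated skill score. The synthetic data, dimensions, and use of scikit-learn are assumptions; the study's double cross-validation and month/time-of-day stratification are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_times, n_grid = 400, 144                       # 6-hourly samples x reanalysis grid points
field = rng.normal(size=(n_times, n_grid))       # stand-in for an NCEP/NCAR predictor field
station_T = field[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=n_times)

pcs = PCA(n_components=10).fit_transform(field)  # amplitudes of the leading EOFs
skill = cross_val_score(LinearRegression(), pcs, station_T, cv=5, scoring="r2")
print("cross-validated skill:", round(skill.mean(), 2))
```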
Using face validity to recognize empirical community observations.
Gaber, John; Gaber, Sharon L
2010-05-01
There is growing interest among international planning scholars in exploring community participation in the plan making process from a qualitative research approach. In this paper, the research assessment tool "face validity" is discussed as one way to help planners decipher when the community is sharing empirically grounded observations that can advance the applicability of the plan making process. Face validity provides a common sense assessment of research conclusions. It allows the assessor to look at an entire research project and ask: "on the face of things, does this research make sense?" When planners listen to citizen comments with an ear for face validity observations, it holds open the opportunity for government to empirically learn from the community and to see if they "got it right," and if not, to chart out a course on how they can get it right. Copyright 2009 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Hill, Jill S.; Robbins, Rockey R.; Pace, Terry M.
2012-01-01
This article critically reviews empirical correlates of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989), based on several validation studies conducted with different racial, ethnic, and cultural groups. A major critique of the reviewed MMPI-2 studies was focused on the use of…
ERIC Educational Resources Information Center
Afzal, Waseem
2017-01-01
Introduction: The purpose of this paper is to propose a methodology to conceptualize, operationalize, and empirically validate the concept of information need. Method: The proposed methodology makes use of both qualitative and quantitative perspectives, and includes a broad array of approaches such as literature reviews, expert opinions, focus…
Critical evaluation of mechanistic two-phase flow pipeline and well simulation models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dhulesia, H.; Lopez, D.
1996-12-31
Mechanistic steady state simulation models, rather than empirical correlations, are used for a design of multiphase production system including well, pipeline and downstream installations. Among the available models, PEPITE, WELLSIM, OLGA, TACITE and TUFFP are widely used for this purpose and consequently, a critical evaluation of these models is needed. An extensive validation methodology is proposed which consists of two distinct steps: first to validate the hydrodynamic point model using the test loop data and, then to validate the over-all simulation model using the real pipelines and wells data. The test loop databank used in this analysis contains about 5952 data sets originated from four different test loops and a majority of these data are obtained at high pressures (up to 90 bars) with real hydrocarbon fluids. Before performing the model evaluation, physical analysis of the test loops data is required to eliminate non-coherent data. The evaluation of these point models demonstrates that the TACITE and OLGA models can be applied to any configuration of pipes. The TACITE model performs better than the OLGA model because it uses the most appropriate closure laws from the literature validated on a large number of data. The comparison of predicted and measured pressure drop for various real pipelines and wells demonstrates that the TACITE model is a reliable tool.
Gravity wave control on ESF day-to-day variability: An empirical approach
NASA Astrophysics Data System (ADS)
Aswathy, R. P.; Manju, G.
2017-06-01
The gravity wave control on the daily variation in nighttime ionization irregularity occurrence is studied using ionosonde data for the period 2002-2007 at magnetic equatorial location Trivandrum. Recent studies during low solar activity period have revealed that the seed perturbations should have the threshold amplitude required to trigger equatorial spread F (ESF), at a particular altitude and that this threshold amplitude undergoes seasonal and solar cycle changes. In the present study, the altitude variation of the threshold seed perturbations is examined for autumnal equinox of different years. Thereafter, a unique empirical model, incorporating the electrodynamical effects and the gravity wave modulation, is developed. Using the model the threshold curve for autumnal equinox season of any year may be delineated if the solar flux index (F10.7) is known. The empirical model is validated using the data for high, moderate, and low solar epochs in 2001, 2004, and 1995, respectively. This model has the potential to be developed further, to forecast ESF incidence, if the base height of ionosphere is in the altitude region where electrodynamics controls the occurrence of ESF. ESF irregularities are harmful for communication and navigation systems, and therefore, research is ongoing globally to predict them. In this context, this study is crucial for evolving a methodology to predict communication as well as navigation outages.
Gyenge, Christina C; Bowen, Bruce D; Reed, Rolf K; Bert, Joel L
2003-02-01
This study is concerned with the formulation of a 'kidney module' linked to the plasma compartment of a larger mathematical model previously developed. Combined, these models can be used to predict, amongst other things, fluid and small ion excretion rates by the kidney; information that should prove useful in evaluating values and trends related to whole-body fluid balance for different clinical conditions to establish fluid administration protocols and for educational purposes. The renal module assumes first-order, negative-feedback responses of the kidney to changes in plasma volume and/or plasma sodium content from their normal physiological set points. Direct hormonal influences are not explicitly formulated in this empiric model. The model also considers that the renal excretion rates of small ions other than sodium are proportional to the excretion rate of sodium. As part of the model development, two aspects are emphasized: (1) the estimation of parameters related to the renal elimination of fluid and small ions, and (2) model validation via comparisons between the model predictions and selected experimental data. For validation, model predictions of the renal dynamics are compared with new experimental data for two cases: plasma overload resulting from external fluid infusion (e.g. infusions of iso-osmolar solutions and/or hypertonic/hyperoncotic saline solutions), and untreated hypovolemic conditions that result from the external loss of blood. The present study demonstrates that the empiric kidney module presented above can provide good short-term predictions with respect to all renal outputs considered here. Physiological implications of the model are also presented. Copyright Acta Anaesthesiologica Scandinavica 47 (2003)
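A minimal sketch of a first-order, negative-feedback renal module of the kind described above: excretion responds linearly to deviations of plasma volume and sodium content from set points (other small-ion excretion would then be taken proportional to sodium excretion). The set points, gains, baseline rates, and infusion inputs are assumed values, not the published parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

V_SET, NA_SET = 3.2, 470.0            # plasma volume (L) and sodium content (mmol): assumed set points
K_V, K_NA = 0.05, 0.05                # first-order feedback gains (1/min), assumed

def kidney(t, y, fluid_in=0.02, na_in=3.0):       # constant infusion of fluid (L/min) and sodium (mmol/min)
    V, Na = y
    J_v = max(0.0, 0.001 + K_V * (V - V_SET))     # urine flow rises with volume excess
    J_na = max(0.0, 0.1 + K_NA * (Na - NA_SET))   # sodium excretion rises with sodium excess
    return [fluid_in - J_v, na_in - J_na]

sol = solve_ivp(kidney, (0.0, 600.0), [V_SET + 0.5, NA_SET + 50.0], max_step=1.0)
print("plasma volume and sodium after 10 h:", sol.y[0, -1].round(2), sol.y[1, -1].round(1))
```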
Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model
CULLEY, JOAN M.
2012-01-01
Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283
Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.
Culley, Joan M
2011-05-01
Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.
From primed construct to motivated behavior: validation processes in goal pursuit.
Demarree, Kenneth G; Loersch, Chris; Briñol, Pablo; Petty, Richard E; Payne, B Keith; Rucker, Derek D
2012-12-01
Past research has found that primes can automatically initiate unconscious goal striving. Recent models of priming have suggested that this effect can be moderated by validation processes. According to a goal-validation perspective, primes should cause changes in one's motivational state to the extent people have confidence in the prime-related mental content. Across three experiments, we provided the first direct empirical evidence for this goal-validation account. Using a variety of goal priming manipulations (cooperation vs. competition, achievement, and self-improvement vs. saving money) and validity inductions (power, ease, and writing about confidence), we demonstrated that the impact of goal primes on behavior occurs to a greater extent when conditions foster confidence (vs. doubt) in mental contents. Indeed, when conditions foster doubt, goal priming effects are eliminated or counter to the implications of the prime. The implications of these findings for research on goal priming and validation processes are discussed.
NASA Astrophysics Data System (ADS)
Oikawa, P. Y.; Jenerette, G. D.; Knox, S. H.; Sturtevant, C.; Verfaillie, J.; Dronova, I.; Poindexter, C. M.; Eichelmann, E.; Baldocchi, D. D.
2017-01-01
Wetlands and flooded peatlands can sequester large amounts of carbon (C) and have high greenhouse gas mitigation potential. There is growing interest in financing wetland restoration using C markets; however, this requires careful accounting of both CO2 and CH4 exchange at the ecosystem scale. Here we present a new model, the PEPRMT model (Peatland Ecosystem Photosynthesis Respiration and Methane Transport), which consists of a hierarchy of biogeochemical models designed to estimate CO2 and CH4 exchange in restored managed wetlands. Empirical models using temperature and/or photosynthesis to predict respiration and CH4 production were contrasted with a more process-based model that simulated substrate-limited respiration and CH4 production using multiple carbon pools. Models were parameterized by using a model-data fusion approach with multiple years of eddy covariance data collected in a recently restored wetland and a mature restored wetland. A third recently restored wetland site was used for model validation. During model validation, the process-based model explained 70% of the variance in net ecosystem exchange of CO2 (NEE) and 50% of the variance in CH4 exchange. Not accounting for high respiration following restoration led to empirical models overestimating annual NEE by 33-51%. By employing a model-data fusion approach we provide rigorous estimates of uncertainty in model predictions, accounting for uncertainty in data, model parameters, and model structure. The PEPRMT model is a valuable tool for understanding carbon cycling in restored wetlands and for application in carbon market-funded wetland restoration, thereby advancing opportunity to counteract the vast degradation of wetlands and flooded peatlands.
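A toy version of the substrate-limited respiration idea at the core of the process-based model above: photosynthate feeds a labile carbon pool and ecosystem respiration depends on the pool sizes, not on temperature alone. Pool structure, rate constants, and forcing are illustrative assumptions, not the published PEPRMT parameterization.

```python
import numpy as np

days = np.arange(365)
gpp = 8.0 * np.clip(np.sin(2 * np.pi * (days - 80) / 365), 0, None)   # g C m-2 d-1, idealized
temp = 15.0 + 10.0 * np.sin(2 * np.pi * (days - 100) / 365)           # deg C, idealized

labile, soil_c = 50.0, 2000.0             # labile and slow carbon pools (g C m-2), assumed sizes
reco = np.zeros(days.size)
for i in range(days.size):
    k_lab = 0.02 * 2.0 ** ((temp[i] - 10.0) / 10.0)   # temperature-modified decay of the labile pool
    k_soc = 1e-4 * 2.0 ** ((temp[i] - 10.0) / 10.0)   # slow-pool decay
    reco[i] = k_lab * labile + k_soc * soil_c         # respiration limited by available substrate
    labile += gpp[i] - k_lab * labile                 # photosynthate replenishes the labile pool
    soil_c -= k_soc * soil_c

nee = reco - gpp                          # net ecosystem exchange (positive = release to atmosphere)
print("annual NEE (g C m-2 yr-1):", round(nee.sum(), 1))
```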
Towards policy relevant environmental modeling: contextual validity and pragmatic models
Miles, Scott B.
2000-01-01
"What makes for a good model?" In various forms, this question is a question that, undoubtedly, many people, businesses, and institutions ponder with regards to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do about how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead of promoting passive or self-righteous decisions.
Kanazawa, Kiyoshi; Sueshige, Takumi; Takayasu, Hideki; Takayasu, Misako
2018-03-30
A microscopic model is established for financial Brownian motion from the direct observation of the dynamics of high-frequency traders (HFTs) in a foreign exchange market. Furthermore, a theoretical framework parallel to molecular kinetic theory is developed for the systematic description of the financial market from microscopic dynamics of HFTs. We report first on a microscopic empirical law of traders' trend-following behavior by tracking the trajectories of all individuals, which quantifies the collective motion of HFTs but has not been captured in conventional order-book models. We next introduce the corresponding microscopic model of HFTs and present its theoretical solution paralleling molecular kinetic theory: Boltzmann-like and Langevin-like equations are derived from the microscopic dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy. Our model is the first microscopic model that has been directly validated through data analysis of the microscopic dynamics, exhibiting quantitative agreements with mesoscopic and macroscopic empirical results.
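An illustrative discretized Langevin-type update in the spirit of the framework above, with a drift term fed by the traders' collective trend estimate; the coefficients and the discretization are placeholders, not the calibrated microscopic model.

```python
import numpy as np

rng = np.random.default_rng(4)
steps = 10_000
gamma, c, sigma = 0.1, 0.5, 1.0            # momentum relaxation, trend-following strength, noise scale

price = np.zeros(steps)
momentum = 0.0                             # traders' collective trend estimate
for t in range(1, steps):
    dp = c * momentum + sigma * rng.normal()          # trend-following drift plus noise
    price[t] = price[t - 1] + dp
    momentum = (1.0 - gamma) * momentum + gamma * dp  # trend estimate relaxes toward recent moves

print("sample std of one-step price moves:", round(np.diff(price).std(), 2))
```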
NASA Astrophysics Data System (ADS)
Kanazawa, Kiyoshi; Sueshige, Takumi; Takayasu, Hideki; Takayasu, Misako
2018-03-01
A microscopic model is established for financial Brownian motion from the direct observation of the dynamics of high-frequency traders (HFTs) in a foreign exchange market. Furthermore, a theoretical framework parallel to molecular kinetic theory is developed for the systematic description of the financial market from microscopic dynamics of HFTs. We report first on a microscopic empirical law of traders' trend-following behavior by tracking the trajectories of all individuals, which quantifies the collective motion of HFTs but has not been captured in conventional order-book models. We next introduce the corresponding microscopic model of HFTs and present its theoretical solution paralleling molecular kinetic theory: Boltzmann-like and Langevin-like equations are derived from the microscopic dynamics via the Bogoliubov-Born-Green-Kirkwood-Yvon hierarchy. Our model is the first microscopic model that has been directly validated through data analysis of the microscopic dynamics, exhibiting quantitative agreements with mesoscopic and macroscopic empirical results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Im, Piljae; Bhandari, Mahabir S.; New, Joshua Ryan
This document describes the Oak Ridge National Laboratory (ORNL) multiyear experimental plan for validation and uncertainty characterization of whole-building energy simulation for a multi-zone research facility using a traditional rooftop unit (RTU) as a baseline heating, ventilating, and air conditioning (HVAC) system. The project’s overarching objective is to increase the accuracy of energy simulation tools by enabling empirical validation of key inputs and algorithms. Doing so is required to inform the design of increasingly integrated building systems and to enable accountability for performance gaps between design and operation of a building. The project will produce documented data sets that can be used to validate key functionality in different energy simulation tools and to identify errors and inadequate assumptions in simulation engines so that developers can correct them. ASHRAE Standard 140, Method of Test for the Evaluation of Building Energy Analysis Computer Programs (ASHRAE 2004), currently consists primarily of tests to compare different simulation programs with one another. This project will generate sets of measured data to enable empirical validation, incorporate these test data sets in an extended version of Standard 140, and apply these tests to the Department of Energy’s (DOE) EnergyPlus software (EnergyPlus 2016) to initiate the correction of any significant deficiencies. The fitness-for-purpose of the key algorithms in EnergyPlus will be established and demonstrated, and vendors of other simulation programs will be able to demonstrate the validity of their products. The data set will be equally applicable to validation of other simulation engines as well.
Styron, J D; Cooper, G W; Ruiz, C L; Hahn, K D; Chandler, G A; Nelson, A J; Torres, J A; McWatters, B R; Carpenter, Ken; Bonura, M A
2014-11-01
A methodology for obtaining empirical curves relating absolute measured scintillation light output to beta energy deposited is presented. Output signals were measured from thin plastic scintillator using NIST traceable beta and gamma sources and MCNP5 was used to model the energy deposition from each source. Combining the experimental and calculated results gives the desired empirical relationships. To validate, the sensitivity of a beryllium/scintillator-layer neutron activation detector was predicted and then exposed to a known neutron fluence from a Deuterium-Deuterium fusion plasma (DD). The predicted and the measured sensitivity were in statistical agreement.
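A toy version of the calibration pairing described above: measured light output from calibration sources is paired with modeled energy deposition (standing in here for MCNP5 tallies), and an empirical light-versus-energy curve is fit for later sensitivity predictions. All numbers are invented placeholders.

```python
import numpy as np

energy_dep_MeV = np.array([0.05, 0.15, 0.30, 0.55, 0.90, 1.40])   # modeled per-source energy deposition
light_out_pC = np.array([0.9, 2.6, 5.3, 9.8, 16.1, 25.4])         # measured scintillation signal charge

coeff = np.polyfit(energy_dep_MeV, light_out_pC, 1)               # near-linear empirical calibration curve
predict_light = np.poly1d(coeff)
print("predicted output for 0.7 MeV deposited:", round(predict_light(0.7), 2), "pC")
```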
How to reach linguistic consensus: a proof of convergence for the naming game.
De Vylder, Bart; Tuyls, Karl
2006-10-21
In this paper we introduce a mathematical model of naming games. Naming games have been widely used within research on the origins and evolution of language. Despite the many interesting empirical results these studies have produced, most of this research lacks a formal elucidating theory. In this paper we show how a population of agents can reach linguistic consensus, i.e. learn to use one common language to communicate with one another. Our approach differs from existing formal work in two important ways. First, we relax the overly strong assumption that an agent samples infinitely often during each time interval, an assumption usually made to guarantee convergence of an empirical learning process to a deterministic dynamical system. Second, we provide a proof that under these more realistic conditions our model converges to a common language for the entire population of agents. Finally, the model is validated experimentally.
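A minimal naming-game simulation illustrates the kind of consensus dynamics studied here. The sketch below is not the authors' formal model; it is the basic minimal naming game with random pairwise interactions, and the population size and interaction count are arbitrary choices.

```python
# Minimal naming-game sketch (illustrative, not the paper's formal model):
# agents keep word inventories; a speaker names an object, and on success both
# agents collapse to the shared word, otherwise the hearer adds it. The
# population typically converges to a single common name.
import random

random.seed(0)
n_agents = 100
inventories = [set() for _ in range(n_agents)]
word_counter = 0

for step in range(200_000):
    speaker, hearer = random.sample(range(n_agents), 2)
    if not inventories[speaker]:
        word_counter += 1
        inventories[speaker].add(word_counter)       # invent a new word
    word = random.choice(tuple(inventories[speaker]))
    if word in inventories[hearer]:                  # success: both align
        inventories[speaker] = {word}
        inventories[hearer] = {word}
    else:                                            # failure: hearer learns it
        inventories[hearer].add(word)

distinct = set().union(*inventories)
print("distinct words in population:", len(distinct))
```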
A model of rotationally-sampled wind turbulence for predicting fatigue loads in wind turbines
NASA Technical Reports Server (NTRS)
Spera, David A.
1995-01-01
Empirical equations are presented with which to model rotationally-sampled (R-S) turbulence for input to structural-dynamic computer codes and the calculation of wind turbine fatigue loads. These equations are derived from R-S turbulence data which were measured at the vertical-plane array in Clayton, New Mexico. For validation, the equations are applied to the calculation of cyclic flapwise blade loads for the NASA/DOE Mod-2 2.5-MW experimental HAWTs (horizontal-axis wind turbines), and the results are compared to measured cyclic loads. Good correlation is achieved, indicating that the R-S turbulence model developed in this study contains the characteristics of the wind which produce many of the fatigue loads sustained by wind turbines. Empirical factors are included which permit the prediction of load levels at specified percentiles of occurrence, which is required for the generation of fatigue load spectra and the prediction of the fatigue lifetime of structures.
Empirical flow parameters : a tool for hydraulic model validity
Asquith, William H.; Burley, Thomas E.; Cleveland, Theodore G.
2013-01-01
The objectives of this project were (1) To determine and present from existing data in Texas, relations between observed stream flow, topographic slope, mean section velocity, and other hydraulic factors, to produce charts such as Figure 1 and to produce empirical distributions of the various flow parameters to provide a methodology to "check if model results are way off!"; (2) To produce a statistical regional tool to estimate mean velocity or other selected parameters for storm flows or other conditional discharges at ungauged locations (most bridge crossings) in Texas to provide a secondary way to compare such values to a conventional hydraulic modeling approach. (3) To present ancillary values such as Froude number, stream power, Rosgen channel classification, sinuosity, and other selected characteristics (readily determinable from existing data) to provide additional information to engineers concerned with the hydraulic-soil-foundation component of transportation infrastructure.
Overview of physical models of liquid entrainment in annular gas-liquid flow
NASA Astrophysics Data System (ADS)
Cherdantsev, Andrey V.
2018-03-01
A number of recent papers devoted to the development of physically-based models for prediction of liquid entrainment in the annular regime of two-phase flow are analyzed. In these models, shearing-off of the crests of disturbance waves by the gas drag force is assumed to be the physical mechanism of entrainment. The models are based on a number of assumptions about the wavy structure, including inception of disturbance waves due to Kelvin-Helmholtz instability, a linear velocity profile inside the liquid film, and a high degree of three-dimensionality of disturbance waves. The validity of these assumptions is analyzed by comparison to modern experimental observations. It is shown that nearly every assumption is in strong qualitative and quantitative disagreement with experiments, which leads to massive discrepancies between the modeled and real properties of the disturbance waves. As a result, such models over-predict the entrained fraction by several orders of magnitude. The discrepancy is usually reduced using various kinds of empirical corrections. This, combined with the empiricism already included in the models, turns them into another kind of empirical correlation rather than physically-based models.
[Construction of competency model of 'excellent doctor' in Chinese medicine].
Jin, Aning; Tian, Yongquan; Zhao, Taiyang
2014-05-01
Evaluating outstanding and ordinary practitioners on the basis of personal characteristics, with competency as the key criterion, is a future direction of medical education reform. We carried out behavioral event interviews with famous veteran traditional Chinese medicine doctors, compiled a competency dictionary, and conducted a controlled prediction test. SPSS and AMOS were used as the statistical analysis tools. We adopted a peer-assessment and contrast design to carry out the empirical research. The project carried out exploratory factor analysis and confirmatory factor analysis and established a "5A" competency model comprising moral ability, thinking ability, communication ability, learning ability and practical ability. The competency model of the "excellent doctor" in Chinese medicine has been validated, with good reliability and validity; it embodies the characteristics of traditional Chinese medicine personnel training and has theoretical and practical significance for training excellent physicians.
2012-01-01
The optimization of photodegradation processes is complicated and expensive when performed with traditional methods such as one-variable-at-a-time. In this research, the conditions for ortho-cresol (o-cresol) photodegradation were optimized using a semi-empirical method. First, the experiments were designed by response surface methodology (RSM) with four factors (irradiation time, pH, photocatalyst amount, and o-cresol concentration) and photodegradation % as the response. The RSM used a central composite design (CCD) consisting of 30 runs to obtain the actual responses. The actual responses were fitted with a second-order polynomial equation to select a model (the suggested model). The suggested model was validated by several statistical measures in the analysis of variance (ANOVA), including a high F-value (143.12), a very low P-value (<0.0001), a non-significant lack of fit, the determination coefficient (R2 = 0.99) and adequate precision (47.067). To visualize the optimum, the validated model was used to simulate the conditions of the variables and the response (photodegradation %) in a series of three-dimensional (3D) plots. To confirm the model, experiments at the predicted optimum were performed in the laboratory, and the results were quite close to the predicted values. In conclusion, the study indicates that the model successfully simulates the optimum conditions of o-cresol photodegradation under visible-light irradiation by manganese-doped ZnO nanoparticles. PMID:22909072
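The model-fitting step described (a second-order polynomial fitted to central-composite-design responses) can be sketched as follows. This is an illustrative Python fragment with synthetic data; the factor names follow the abstract, but the coefficients and responses are not the study's values.

```python
# Illustrative sketch: fitting a second-order (quadratic) response-surface model
# to central-composite-design data, as RSM tools do. Data here are synthetic;
# factor names follow the abstract (irradiation time, pH, catalyst amount,
# o-cresol concentration), and coefficients are NOT the study's values.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_runs = 30                                   # CCD with 30 runs, as in the study
X = rng.uniform(-1, 1, size=(n_runs, 4))      # coded levels of the 4 factors
# Synthetic "photodegradation %" response with curvature and noise
y = 70 + 8*X[:, 0] - 5*X[:, 1]**2 + 4*X[:, 2] - 3*X[:, 3] + rng.normal(0, 2, n_runs)

# Full quadratic model: linear, interaction and squared terms (plus intercept)
quad = PolynomialFeatures(degree=2, include_bias=False)
Xq = quad.fit_transform(X)
model = LinearRegression().fit(Xq, y)

y_hat = model.predict(Xq)
print("R^2 =", round(r2_score(y, y_hat), 3))
# The fitted surface can then be maximized (e.g. on a grid) to locate the
# predicted optimum, which the study confirmed with laboratory runs.
```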
Reexamining competitive priorities: Empirical study in service sector
NASA Astrophysics Data System (ADS)
Idris, Fazli; Mohammad, Jihad
2015-02-01
The general objective of this study is to validate the multi-level concept of competitive priorities using a reflective-formative higher-order model for service industries. An empirical study of 228 firms from 9 different service industries was conducted to meet this objective. Partial least squares analysis with SmartPLS 2.0 was used to perform the analysis. Findings revealed six priorities: cost, flexibility, delivery, quality talent management, quality tangibility, and innovativeness. It emerges that quality splits into two types: one related to managing talent for process improvement, and the other to the physical appearance and tangibility of the service quality. This study has confirmed competitive priorities as a formative second-order hierarchical latent construct using rigorous empirical evidence. Implications, limitations and suggestions for future research are discussed accordingly in this paper.
Testability of evolutionary game dynamics based on experimental economics data
NASA Astrophysics Data System (ADS)
Wang, Yijia; Chen, Xiaojie; Wang, Zhijian
In order to better understand the dynamic processes of a real game system, we need an appropriate dynamics model, and evaluating the validity of such a model is not a trivial task. Here, we demonstrate an approach, using the macroscopic dynamical patterns of angular momentum and speed as the measurement variables, to evaluate the validity of various dynamics models. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the experimental dynamic patterns, and then derive the related theoretical dynamic patterns from a series of typical dynamics models. By testing the goodness-of-fit between the experimental and theoretical patterns, the validity of the models can be evaluated. One result of our case study is that, among all the nonparametric models tested, the best-known Replicator dynamics model performs almost worst, while the Projection dynamics model performs best. Besides providing new empirical macroscopic patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool for testing game dynamics models. Supported by the Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grant No. 61503062).
Modelling seagrass growth and development to evaluate transplanting strategies for restoration.
Renton, Michael; Airey, Michael; Cambridge, Marion L; Kendrick, Gary A
2011-10-01
Seagrasses are important marine plants that are under threat globally. Restoration by transplanting vegetative fragments or seedlings into areas where seagrasses have been lost is possible, but long-term trial data are limited. The goal of this study is to use available short-term data to predict long-term outcomes of transplanting seagrass. A functional-structural plant model of seagrass growth that integrates data collected from short-term trials and experiments is presented. The model was parameterized for the species Posidonia australis; a limited validation of the model against independent data and a sensitivity analysis were conducted; and the model was then used for a preliminary evaluation of different transplanting strategies. The limited validation was successful, and reasonable long-term outcomes could be predicted based only on short-term data. This approach to modelling seagrass growth and development enables long-term predictions of the outcomes of different strategies for transplanting seagrass, even when empirical long-term data are difficult or impossible to collect. More validation is required to improve confidence in the model's predictions, and inclusion of more mechanistic detail will extend the model's usefulness. Marine restoration represents a novel application of functional-structural plant modelling.
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several approaches have been developed to estimate the size distribution of the sediment leaving the field, including physically-based models and empirical equations. Usually, physically-based models require a large amount of data, sometimes exceeding the amount of available data in the modeled area. Conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils were used. These data were compiled from previous studies. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD. These variables were the particle size distribution in the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r2 = 0.98, 0.97, and 0.86 for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r2 = 0.74, 0.85, and 0.75 for sand, silt, and clay in the sediment, respectively). The artificial neural network was compared with three empirical models. The network presented better performance in predicting sediment PSD and differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, this model requires a small number of easily obtained variables, providing a convenient routine for predicting PSD in eroded sediment in other pollutant transport models. Copyright © 2017 Elsevier B.V. All rights reserved.
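A hedged sketch of the modelling idea follows: a small feed-forward network mapping soil and hillslope descriptors to sand, silt, and clay fractions. The data and predictor set are synthetic stand-ins, not the 41 compiled erosion events, and scikit-learn's MLPRegressor stands in for whatever network architecture the authors used.

```python
# Minimal sketch of the modelling idea: a small feed-forward neural network
# predicting sediment sand/silt/clay fractions from soil and hillslope
# descriptors. The data are synthetic stand-ins; the real study used 41
# measured erosion events (29 for calibration, 12 for validation/testing).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 41
# Predictors: soil sand/silt/clay, antecedent moisture, erodibility K, slope
X = rng.uniform(0, 1, size=(n, 6))
# Synthetic targets loosely tied to the soil-matrix composition
Y = np.clip(X[:, :3] + rng.normal(0, 0.05, size=(n, 3)), 0, 1)
Y = Y / Y.sum(axis=1, keepdims=True)          # fractions sum to one

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=12, random_state=1)
net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=1)
net.fit(X_tr, Y_tr)
print("validation R^2:", round(net.score(X_te, Y_te), 2))
```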
1982-04-01
Institute for Social Research, Leadership and Management Technical Area, U.S. Army Research Institute for the Behavioral and...about or allowing changes within the organization itself (Georgopoulos, 1972; Georgopoulos & Tannenbaum, 1957). As open systems, organizations and their...in school organizations. Unpublished manuscript, The University of Michigan, 1980. Selznick, P. Leadership in administration. Evanston, Ill.: Row
Development and construct validity of the Classroom Strategies Scale-Observer Form.
Reddy, Linda A; Fabiano, Gregory; Dudek, Christopher M; Hsu, Louis
2013-12-01
Research on progress monitoring has almost exclusively focused on student behavior and not on teacher practices. This article presents the development and validation of a new teacher observational assessment (Classroom Strategies Scale) of classroom instructional and behavioral management practices. The theoretical underpinnings and empirical basis for the instructional and behavioral management scales are presented. The Classroom Strategies Scale (CSS) evidenced overall good reliability estimates, including internal consistency, interrater reliability, test-retest reliability, and freedom from item bias on important teacher demographics (age, educational degree, years of teaching experience). Confirmatory factor analyses (CFAs) of CSS data from 317 classrooms were carried out to assess the level of empirical support for (a) a theory positing 4 first-order factors for teachers' instructional practices, and (b) a theory positing 4 first-order factors for teachers' behavior management practices. Several fit indices indicated acceptable fit of the (a) and (b) CFA models to the data, as well as acceptable fit of less parsimonious alternative CFA models that included 1 or 2 second-order factors. Information-theory-based indices generally suggested that the (a) and (b) CFA models fit better than some more parsimonious alternative CFA models that included constraints on relations of first-order factors. Overall, CFA first-order and higher-order factor results support the CSS-Observer Total, Composite, and subscales. Suggestions for future measurement development efforts are outlined. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Validity test and its consistency in the construction of patient loyalty model
NASA Astrophysics Data System (ADS)
Yanuar, Ferra
2016-04-01
The main objective of this study is to demonstrate the estimation of validity values and their consistency based on a structural equation model. The method of estimation was then implemented on empirical data for the construction of a patient loyalty model. In the hypothesized model, service quality, patient satisfaction and patient loyalty were determined simultaneously, with each factor measured by several indicator variables. The respondents involved in this study were patients who had received healthcare at Puskesmas (community health centres) in Padang, West Sumatera. All 394 respondents who had complete information were included in the analysis. This study found that each construct (service quality, patient satisfaction and patient loyalty) was valid, meaning that all hypothesized indicator variables were significant in measuring their corresponding latent variable. Service quality is measured most strongly by tangibles, patient satisfaction by satisfaction with the service, and patient loyalty by good service quality. Meanwhile, in the structural equations, this study found that patient loyalty was affected by patient satisfaction positively and directly, while service quality affected patient loyalty indirectly, with patient satisfaction as a mediator variable between the two latent variables. Both structural equations were also valid. This study also showed that the validity values obtained here were consistent, based on a simulation study using the bootstrap approach.
Binquet, C; Abrahamowicz, M; Mahboubi, A; Jooste, V; Faivre, J; Bonithon-Kopp, C; Quantin, C
2008-12-30
Flexible survival models, which avoid assumptions about hazards proportionality (PH) or linearity of continuous covariates' effects, bring the issues of model selection to a new level of complexity. Each 'candidate covariate' requires inter-dependent decisions regarding (i) its inclusion in the model, and representation of its effects on the log hazard as (ii) either constant over time or time-dependent (TD) and, for continuous covariates, (iii) either loglinear or non-loglinear (NL). Moreover, 'optimal' decisions for one covariate depend on the decisions regarding others. Thus, an efficient model-building strategy is necessary. We carried out an empirical study of the impact of the model selection strategy on the estimates obtained in flexible multivariable survival analyses of prognostic factors for mortality in 273 gastric cancer patients. We used 10 different strategies to select alternative multivariable parametric as well as spline-based models, allowing flexible modeling of non-parametric (TD and/or NL) effects. We employed 5-fold cross-validation to compare the predictive ability of alternative models. All flexible models indicated significant non-linearity and changes over time in the effect of age at diagnosis. Conventional 'parametric' models suggested a lack of period effect, whereas more flexible strategies indicated a significant NL effect. Cross-validation confirmed that flexible models predicted mortality better. The resulting differences in the 'final model' selected by the various strategies also had an impact on the risk prediction for individual subjects. Overall, our analyses underline (a) the importance of accounting for significant non-parametric effects of covariates and (b) the need for developing accurate model selection strategies for flexible survival analyses. Copyright 2008 John Wiley & Sons, Ltd.
Estimation of value at risk and conditional value at risk using normal mixture distributions model
NASA Astrophysics Data System (ADS)
Kamaruzzaman, Zetty Ain; Isa, Zaidi
2013-04-01
The normal mixture distributions model has been successfully applied in financial time series analysis. In this paper, we estimate the return distribution, value at risk (VaR) and conditional value at risk (CVaR) for monthly and weekly rates of return of the FTSE Bursa Malaysia Kuala Lumpur Composite Index (FBMKLCI) from July 1990 until July 2010 using a two-component univariate normal mixture distributions model. First, we present the application of the normal mixture distributions model in empirical finance, fitting it to the real data. Second, we apply the model in risk analysis to evaluate value at risk (VaR) and conditional value at risk (CVaR), with model validation for both risk measures. The empirical results provide evidence that the two-component normal mixture distributions model fits the data well and performs better in estimating VaR and CVaR, capturing the stylized facts of non-normality and leptokurtosis in the returns distribution.
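The estimation procedure described can be illustrated with a short sketch: fit a two-component normal mixture to a return series, then read VaR and CVaR off the fitted mixture. The returns below are simulated rather than FBMKLCI data, and Monte Carlo sampling from the fitted mixture is one convenient (assumed) way to obtain the risk measures.

```python
# Hedged sketch: fit a two-component normal mixture to a return series and
# read off VaR and CVaR (expected shortfall) from the fitted mixture by
# Monte Carlo. Returns here are simulated, not FBMKLCI data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
returns = np.concatenate([rng.normal(0.01, 0.03, 900),    # calm regime
                          rng.normal(-0.02, 0.08, 100)])  # turbulent regime

gm = GaussianMixture(n_components=2, random_state=0).fit(returns.reshape(-1, 1))

# Sample from the fitted mixture and compute 95% VaR / CVaR of losses
sims = gm.sample(200_000)[0].ravel()
alpha = 0.05
var_95 = -np.quantile(sims, alpha)            # loss not exceeded with 95% prob.
cvar_95 = -sims[sims <= np.quantile(sims, alpha)].mean()
print(f"VaR(95%) = {var_95:.3f}, CVaR(95%) = {cvar_95:.3f}")
```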
ERIC Educational Resources Information Center
Kamen, David G.
2009-01-01
Non-suicidal self-injury (NSSI) in children and adolescents is a major public health problem. Fortunately, we can apply functional analysis, in conjunction with empirically validated NSSI assessment measurements, to precisely evaluate the biopsychosocial risk factors and reinforcements that contextualize NSSI. Empirically validated behavioral…
ERIC Educational Resources Information Center
Tarhini, Ali; Hassouna, Mohammad; Abbasi, Muhammad Sharif; Orozco, Jorge
2015-01-01
Simpler is better. There are a lot of "needs" in e-Learning, and there's often a limit to the time, talent, and money that can be thrown at them individually. Contemporary pedagogy in technology and engineering disciplines, within the higher education context, champion instructional designs that emphasize peer instruction and rich…
Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket
Thomas A. Black; Charles H. Luce
2013-01-01
A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...
ERIC Educational Resources Information Center
Gorman, Dennis M.; Conde, Eugenia
2007-01-01
Conflict of interest refers to a set of conditions in which professional judgment concerning the validity of research might be influenced by a secondary competing interest. The competing interest that has received most attention in the literature addressing the prevalence and effects of such conflicts on the practice of empirical research has been…
Lichtenberg, Peter A; Ocepek-Welikson, Katja; Ficker, Lisa J; Gross, Evan; Rahman-Filipiak, Analise; Teresi, Jeanne A
2018-01-01
The objectives of this study were threefold: (1) to empirically test the conceptual model proposed by the Lichtenberg Financial Decision-making Rating Scale (LFDRS); (2) to examine the psychometric properties of the LFDRS contextual factors in financial decision-making by investigating both the reliability and convergent validity of the subscales and total scale; and (3) to extend previous work on the scale through the collection of normative data on financial decision-making. A convenience sample of 200 independently functioning, community-dwelling older adults underwent cognitive and financial management testing and were interviewed using the LFDRS. Confirmatory factor analysis, internal consistency measures, and hierarchical regression were used in this sample, all of whom were making or had recently made a significant financial decision. Results confirmed the scale's reliability and supported the conceptual model. Convergent validity analyses indicate that, as hypothesized, cognition is a significant predictor of risk scores. Financial management scores, however, were not predictive of decision-making risk scores. The psychometric properties of the LFDRS support the scale's use as proposed. The LFDRS instructions and scale are provided for clinicians to use in financial capacity assessments.
Predicting speech intelligibility in noise for hearing-critical jobs
NASA Astrophysics Data System (ADS)
Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian
2003-10-01
Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]
Refining and validating a conceptual model of Clinical Nurse Leader integrated care delivery.
Bender, Miriam; Williams, Marjory; Su, Wei; Hites, Lisle
2017-02-01
To empirically validate a conceptual model of Clinical Nurse Leader integrated care delivery. There is limited evidence of frontline care delivery models that consistently achieve quality patient outcomes. Clinical Nurse Leader integrated care delivery is a promising nursing model with a growing record of success. However, theoretical clarity is necessary to generate causal evidence of effectiveness. Sequential mixed methods. A preliminary Clinical Nurse Leader practice model was refined and survey items developed to correspond with model domains, using focus groups and a Delphi process with a multi-professional expert panel. The survey was administered in 2015 to clinicians and administrators involved in Clinical Nurse Leader initiatives. Confirmatory factor analysis and structural equation modelling were used to validate the measurement and model structure. Final sample n = 518. The model incorporates 13 components organized into five conceptual domains: 'Readiness for Clinical Nurse Leader integrated care delivery'; 'Structuring Clinical Nurse Leader integrated care delivery'; 'Clinical Nurse Leader Practice: Continuous Clinical Leadership'; 'Outcomes of Clinical Nurse Leader integrated care delivery'; and 'Value'. Sample data had good fit with specified model and two-level measurement structure. All hypothesized pathways were significant, with strong coefficients suggesting good fit between theorized and observed path relationships. The validated model articulates an explanatory pathway of Clinical Nurse Leader integrated care delivery, including Clinical Nurse Leader practices that result in improved care dynamics and patient outcomes. The validated model provides a basis for testing in practice to generate evidence that can be deployed across the healthcare spectrum. © 2016 John Wiley & Sons Ltd.
Empirical Model of Precipitating Ion Oval
NASA Astrophysics Data System (ADS)
Goldstein, Jerry
2017-10-01
In this brief technical report, published maps of ion integral flux are used to constrain an empirical model of the precipitating ion oval. The ion oval is modeled as a Gaussian function of ionospheric latitude that depends on local time and the Kp geomagnetic index. The three parameters defining this function are the centroid latitude, width, and amplitude. The local time dependences of these three parameters are approximated by Fourier series expansions whose coefficients are constrained by the published ion maps. The Kp dependence of each coefficient is modeled by a linear fit. Optimization of the number of terms in the expansion is achieved via minimization of the global standard deviation between the model and the published ion map at each Kp. The empirical model is valid near the peak flux of the auroral oval; inside its centroid region the model reproduces the published ion maps with standard deviations of less than 5% of the peak integral flux. On the subglobal scale, average local errors (measured as a fraction of the point-to-point integral flux) are below 30% in the centroid region. Outside its centroid region the model deviates significantly from the H89 integral flux maps. The model's performance is assessed by comparing it with both local and global data from a 17 April 2002 substorm event. The model can reproduce important features of the macroscale auroral region, but none of its subglobal structure, and it does not capture conditions immediately following a substorm.
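The functional form described (a Gaussian in latitude whose centroid, width, and amplitude carry Fourier dependence on local time and linear dependence on Kp) can be sketched as follows. All numerical coefficients below are placeholders, not the coefficients fitted in the report.

```python
# Illustrative sketch of the model's functional form: ion precipitation flux as
# a Gaussian in magnetic latitude whose centroid, width and amplitude vary with
# magnetic local time (low-order Fourier series) and linearly with Kp. All
# numerical coefficients below are placeholders, not the published fit.
import numpy as np

def fourier(mlt, coeffs):
    """Truncated Fourier series in MLT (hours); coeffs = [a0, a1, b1, ...]."""
    phi = 2 * np.pi * mlt / 24.0
    out = coeffs[0] * np.ones_like(phi)
    for k in range(1, (len(coeffs) - 1) // 2 + 1):
        out += coeffs[2*k - 1] * np.cos(k * phi) + coeffs[2*k] * np.sin(k * phi)
    return out

def ion_flux(lat, mlt, kp):
    # Each parameter depends on MLT via a Fourier series and on Kp linearly
    # (placeholder slopes and coefficients).
    centroid = fourier(mlt, [66.0, 1.5, -0.5]) - 1.0 * kp      # deg latitude
    width    = fourier(mlt, [3.0, 0.5, 0.2]) + 0.3 * kp        # deg
    amp      = fourier(mlt, [1.0, 0.3, -0.1]) * (1 + 0.4 * kp) # flux units
    return amp * np.exp(-0.5 * ((lat - centroid) / width) ** 2)

print(ion_flux(lat=65.0, mlt=22.0, kp=3))
```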
The Elegance of Disordered Granular Packings: A Validation of Edwards' Hypothesis
NASA Technical Reports Server (NTRS)
Metzger, Philip T.; Donahue, Carly M.
2004-01-01
We have found a way to analyze Edwards' density of states for static granular packings in the special case of round, rigid, frictionless grains, assuming constant coordination number. It obtains the most entropic density of single-grain states, which predicts several observables including the distribution of contact forces. We compare these results against empirical data obtained in dynamic simulations of granular packings. The agreement between theory and the empirical data is quite good, helping validate the use of statistical mechanics methods in granular physics. The differences between theory and the empirical data are mainly due to the variable coordination number, and when the empirical data are sorted by that number we obtain several insights that suggest an underlying elegance in the density of states.
Hsu, Kean; Iwamoto, Derek Kenji
2014-01-01
The Conformity to Masculine Norms Inventory (CMNI; Mahalik et al., 2003) and revised CMNI-46 (Parent & Moradi, 2009) have received a great deal of empirical attention and support for their strong psychometric properties and evidence of construct validity. However, one important area that remains unexplored is how adherence to these masculine norms may vary across race and ethnicity. The current investigation examines the possible racial measurement noninvariance in the CMNI-46 among Asian American and White American college students (N = 893). The results revealed significant measurement differences across groups; specifically, the CMNI-46 was more theoretically consistent for the White American men than the Asian American men. Through exploratory and multigroup confirmatory factor analysis, an 8-factor, 29-item version of the CMNI emerged, displaying an excellent overall model fit for both racial groups. This study provides strong evidence for the use of a streamlined 29-item version of the CMNI, validated with Asian American and White American men. The findings also lend further empirical and psychometric evidence regarding the variance of masculine norms among ethnic groups as well as the variance of the multidimensional construct of masculinity. PMID:25530724
Dogan, Eyup; Seker, Fahri
2016-07-01
This empirical study analyzes the impacts of real income, energy consumption, financial development and trade openness on CO2 emissions for the OECD countries in the Environmental Kuznets Curve (EKC) model by using panel econometric approaches that consider issues of heterogeneity and cross-sectional dependence. Results from the Pesaran CD test, the Pesaran-Yamagata homogeneity test, the CADF and CIPS unit root tests, the LM bootstrap cointegration test, the DSUR estimator, and the Emirmahmutoglu-Kose Granger causality test indicate that (i) the panel time-series data are heterogeneous and cross-sectionally dependent; (ii) CO2 emissions, real income, the quadratic income term, energy consumption, financial development and openness are integrated of order one; (iii) the analyzed data are cointegrated; (iv) the EKC hypothesis is validated for the OECD countries; (v) increases in openness and financial development mitigate the level of emissions whereas energy consumption contributes to carbon emissions; (vi) a variety of Granger causal relationships are detected among the analyzed variables; and (vii) empirical results and policy recommendations are accurate and efficient since the panel econometric models used in this study account for heterogeneity and cross-sectional dependence in their estimation procedures.
Factor complexity of crash occurrence: An empirical demonstration using boosted regression trees.
Chung, Yi-Shih
2013-12-01
Factor complexity is a characteristic of traffic crashes. This paper proposes a novel method, namely boosted regression trees (BRT), to investigate the complex and nonlinear relationships in high-variance traffic crash data. Taiwanese 2004-2005 single-vehicle motorcycle crash data are used to demonstrate the utility of BRT. Traditional logistic regression and classification and regression tree (CART) models are also used to compare estimation results and external validity. Both the in-sample cross-validation and out-of-sample validation results show that an increase in tree complexity improves classification performance, although with diminishing returns, indicating a limited factor complexity of single-vehicle motorcycle crashes. The effects of crucial variables, including geographical, time, and sociodemographic factors, explain some fatal crashes. Relatively unique fatal crashes are better approximated by interactive terms, especially combinations of behavioral factors. BRT models generally provide better transferability than conventional logistic regression and CART models. This study also discusses the implications of the results for devising safety policies. Copyright © 2012 Elsevier Ltd. All rights reserved.
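The following sketch shows the BRT idea on synthetic data: a gradient-boosted tree classifier for fatal versus non-fatal crashes, with cross-validated performance compared across tree depths (interaction complexity). The feature names, data, and logistic data-generating process are illustrative assumptions, not the Taiwanese dataset.

```python
# Sketch of the boosted-tree idea on synthetic data: predict fatal vs non-fatal
# single-vehicle motorcycle crashes from a few factors and inspect how
# out-of-sample performance changes with tree depth (factor complexity).
# Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 2000
X = np.column_stack([
    rng.integers(0, 24, n),      # hour of day
    rng.integers(0, 2, n),       # rural vs urban
    rng.integers(18, 80, n),     # rider age
    rng.integers(0, 2, n),       # helmet use
])
logit = -3.0 + 0.8*(X[:, 0] < 6) + 1.0*X[:, 1] + 0.02*X[:, 2] - 0.8*X[:, 3]
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # synthetic fatal/non-fatal label

for depth in (1, 2, 4):          # increasing interaction (tree) complexity
    brt = GradientBoostingClassifier(max_depth=depth, n_estimators=200)
    auc = cross_val_score(brt, X, y, cv=5, scoring="roc_auc").mean()
    print(f"max_depth={depth}: CV AUC = {auc:.3f}")
```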
Scaling field data to calibrate and validate moderate spatial resolution remote sensing models
Baccini, A.; Friedl, M.A.; Woodcock, C.E.; Zhu, Z.
2007-01-01
Validation and calibration are essential components of nearly all remote sensing-based studies. In both cases, ground measurements are collected and then related to the remote sensing observations or model results. In many situations, and particularly in studies that use moderate resolution remote sensing, a mismatch exists between the sensor's field of view and the scale at which in situ measurements are collected. The use of in situ measurements for model calibration and validation, therefore, requires a robust and defensible method to spatially aggregate ground measurements to the scale at which the remotely sensed data are acquired. This paper examines this challenge and specifically considers two different approaches for aggregating field measurements to match the spatial resolution of moderate spatial resolution remote sensing data: (a) landscape stratification; and (b) averaging of fine spatial resolution maps. The results show that an empirically estimated stratification based on a regression tree method provides a statistically defensible and operational basis for performing this type of procedure.
Katz, Andrea C; Hee, Danelle; Hooker, Christine I; Shankman, Stewart A
2017-10-03
In Section III of the DSM-5, the American Psychiatric Association (APA) proposes a pathological personality trait model of personality disorders. The recommended assessment instrument is the Personality Inventory for the DSM-5 (PID-5), an empirically derived scale that assesses personality pathology along five domains and 25 facets. Although the PID-5 demonstrates strong convergent validity with other personality measures, no study has examined whether it identifies traits that run in families, another important step toward validating the DSM-5's dimensional model. Using a family study method, we investigated familial associations of PID-5 domain and facet scores in 195 families, examining associations between parents and offspring and across siblings. The Psychoticism, Antagonism, and Detachment domains showed significant familial aggregation, as did facets of Negative Affect and Disinhibition. Results are discussed in the context of personality pathology and family study methodology. The results also help validate the PID-5, given the familial nature of personality traits.
Growth of finiteness in the third year of life: replication and predictive validity.
Hadley, Pamela A; Rispoli, Matthew; Holt, Janet K; Fitzgerald, Colleen; Bahnsen, Alison
2014-06-01
The authors of this study investigated the validity of tense and agreement productivity (TAP) scoring in diverse sentence frames obtained during conversational language sampling as an alternative measure of finiteness for use with young children. Longitudinal language samples were used to model TAP growth from 21 to 30 months of age for 37 typically developing toddlers. Empirical Bayes (EB) linear and quadratic growth coefficients and child sex were then used to predict elicited grammar composite scores on the Test of Early Grammatical Impairment (TEGI; Rice & Wexler, 2001) at 36 months. A random-effects quadratic model with no intercept best characterized TAP growth, replicating the findings of Rispoli, Hadley, and Holt (2009). The combined regression model was significant, with the 3 variables accounting for 55.5% of the variance in the TEGI composite scores. These findings establish TAP growth as a valid metric of finiteness in the 3rd year of life. Developmental and theoretical implications are discussed.
Prototypic automated continuous recreational water quality monitoring of nine Chicago beaches
Shively, Dawn; Nevers, Meredith; Breitenbach, Cathy; Phanikumar, Mantha S.; Przybyla-Kelly, Kasia; Spoljaric, Ashley M.; Whitman, Richard L.
2015-01-01
Predictive empirical modeling is used in many locations worldwide as a rapid, alternative recreational water quality management tool to eliminate the delayed notifications associated with traditional fecal indicator bacteria (FIB) culturing (referred to as the persistence model, PM) and to prevent errors in releasing swimming advisories. The goal of this study was to develop a fully automated water quality management system for multiple beaches using predictive empirical models (EMs) and state-of-the-art technology. Many recent EMs rely on samples or data collected manually, which adds to analysis time and increases the burden on the beach manager. In this study, data from water quality buoys and weather stations were transmitted through cellular telemetry to a web hosting service. An executable program simultaneously retrieved and aggregated data for regression equations and calculated EM results each morning at 9:30 AM; results were transferred through RSS feed to a website, mapped to each beach, and received by the lifeguards to be posted at the beach. Models were initially developed for five beaches, but by the third year, 21 beaches were managed using refined and validated modeling systems. The adjusted R2 of the regressions relating Escherichia coli to hydrometeorological variables for the EMs were greater than those for the PMs, and ranged from 0.220 to 0.390 (2011) and 0.103 to 0.381 (2012). Validation results in 2013 revealed reduced predictive capabilities; however, three of the originally modeled beaches showed improvement in 2013 compared to 2012. The EMs generally showed higher accuracy and specificity than the PMs, and sensitivity was low for both approaches. In 2012, EM accuracy was 70–97%, specificity 71–100%, and sensitivity 0–64%; in 2013, accuracy was 68–97%, specificity 73–100%, and sensitivity 0–36%. Factors that may have affected model capabilities include instrument malfunction, non-point source inputs, and sparse calibration data. The modeling system developed is the most extensive, fully-automated system for recreational water quality developed to date. Key insights for refining and improving large-scale empirical models for beach management have been developed through this multi-year effort.
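A stripped-down version of the nowcast regression step might look like the following: regress log10 E. coli on hydrometeorological inputs and convert predictions into post/no-post advisories at a single-sample threshold (235 CFU/100 mL is a commonly used beach standard and is assumed here). The data, predictors, and coefficients are synthetic placeholders, not the Chicago models.

```python
# Minimal sketch of a beach nowcast regression: log10 E. coli modelled from
# hydrometeorological inputs, then converted to a post/no-post advisory at an
# assumed 235 CFU/100 mL threshold. Data and coefficients are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 300
X = np.column_stack([
    rng.uniform(0, 30, n),      # turbidity (NTU)
    rng.uniform(0, 2, n),       # wave height (m)
    rng.uniform(0, 50, n),      # 24-h rainfall (mm)
])
log_ecoli = 1.2 + 0.03*X[:, 0] + 0.4*X[:, 1] + 0.01*X[:, 2] + rng.normal(0, 0.4, n)

fit = LinearRegression().fit(X, log_ecoli)
predicted = fit.predict(X)

advisory_pred = predicted >= np.log10(235)    # model-issued advisories
advisory_true = log_ecoli >= np.log10(235)    # "observed" exceedances
accuracy = (advisory_pred == advisory_true).mean()
print(f"R^2 = {fit.score(X, log_ecoli):.2f}, advisory accuracy = {accuracy:.2f}")
```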
Fischer, Katharina E
2012-08-02
Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least square path modelling (PLS-PM) to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. After modification by dropping two indicators that showed poor measures in the measurement models' quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of 'transparency', 'participation', 'scientific rigour' and 'reasonableness'. The structural equation model was among the first applications of PLS-PM to coverage decision-making. It allowed testing of hypotheses in situations where there are links between several non-observable constructs. PLS-PM was compatible in accounting for the complexity of coverage decisions to obtain a more realistic perspective for empirical analysis. The model specification can be used for hypothesis testing by using larger sample sizes and for data in the full domain of health technologies.
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data because of the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples, a weighted sum of inverse Gaussian density functions (IGs) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum-of-IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
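The fitted form described (a weighted sum of inverse Gaussian densities, from which AUC and mean residence time follow) can be sketched as below. The weights and shape parameters are invented for illustration, and scipy's (mu, scale) parameterization of the inverse Gaussian is assumed rather than the author's.

```python
# Hedged sketch of the fitted form: an oral concentration-time curve represented
# as a weighted sum of inverse Gaussian densities, from which AUC and mean
# residence time follow directly. Weights and parameters below are invented for
# illustration; scipy's invgauss parameterization (shape mu, scale) is used.
import numpy as np
from scipy.stats import invgauss

# Component weights (area contributions) and (mu, scale) shape parameters
weights = np.array([60.0, 25.0])              # e.g. mg*h/L per component
mus     = np.array([0.5, 2.0])
scales  = np.array([2.0, 4.0])

def conc(t):
    """Concentration-time curve as a weighted sum of IG densities."""
    return sum(w * invgauss.pdf(t, m, scale=s)
               for w, m, s in zip(weights, mus, scales))

t = np.linspace(0.01, 48, 2000)
auc = np.trapz(conc(t), t)                     # total area under the curve
mrt = np.trapz(t * conc(t), t) / auc           # mean residence time
print(f"AUC = {auc:.1f}, MRT = {mrt:.1f} h")
```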
Kuehnapfel, Andreas; Ahnert, Peter; Loeffler, Markus; Scholz, Markus
2017-02-01
Body surface area is a physiological quantity relevant for many medical applications. In clinical practice, it is determined by empirical formulae. 3D laser-based anthropometry provides an easy and effective way to measure body surface area but is not ubiquitously available. We used laser-based anthropometry data from a population-based study to assess the validity of published and commonly used empirical formulae. We performed a large population-based study on adults, collecting classical anthropometric measurements and 3D body surface assessments (N = 1435). We determined the reliability of the 3D body surface assessment and the validity of 18 different empirical formulae proposed in the literature. The performance of these formulae was studied in subsets of sex and BMI. Finally, improvements of parameter settings of the formulae and adjustments for sex and BMI were considered. 3D body surface measurements show excellent intra- and inter-rater reliability of 0.998 (the overall concordance correlation coefficient, OCCC, was used as the measure of agreement). The empirical formulae of Fujimoto and Watanabe, Shuter and Aslani, and Sendroy and Cecchini performed best, with excellent concordance (OCCC > 0.949) even in subgroups of sex and BMI. Re-parametrization of the formulae and adjustment for sex and BMI slightly improved results. In adults, 3D laser-based body surface assessment is a reliable alternative to estimation by empirical formulae. However, there are empirical formulae showing excellent results even in subgroups of sex and BMI, with only little room for improvement.
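For context, two widely used empirical body-surface-area formulae (Du Bois & Du Bois and Mosteller) are shown below for illustration; the study evaluated 18 such formulae, and the specific best-performing equations it identifies are not reproduced here.

```python
# Two widely used empirical body-surface-area formulae (Du Bois & Du Bois;
# Mosteller), shown for illustration only. They are not necessarily the
# best-performing formulae identified by the study.
def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    """Du Bois & Du Bois (1916): BSA in m^2."""
    return 0.007184 * weight_kg**0.425 * height_cm**0.725

def bsa_mosteller(weight_kg: float, height_cm: float) -> float:
    """Mosteller (1987): BSA in m^2."""
    return ((weight_kg * height_cm) / 3600.0) ** 0.5

print(round(bsa_du_bois(75, 175), 3), round(bsa_mosteller(75, 175), 3))
```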
NASA Astrophysics Data System (ADS)
Dalguer, Luis A.; Fukushima, Yoshimitsu; Irikura, Kojiro; Wu, Changjiang
2017-09-01
Inspired by the first workshop on Best Practices in Physics-Based Fault Rupture Models for Seismic Hazard Assessment of Nuclear Installations (BestPSHANI), conducted by the International Atomic Energy Agency (IAEA) on 18-20 November 2015 in Vienna (http://www-pub.iaea.org/iaeameetings/50896/BestPSHANI), this PAGEOPH topical volume collects several extended articles from this workshop as well as several new contributions. A total of 17 papers have been selected on topics ranging from the seismological aspects of earthquake cycle simulations for source-scaling evaluation, seismic source characterization, source inversion and ground motion modeling (based on finite fault rupture using dynamic, kinematic, stochastic and empirical Green's function approaches) to the engineering application of simulated ground motion for the analysis of the seismic response of structures. These contributions include applications to real earthquakes and descriptions of current practice for assessing seismic hazard in terms of nuclear safety in low-seismicity areas, as well as proposals for physics-based hazard assessment for critical structures near large earthquakes. Collectively, the papers of this volume highlight the usefulness of physics-based models for evaluating and understanding the physical causes of observed and empirical data, as well as for predicting ground motion beyond the range of recorded data. Particular importance is given to the validation and verification of the models by comparing synthetic results with observed data and empirical models.
NASA Astrophysics Data System (ADS)
Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith
2011-12-01
Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld deposition. In this work, empirical models were developed that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e. surface texture, proportion of finer Widmanstätten microstructure) for the SMD process. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increases with increasing current. Programmed diameter had a significant effect on the percentage of shrinkage, which decreased with increasing component size. Surface finish decreased with decreasing step size and current.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dearing, J F; Nelson, W R; Rose, S D
Computational thermal-hydraulic models of a 19-pin, electrically heated, wire-wrap liquid-metal fast breeder reactor test bundle were developed using two well-known subchannel analysis codes, COBRA III-C and SABRE-1 (wire-wrap version). These two codes use similar subchannel control volumes for the finite difference conservation equations but vary markedly in solution strategy and modeling capability. In particular, the empirical wire-wrap-forced diversion crossflow models are different. Surprisingly, however, crossflow velocity predictions of the two codes are very similar. Both codes show generally good agreement with experimental temperature data from a test in which a large radial temperature gradient was imposed. Differences between data and code results are probably caused by experimental pin bowing, which is presently the limiting factor in validating coded empirical models.
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Gaining an understanding of degradation mechanisms and their characterization is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent of developing relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from the results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
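One common form an acceleration factor model can take is Arrhenius temperature acceleration, sketched below. This is not the paper's validated model; the activation energy, temperatures, and test duration are placeholders chosen only to show how an acceleration factor converts test hours into projected field years.

```python
# Hedged sketch of one common acceleration-factor form (Arrhenius temperature
# acceleration). The paper's validated models are not reproduced here; the
# activation energy and temperatures below are placeholders.
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K

def arrhenius_af(ea_ev: float, t_use_c: float, t_test_c: float) -> float:
    """Acceleration factor of a test at t_test_c relative to use at t_use_c."""
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_test))

af = arrhenius_af(ea_ev=0.7, t_use_c=45.0, t_test_c=85.0)   # placeholder values
test_hours = 2000.0                                          # e.g. damp-heat exposure
print(f"AF = {af:.1f}; {test_hours} test hours = {af*test_hours/8760:.1f} field years")
```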
Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).
Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young
2016-04-01
Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.
Does the U.S. exercise contagion on Italy? A theoretical model and empirical evidence
NASA Astrophysics Data System (ADS)
Cerqueti, Roy; Fenga, Livio; Ventura, Marco
2018-06-01
This paper deals with the theme of contagion in financial markets. To this end, we develop a model based on Mixed Poisson Processes to describe the abnormal returns of the financial markets of two considered countries. The article defines the theoretical conditions to be satisfied in order to state that one of them - the so-called leader - exercises contagion on the other - the follower. Specifically, we employ a probabilistic invariance result stating that a suitable transformation of a Mixed Poisson Process is still a Mixed Poisson Process. The theoretical claim is validated by implementing an extensive simulation analysis grounded on empirical data. The countries considered are the U.S. (as the leader) and Italy (as the follower), and the period under scrutiny is very large, ranging from 1970 to 2014.
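To illustrate what a mixed Poisson count process is (this is not the paper's estimator), the sketch below draws a random intensity for each window and then Poisson counts conditional on it, producing the over-dispersion that distinguishes mixed Poisson from plain Poisson counts; the Gamma mixing distribution and its parameters are arbitrary choices.

```python
# Illustration (not the paper's estimator): a mixed Poisson count process where
# the rate of abnormal-return events is itself random (Gamma-distributed),
# producing the over-dispersion that plain Poisson counts cannot capture.
import numpy as np

rng = np.random.default_rng(11)
n_windows = 10_000
rates = rng.gamma(shape=2.0, scale=1.5, size=n_windows)   # random intensity per window
counts = rng.poisson(rates)                               # mixed Poisson counts

print("mean =", round(counts.mean(), 2), "variance =", round(counts.var(), 2))
# For a plain Poisson process mean and variance would coincide; here the
# variance exceeds the mean, the signature of a mixed Poisson process.
```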
ERIC Educational Resources Information Center
Adeyemo, Emily Oluseyi
2012-01-01
This study examined the impact of publication bias on a meta-analysis of empirical studies on the validity of University Matriculation Examinations in Nigeria, with a view to determining the level of difference between published and unpublished articles. Specifically, the design was ex post facto, a causal-comparative design. The sample size consisted…
Developing Cognitive Models for Social Simulation from Survey Data
NASA Astrophysics Data System (ADS)
Alt, Jonathan K.; Lieberman, Stephen
The representation of human behavior and cognition continues to challenge the modeling and simulation community. The use of survey and polling instruments to inform belief states, issue stances and action choice models provides a compelling means of developing models and simulations with empirical data. Using these types of data to populate social simulations can greatly enhance the feasibility of validation efforts, the reusability of social and behavioral modeling frameworks, and the testable reliability of simulations. We provide a case study demonstrating these effects, document the use of survey data to develop cognitive models, and suggest future paths forward for social and behavioral modeling.
A semi-mechanistic model of dead fine fuel moisture for Temperate and Mediterranean ecosystems
NASA Astrophysics Data System (ADS)
Resco de Dios, Víctor; Fellows, Aaron; Boer, Matthias; Bradstock, Ross; Nolan, Rachel; Goulden, Michel
2014-05-01
Fire is a major disturbance in terrestrial ecosystems globally. It has an enormous economic and social cost, and leads to fatalities in the worst cases. The moisture content of the vegetation (fuel moisture) is one of the main determinants of fire risk. Predicting the moisture content of dead fine fuel (< 2.5 cm in diameter) is particularly important, as this is often the most important component of the fuel complex for fire propagation. A variety of drought indices and empirical and mechanistic models have been proposed to model fuel moisture. A commonality across these different approaches is that they have been validated neither across large temporal datasets nor across broadly different vegetation types. Here, we present the results of a study performed at 6 locations in California, USA (5 sites) and New South Wales, Australia (1 site), where 10-hour fuel moisture content was measured continuously every 30 minutes during one full year at each site. We observed that drought indices did not accurately predict fuel moisture, and that empirical and mechanistic models both needed site-specific calibrations, which hinders their global application as indices of fuel moisture. We developed a novel, single-equation, semi-mechanistic model based on atmospheric vapor-pressure deficit. Across sites and years, the mean absolute error (MAE) of predicted fuel moisture was 4.7%. MAE dropped below 1% in the critical range of fuel moisture below 10%. The high simplicity, accuracy and precision of our model make it suitable for a wide range of applications, from operational purposes to global vegetation models.
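The single-equation form described (fuel moisture decaying exponentially with vapor-pressure deficit) can be sketched as below; the three coefficients are illustrative placeholders, not the calibrated values reported in the study.

```python
# Sketch of the semi-mechanistic form described: dead fine fuel moisture as an
# exponential decay with vapor-pressure deficit (VPD). The coefficients below
# are illustrative placeholders, not the calibrated values from the paper.
import numpy as np

def fuel_moisture(vpd_kpa: np.ndarray,
                  fm0: float = 5.0,      # residual moisture at high VPD (%)
                  fm1: float = 50.0,     # additional moisture as VPD -> 0 (%)
                  m: float = 0.6) -> np.ndarray:
    """Dead fine fuel moisture (%) as a single-equation function of VPD (kPa)."""
    return fm0 + fm1 * np.exp(-m * vpd_kpa)

vpd = np.array([0.2, 0.5, 1.0, 2.0, 4.0])
print(np.round(fuel_moisture(vpd), 1))
```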
Familiarizing Students with the Empirically Supported Treatment Approaches for Childhood Problems.
ERIC Educational Resources Information Center
Wilkins, Victoria; Chambliss, Catherine
The clinical research literature exploring the efficacy of particular treatment approaches is reviewed with the intent of facilitating the training of counseling students. Empirically supported treatments (ESTs) are defined operationally as evidence-based treatments, following the listing of empirically validated psychological treatments reported by…
2012-01-01
Background Decision-making in healthcare is complex. Research on coverage decision-making has focused on comparative studies for several countries, statistical analyses for single decision-makers, the decision outcome and appraisal criteria. Accounting for decision processes extends the complexity, as they are multidimensional and process elements need to be regarded as latent constructs (composites) that are not observed directly. The objective of this study was to present a practical application of partial least square path modelling (PLS-PM) to evaluate how it offers a method for empirical analysis of decision-making in healthcare. Methods Empirical approaches that applied PLS-PM to decision-making in healthcare were identified through a systematic literature search. PLS-PM was used as an estimation technique for a structural equation model that specified hypotheses between the components of decision processes and the reasonableness of decision-making in terms of medical, economic and other ethical criteria. The model was estimated for a sample of 55 coverage decisions on the extension of newborn screening programmes in Europe. Results were evaluated by standard reliability and validity measures for PLS-PM. Results After modification by dropping two indicators that showed poor measures in the measurement models’ quality assessment and were not meaningful for newborn screening, the structural equation model estimation produced plausible results. The presence of three influences was supported: the links between both stakeholder participation or transparency and the reasonableness of decision-making; and the effect of transparency on the degree of scientific rigour of assessment. Reliable and valid measurement models were obtained to describe the composites of ‘transparency’, ‘participation’, ‘scientific rigour’ and ‘reasonableness’. Conclusions The structural equation model was among the first applications of PLS-PM to coverage decision-making. It allowed testing of hypotheses in situations where there are links between several non-observable constructs. PLS-PM was compatible in accounting for the complexity of coverage decisions to obtain a more realistic perspective for empirical analysis. The model specification can be used for hypothesis testing by using larger sample sizes and for data in the full domain of health technologies. PMID:22856325
Bryant, Fred B
2016-12-01
This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods-advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data. © 2016 John Wiley & Sons, Ltd.
Carmona-Bayonas, A; Jiménez-Fonseca, P; Virizuela Echaburu, J; Sánchez Cánovas, M; Ayala de la Peña, F
2017-09-01
Since its publication more than 15 years ago, the MASCC score has been internationally validated any number of times and recommended by most clinical practice guidelines for the management of febrile neutropenia (FN) around the world. We have used an empirical data-supported simulated scenario to demonstrate that, despite everything, the MASCC score is impractical as a basis for decision-making. A detailed analysis of reasons supporting the clinical irrelevance of this model is performed. First, seven of its eight variables are "innocent bystanders" that contribute little to selecting low-risk candidates for ambulatory management. Secondly, the training series was hardly representative of outpatients with solid tumors and low-risk FN. Finally, the simultaneous inclusion of key variables both in the model and in the outcome explains its successful validation in various series of patients. Alternative methods of prognostic classification, such as the Clinical Index of Stable Febrile Neutropenia, have been specifically validated for patients with solid tumors and should replace the MASCC model in situations of clinical uncertainty.
Lessons learned from recent geomagnetic disturbance model validation activities
NASA Astrophysics Data System (ADS)
Pulkkinen, A. A.; Welling, D. T.
2017-12-01
Due to concerns pertaining to the impact of geomagnetically induced currents on ground-based infrastructure, there has been significantly elevated interest in applying models for local geomagnetic disturbance or "delta-B" predictions. Correspondingly, there has been an elevated need for testing the quality of the delta-B predictions generated by modern empirical and physics-based models. To address this need, community-wide activities were launched under the GEM Challenge framework, and one culmination of the activities was the validation and selection of models that were transitioned into operations at NOAA SWPC. The community-wide delta-B action is continued under the CCMC-facilitated International Forum for Space Weather Capabilities Assessment and its "Ground Magnetic Perturbations: dBdt, delta-B, GICs, FACs" working group. The new delta-B working group builds on past experiences and expands the collaborations to cover the entire international space weather community. In this paper, we discuss the key lessons learned from the past delta-B validation exercises and lay out the path forward for building on those experiences under the new delta-B working group.
NASA Technical Reports Server (NTRS)
Hallock, Ashley K.; Polzin, Kurt A.
2011-01-01
A two-dimensional semi-empirical model of pulsed inductive thrust efficiency is developed to predict the effect of a conical coil geometry on thrust efficiency. The model includes electromagnetic and gas-dynamic forces but excludes energy conversion from radial motion to axial motion, with the intention of characterizing thrust efficiency loss mechanisms that result from a conical versus a flat inductive coil geometry. The range of conical pulsed inductive thruster geometries to which this model can be applied is explored with the use of finite element analysis. A semi-empirical relation for inductance as a function of current sheet radial and axial position is the limiting feature of the model, restricting its applicability as a function of half-cone angle to a range from ten degrees to about 60 degrees. The model is nondimensionalized, yielding a set of dimensionless performance scaling parameters. Results of the model indicate that radial current sheet motion changes the axial dynamic impedance parameter at which thrust efficiency is maximized. This shift indicates that when radial current sheet motion is permitted in the model, longer characteristic circuit timescales are more efficient, which can be attributed to a lower current sheet axial velocity as the plasma more rapidly decouples from the coil through radial motion. Thrust efficiency is shown to increase monotonically for decreasing values of the radial dynamic impedance parameter. This trend indicates that, to maximize thrust efficiency, the radial decoupling timescale should be long compared to the characteristic circuit timescale.
Understanding human dynamics in microblog posting activities
NASA Astrophysics Data System (ADS)
Jiang, Zhihong; Zhang, Yubao; Wang, Hui; Li, Pei
2013-02-01
Human activity patterns are an important issue in behavior dynamics research. Empirical evidence indicates that human activity patterns can be characterized by a heavy-tailed inter-event time distribution. However, most researchers model only the power-law feature of the inter-event time distribution, and the overlooked non-power-law features are likely to be nontrivial. In this work, we propose a behavior dynamics model, called the finite memory model, in which humans adaptively change their activity rates based on a finite memory of recent activities, driven by inherent individual interest. Theoretical analysis shows that the finite memory model can properly explain various heavy-tailed inter-event time distributions, including a regular power law and some non-power-law deviations. To validate the model, we carry out an empirical study based on microblogging activity from thousands of microbloggers in the Celebrity Hall of the Sina microblog. The results further show that the model is reasonably effective. We conclude that finite memory is an effective dynamics element to describe the heavy-tailed human activity pattern.
The dual process model of coping with bereavement: rationale and description.
Stroebe, M; Schut, H
1999-01-01
There are shortcomings in traditional theorizing about effective ways of coping with bereavement, most notably, with respect to the so-called "grief work hypothesis." Criticisms include imprecise definition, failure to represent dynamic processing that is characteristic of grieving, lack of empirical evidence and validation across cultures and historical periods, and a limited focus on intrapersonal processes and on health outcomes. Therefore, a revised model of coping with bereavement, the dual process model, is proposed. This model identifies two types of stressors, loss- and restoration-oriented, and a dynamic, regulatory coping process of oscillation, whereby the grieving individual at times confronts, at other times avoids, the different tasks of grieving. This model proposes that adaptive coping is composed of confrontation--avoidance of loss and restoration stressors. It also argues the need for dosage of grieving, that is, the need to take respite from dealing with either of these stressors, as an integral part of adaptive coping. Empirical research to support this conceptualization is discussed, and the model's relevance to the examination of complicated grief, analysis of subgroup phenomena, as well as interpersonal coping processes, is described.
Empirical conversion of the vertical profile of reflectivity from Ku-band to S-band frequency
NASA Astrophysics Data System (ADS)
Cao, Qing; Hong, Yang; Qi, Youcun; Wen, Yixin; Zhang, Jian; Gourley, Jonathan J.; Liao, Liang
2013-02-01
This paper presents an empirical method for converting reflectivity from Ku-band (13.8 GHz) to S-band (2.8 GHz) for several hydrometeor species, which facilitates the incorporation of Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) measurements into quantitative precipitation estimation (QPE) products from the U.S. Next-Generation Radar (NEXRAD). The development of empirical dual-frequency relations is based on theoretical simulations, which have assumed appropriate scattering and microphysical models for liquid and solid hydrometeors (raindrops, snow, and ice/hail). Particle phase, shape, orientation, and density (especially for snow particles) have been considered in applying the T-matrix method to compute the scattering amplitudes. Gamma particle size distribution (PSD) is utilized to model the microphysical properties in the ice region, melting layer, and raining region of precipitating clouds. The variability of PSD parameters is considered to study the characteristics of dual-frequency reflectivity, especially the variations in radar dual-frequency ratio (DFR). The empirical relations between DFR and Ku-band reflectivity have been derived for particles in different regions within the vertical structure of precipitating clouds. The reflectivity conversion using the proposed empirical relations has been tested using real data collected by TRMM-PR and a prototype polarimetric WSR-88D (Weather Surveillance Radar 88 Doppler) radar, KOUN. The processing and analysis of collocated data demonstrate the validity of the proposed empirical relations and substantiate their practical significance for reflectivity conversion, which is essential to the TRMM-based vertical profile of reflectivity correction approach in improving NEXRAD-based QPE.
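The conversion amounts to adding a dual-frequency ratio, expressed as a function of Ku-band reflectivity, to the measured Ku-band value. The sketch below illustrates the mechanics with hypothetical polynomial coefficients; the actual coefficients in the paper are derived per hydrometeor region from T-matrix simulations and are not reproduced here.

```python
import numpy as np

# Hypothetical coefficients of DFR(Z_Ku) = Z_S - Z_Ku (in dB), one set per
# hydrometeor region. These numbers are placeholders, not the published fits.
DFR_COEFFS = {
    "rain": (0.0, 0.005, 0.0001),   # a0 + a1*Z + a2*Z^2
    "snow": (0.5, 0.020, 0.0005),
}

def ku_to_s_band(z_ku_dbz, region="rain"):
    """Convert Ku-band reflectivity (dBZ) to S-band via an empirical DFR relation."""
    a0, a1, a2 = DFR_COEFFS[region]
    z = np.asarray(z_ku_dbz, dtype=float)
    return z + a0 + a1 * z + a2 * z ** 2

print(ku_to_s_band([20.0, 35.0, 50.0], region="rain"))
```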
Nelson, Jon P
2010-03-01
This paper assesses the methodology employed in longitudinal studies of advertising and youth drinking and smoking behaviors. These studies often are given a causal interpretation in the psychology and public health literatures. Four issues are examined from the perspective of econometrics. First, specification and validation of empirical models. Second, empirical issues associated with measures of advertising receptivity and exposure. Third, potential endogeneity of receptivity and exposure variables. Fourth, sample selection bias in baseline and follow-up surveys. Longitudinal studies reviewed include 20 studies of youth drinking and 26 studies of youth smoking. Substantial shortcomings are found in the studies, which preclude a causal interpretation.
Comparison and validation of acoustic response models for wind noise reduction pipe arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marty, Julien; Denis, Stéphane; Gabrielson, Thomas
The detection capability of the infrasound component of the International Monitoring System (IMS) is tightly linked to the performance of its wind noise reduction systems. The wind noise reduction solution implemented at all IMS infrasound measurement systems consists of a spatial distribution of air inlets connected to the infrasound sensor through a network of pipes. This system, usually referred to as “pipe array,” has proven its efficiency in operational conditions. The objective of this paper is to present the results of the comparison and validation of three distinct acoustic response models for pipe arrays. The characteristics of the models and the results obtained for a defined set of pipe array configurations are described. A field experiment using a newly developed infrasound generator, dedicated to the validation of these models, is then presented. The comparison between the modeled and empirical acoustic responses shows that two of the three models can be confidently used to estimate pipe array acoustic responses. Lastly, this study paves the way to the deconvolution of IMS infrasound data from pipe array responses and to the optimization of pipe array design to IMS applications.
NASA Astrophysics Data System (ADS)
Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika
2018-05-01
Spatial analysis of water quality impact assessment of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) due to highway-broadening-induced vehicular traffic change in the East district of Sikkim. The pollution load of the highway runoff was estimated using an Average Annual Daily Traffic-based empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping. An Analytic Hierarchy Process-based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to impact analysis of the project concerned. It can serve as an aid to a decision support system for the project stakeholders. The applicability of the SWQIA model needs to be explored and validated in the context of a larger set of water quality parameters and project scenarios at a greater spatial scale.
NASA Astrophysics Data System (ADS)
Wickramasinghe, Vathsala; Gunawardena, Vathsala
2010-08-01
Extant literature suggests people-centred factors as one of the major areas influencing enterprise resource planning (ERP) implementation project success. Yet, to date, few empirical studies have attempted to validate the link between people-centred factors and ERP implementation project success. The purpose of this study is to empirically identify people-centred factors that are critical to ERP implementation projects in Sri Lanka. The study develops and empirically validates a framework for people-centred factors that influence the success of ERP implementation projects. A survey research methodology was used, and data were collected from 74 ERP implementation projects in Sri Lanka. The people-centred factors of 'project team competence', 'rewards' and 'communication and change' were found to significantly predict ERP implementation project success.
ERIC Educational Resources Information Center
Calvery, Suzannah Vallejo
2013-01-01
Mentoring research to date focuses on outcomes related to program goals and theoretical background, and almost all of these relate to the experience of the mentee. Very little research has been completed on the other side of the dyad--the mentor--despite the fact that mentor expectations and experience contribute significantly to the perceived…
Reconceptualising the external validity of discrete choice experiments.
Lancsar, Emily; Swait, Joffre
2014-10-01
External validity is a crucial but under-researched topic when considering using discrete choice experiment (DCE) results to inform decision making in clinical, commercial or policy contexts. We present the theory and tests traditionally used to explore external validity that focus on a comparison of final outcomes and review how this traditional definition has been empirically tested in health economics and other sectors (such as transport, environment and marketing) in which DCE methods are applied. While an important component, we argue that the investigation of external validity should be much broader than a comparison of final outcomes. In doing so, we introduce a new and more comprehensive conceptualisation of external validity, closely linked to process validity, that moves us from the simple characterisation of a model as being or not being externally valid on the basis of predictive performance, to the concept that external validity should be an objective pursued from the initial conceptualisation and design of any DCE. We discuss how such a broader definition of external validity can be fruitfully used and suggest innovative ways in which it can be explored in practice.
Zhao, Lue Ping; Carlsson, Annelie; Larsson, Helena Elding; Forsander, Gun; Ivarsson, Sten A; Kockum, Ingrid; Ludvigsson, Johnny; Marcus, Claude; Persson, Martina; Samuelsson, Ulf; Örtqvist, Eva; Pyo, Chul-Woo; Bolouri, Hamid; Zhao, Michael; Nelson, Wyatt C; Geraghty, Daniel E; Lernmark, Åke
2017-11-01
It is of interest to predict possible lifetime risk of type 1 diabetes (T1D) in young children for recruiting high-risk subjects into longitudinal studies of effective prevention strategies. Utilizing a case-control study in Sweden, we applied a recently developed next-generation targeted sequencing technology to genotype class II genes and applied an object-oriented regression to build and validate a prediction model for T1D. In the training set, estimated risk scores were significantly different between patients and controls (P = 8.12 × 10^-92), and the area under the curve (AUC) from the receiver operating characteristic (ROC) analysis was 0.917. Using the validation data set, we validated the result with an AUC of 0.886. Combining both training and validation data resulted in a predictive model with an AUC of 0.903. Further, we performed a "biological validation" by correlating risk scores with 6 islet autoantibodies, and found that the risk score was significantly correlated with IA-2A (Z-score = 3.628, P < 0.001). When applying this prediction model to the Swedish population, where the lifetime T1D risk ranges from 0.5% to 2%, we anticipate identifying approximately 20 000 high-risk subjects after testing all newborns, and this calculation would identify approximately 80% of all patients expected to develop T1D in their lifetime. Through both empirical and biological validation, we have established a prediction model for estimating lifetime T1D risk, using class II HLA. This prediction model should prove useful for future investigations to identify high-risk subjects for prevention research in high-risk populations. Copyright © 2017 John Wiley & Sons, Ltd.
Dierssen, Heidi M
2010-10-05
Phytoplankton biomass and productivity have been continuously monitored from ocean color satellites for over a decade. Yet, the most widely used empirical approach for estimating chlorophyll a (Chl) from satellites can be in error by a factor of 5 or more. Such variability is due to differences in absorption and backscattering properties of phytoplankton and related concentrations of colored-dissolved organic matter (CDOM) and minerals. The empirical algorithms have built-in assumptions that follow the basic precept of biological oceanography--namely, oligotrophic regions with low phytoplankton biomass are populated with small phytoplankton, whereas more productive regions contain larger bloom-forming phytoplankton. With a changing world ocean, phytoplankton composition may shift in response to altered environmental forcing, and CDOM and mineral concentrations may become uncoupled from phytoplankton stocks, creating further uncertainty and error in the empirical approaches. Hence, caution is warranted when using empirically derived Chl to infer climate-related changes in ocean biology. The Southern Ocean is already experiencing climatic shifts and shows substantial errors in satellite-derived Chl for different phytoplankton assemblages. Accurate global assessments of phytoplankton will require improved technology and modeling, enhanced field observations, and ongoing validation of our "eyes in space."
NASA Astrophysics Data System (ADS)
Stephens, G. K.; Sitnov, M. I.; Ukhorskiy, A. Y.; Vandegriff, J. D.; Tsyganenko, N. A.
2010-12-01
The dramatic increase of the geomagnetic field data volume available due to many recent missions, including GOES, Polar, Geotail, Cluster, and THEMIS, required at some point the appropriate qualitative transition in the empirical modeling tools. Classical empirical models, such as T96 and T02, used few custom-tailored modules to represent major magnetospheric current systems and simple data binning or loading-unloading inputs for their fitting with data and the subsequent applications. They have been replaced by more systematic expansions of the equatorial and field-aligned current contributions as well as by the advanced data-mining algorithms searching for events with the global activity parameters, such as the Sym-H index, similar to those at the time of interest, as is done in the model TS07D (Tsyganenko and Sitnov, 2007; Sitnov et al., 2008). The necessity to mine and fit data dynamically, with the individual subset of the database being used to reproduce the geomagnetic field pattern at every new moment in time, requires the corresponding transition in the use of the new empirical geomagnetic field models. It becomes more similar to runs-on-request offered by the Community Coordinated Modeling Center for many first principles MHD and kinetic codes. To provide this mode of operation for the TS07D model a new web-based modeling tool has been created and tested at the JHU/APL (http://geomag_field.jhuapl.edu/model/), and we discuss the first results of its performance testing and validation, including in-sample and out-of-sample modeling of a number of CME- and CIR-driven magnetic storms. We also report on the first tests of the forecasting version of the TS07D model, where the magnetospheric part of the macro-parameters involved in the data-binning process (Sym-H index and its trend parameter) are replaced by their solar wind-based analogs obtained using the Burton-McPherron-Russell approach.
ERIC Educational Resources Information Center
Grimes, Ka Rene
2014-01-01
The purpose of this paper is to provide a narrative of work in progress to validate a math app designed for number sense. To date I have conducted classroom research and pilot studies across ten early childhood classrooms in two schools and will begin an empirical study at the beginning of the 2014-2015 school year. Through my work I believe the…
Absolute Calibration of Optical Satellite Sensors Using Libya 4 Pseudo Invariant Calibration Site
NASA Technical Reports Server (NTRS)
Mishra, Nischal; Helder, Dennis; Angal, Amit; Choi, Jason; Xiong, Xiaoxiong
2014-01-01
The objective of this paper is to report the improvements in an empirical absolute calibration model developed at South Dakota State University using the Libya 4 (+28.55 deg, +23.39 deg) pseudo invariant calibration site (PICS). The approach was based on use of Terra MODIS as the radiometer to develop an absolute calibration model for the spectral channels covered by this instrument from the visible to the shortwave infrared. Earth Observing One (EO-1) Hyperion, with a spectral resolution of 10 nm, was used to extend the model to cover the visible and near-infrared regions. A simple Bidirectional Reflectance Distribution Function (BRDF) model was generated using Terra Moderate Resolution Imaging Spectroradiometer (MODIS) observations over Libya 4, and the resulting model was validated with nadir data acquired from satellite sensors such as Aqua MODIS and the Landsat 7 (L7) Enhanced Thematic Mapper (ETM+). The improvements in the absolute calibration model to account for the BRDF due to off-nadir measurements and annual variations in the atmosphere are summarized. BRDF models for off-nadir viewing angles have been derived using measurements from EO-1 Hyperion. In addition to L7 ETM+, measurements from other sensors such as Aqua MODIS, UK-2 Disaster Monitoring Constellation (DMC), ENVISAT Medium Resolution Imaging Spectrometer (MERIS) and the Operational Land Imager (OLI) onboard Landsat 8 (L8), which was launched in February 2013, were employed to validate the model. These satellite sensors differ in terms of the width of their spectral bandpasses, overpass time, off-nadir viewing capabilities, spatial resolution, temporal revisit time, etc. The results demonstrate that the proposed empirical calibration model has an accuracy of the order of 3% with an uncertainty of about 2% for the sensors used in the study.
Model confirmation in climate economics
Millner, Antony; McDermott, Thomas K. J.
2016-01-01
Benefit–cost integrated assessment models (BC-IAMs) inform climate policy debates by quantifying the trade-offs between alternative greenhouse gas abatement options. They achieve this by coupling simplified models of the climate system to models of the global economy and the costs and benefits of climate policy. Although these models have provided valuable qualitative insights into the sensitivity of policy trade-offs to different ethical and empirical assumptions, they are increasingly being used to inform the selection of policies in the real world. To the extent that BC-IAMs are used as inputs to policy selection, our confidence in their quantitative outputs must depend on the empirical validity of their modeling assumptions. We have a degree of confidence in climate models both because they have been tested on historical data in hindcasting experiments and because the physical principles they are based on have been empirically confirmed in closely related applications. By contrast, the economic components of BC-IAMs often rely on untestable scenarios, or on structural models that are comparatively untested on relevant time scales. Where possible, an approach to model confirmation similar to that used in climate science could help to build confidence in the economic components of BC-IAMs, or focus attention on which components might need refinement for policy applications. We illustrate the potential benefits of model confirmation exercises by performing a long-run hindcasting experiment with one of the leading BC-IAMs. We show that its model of long-run economic growth—one of its most important economic components—had questionable predictive power over the 20th century. PMID:27432964
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habte, A.; Sengupta, M.; Wilcox, S.
Models to compute Global Horizontal Irradiance (GHI) and Direct Normal Irradiance (DNI) have been in development over the last three decades. These models can be classified as empirical or physical, based on the approach. Empirical models relate ground-based observations with satellite measurements and use these relations to compute surface radiation. Physical models consider the radiation received from the earth at the satellite and create retrievals to estimate surface radiation. While empirical methods have traditionally been used for computing surface radiation for the solar energy industry, the advent of faster computing has made operational physical models viable. The Global Solar Insolation Project (GSIP) is an operational physical model from NOAA that computes GHI using the visible and infrared channel measurements from the GOES satellites. GSIP uses a two-stage scheme that first retrieves cloud properties and then uses those properties in a radiative transfer model to calculate surface radiation. NREL, the University of Wisconsin and NOAA have recently collaborated to adapt GSIP to create a 4 km GHI and DNI product every 30 minutes. This paper presents an outline of the methodology and a comprehensive validation using high-quality ground-based solar data from the National Oceanic and Atmospheric Administration (NOAA) Surface Radiation (SURFRAD) (http://www.srrb.noaa.gov/surfrad/sitepage.html) and Integrated Surface Insolation Study (ISIS) (http://www.srrb.noaa.gov/isis/isissites.html) networks, the Solar Radiation Research Laboratory (SRRL) at the National Renewable Energy Laboratory (NREL), and Sun Spot One (SS1) stations.
SMSynth: An Imagery Synthesis System for Soil Moisture Retrieval
NASA Astrophysics Data System (ADS)
Cao, Y.; Xu, L.; Peng, J.
2018-04-01
Soil moisture (SM) is an important variable in various research areas, such as weather and climate forecasting, agriculture, drought and flood monitoring and prediction, and human health. An ongoing challenge in estimating SM via synthetic aperture radar (SAR) is the development of retrieval methods; in particular, empirical models need as training samples a large number of measurements of SM and soil roughness parameters, which are very difficult to acquire. It is therefore difficult to develop empirical models using real SAR imagery, and it is necessary to develop methods to synthesize SAR imagery. To tackle this issue, a SAR imagery synthesis system based on SM, named SMSynth, is presented, which can simulate radar signals that are as realistic as possible relative to real SAR imagery. In SMSynth, SAR backscatter coefficients for each soil type are simulated via the Oh model under a Bayesian framework, where the spatial correlation is modeled by a Markov random field (MRF) model. The backscattering coefficients, simulated from the designed soil and sensor parameters, enter the Bayesian framework through the data likelihood; the soil and sensor parameters are set as close as possible to conditions on the ground and within the validity range of the Oh model. In this way, a complete and coherent Bayesian probabilistic framework is established. Experimental results show that SMSynth is capable of generating realistic SAR images that meet the need for a large number of training samples for empirical models.
Modelling seagrass growth and development to evaluate transplanting strategies for restoration
Renton, Michael; Airey, Michael; Cambridge, Marion L.; Kendrick, Gary A.
2011-01-01
Background and Aims Seagrasses are important marine plants that are under threat globally. Restoration by transplanting vegetative fragments or seedlings into areas where seagrasses have been lost is possible, but long-term trial data are limited. The goal of this study is to use available short-term data to predict long-term outcomes of transplanting seagrass. Methods A functional–structural plant model of seagrass growth that integrates data collected from short-term trials and experiments is presented. The model was parameterized for the species Posidonia australis, a limited validation of the model against independent data and a sensitivity analysis were conducted and the model was used to conduct a preliminary evaluation of different transplanting strategies. Key Results The limited validation was successful, and reasonable long-term outcomes could be predicted, based only on short-term data. Conclusions This approach for modelling seagrass growth and development enables long-term predictions of the outcomes to be made from different strategies for transplanting seagrass, even when empirical long-term data are difficult or impossible to collect. More validation is required to improve confidence in the model's predictions, and inclusion of more mechanism will extend the model's usefulness. Marine restoration represents a novel application of functional–structural plant modelling. PMID:21821624
Prognostics of Power Electronics, Methods and Validation Experiments
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Celaya, Jose R.; Biswas, Gautam; Goebel, Kai
2012-01-01
Failure of electronic devices is a concern for future electric aircraft that will see an increase of electronics to drive and control safety-critical equipment throughout the aircraft. As a result, investigation of precursors to failure in electronics and prediction of the remaining life of electronic components is of key importance. DC-DC power converters are power electronics systems employed typically as sourcing elements for avionics equipment. Current research efforts in prognostics for these power systems focus on the identification of failure mechanisms and the development of accelerated aging methodologies and systems to accelerate the aging process of test devices, while continuously measuring key electrical and thermal parameters. Preliminary model-based prognostics algorithms have been developed making use of empirical degradation models and physics-inspired degradation models, with focus on key components such as electrolytic capacitors and power MOSFETs (metal-oxide-semiconductor field-effect transistors). This paper presents current results on the development of validation methods for prognostics algorithms for power electrolytic capacitors, particularly the use of accelerated aging systems for algorithm validation. Validation of prognostics algorithms presents difficulties in practice due to the lack of run-to-failure experiments in deployed systems. By using accelerated experiments, we circumvent this problem in order to define initial validation activities.
Big data prediction of durations for online collective actions based on peak's timing
NASA Astrophysics Data System (ADS)
Nie, Shizhao; Wang, Zheng; Pujia, Wangmo; Nie, Yuan; Lu, Peng
2018-02-01
The Peak Model states that each collective action has a life cycle containing four periods: "prepare", "outbreak", "peak", and "vanish"; the peak determines the maximum energy and shapes the whole process. Re-simulation of the Peak Model indicates that there seems to be a stable ratio between the peak's timing (TP) and the total span (T), or duration, of collective actions, which needs further validation through empirical data on collective actions. Therefore, daily big data on online collective actions are applied to validate the model; the key is to check the ratio between the peak's timing and the total span. The big data are obtained from online data recording and mining of websites. The empirical big data verify that there is a stable ratio between TP and T; furthermore, it appears to be normally distributed. This rule holds both for the general case and for sub-types of collective actions. Given the distribution of the ratio, an estimated probability density function can be obtained, and therefore the span can be predicted via the peak's timing. Under the big data scenario, the instantaneous span (how long the collective action lasts or when it ends) can be monitored and predicted in real time. With denser data (big data), the estimation of the ratio's distribution becomes more robust, and the prediction of collective actions' spans or durations becomes more accurate.
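A minimal sketch of the prediction step follows: estimate the TP/T ratio distribution from historical records, then invert an observed peak time to a span estimate. The synthetic data, the normal parametrisation of the ratio and the interval construction are all assumptions for illustration, not the paper's estimates.

```python
import numpy as np

# Synthetic historical records of (peak timing TP, total span T), in days.
rng = np.random.default_rng(0)
spans = rng.integers(10, 60, size=500).astype(float)
ratios = np.clip(rng.normal(0.4, 0.08, size=500), 0.05, 0.95)  # assumed TP/T distribution
peaks = ratios * spans

# Estimate the ratio distribution from the historical sample.
r_mean, r_std = np.mean(peaks / spans), np.std(peaks / spans)

def predict_span(observed_peak_time):
    """Point estimate and rough 95% interval for total span given peak timing."""
    point = observed_peak_time / r_mean
    low = observed_peak_time / (r_mean + 1.96 * r_std)
    high = observed_peak_time / max(r_mean - 1.96 * r_std, 1e-6)
    return point, (low, high)

print(predict_span(12.0))  # e.g. peak observed on day 12
```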
NASA Astrophysics Data System (ADS)
Ramirez, N.; Afshari, Afshin; Norford, L.
2018-07-01
A steady-state Reynolds-averaged Navier-Stokes computational fluid dynamics (CFD) investigation of boundary-layer flow over a major portion of downtown Abu Dhabi is conducted. The results are used to derive the shear stress and characterize the logarithmic region for eight sub-domains, where the sub-domains overlap and are overlaid in the streamwise direction. They are characterized by a high frontal area index initially, which decreases significantly beyond the fifth sub-domain. The plan area index is relatively stable throughout the domain. For each sub-domain, the estimated local roughness length and displacement height derived from the CFD results are compared to prevalent empirical formulations. We further validate and tune a mixing-length model proposed by Coceal and Belcher (Q J R Meteorol Soc 130:1349-1372, 2004). Finally, the in-canopy wind-speed attenuation is analysed as a function of fetch. It is shown that, while there is some room for improvement in Macdonald's empirical formulations (Boundary-Layer Meteorol 97:25-45, 2000), Coceal and Belcher's mixing model in combination with the resolution method of Di Sabatino et al. (Boundary-Layer Meteorol 127:131-151, 2008) can provide a robust estimation of the average wind speed in the logarithmic region. Within the roughness sublayer, a properly parametrized Cionco exponential model is shown to be quite accurate.
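For reference, the Cionco exponential model mentioned above reduces to a one-line profile. The sketch below shows the standard form; the canopy height, reference wind speed and attenuation coefficient are illustrative values, not the parametrization tuned in the study.

```python
import numpy as np

def cionco_wind_profile(z, u_h, canopy_height, attenuation):
    """Cionco-type exponential in-canopy wind profile:
    u(z) = u_H * exp(a * (z / H - 1)) for z <= H, with u_H the wind speed at
    canopy height H and `a` the attenuation coefficient (placeholder value)."""
    z = np.asarray(z, dtype=float)
    return u_h * np.exp(attenuation * (z / canopy_height - 1.0))

heights = np.array([5.0, 10.0, 20.0, 30.0])  # metres above ground
print(cionco_wind_profile(heights, u_h=6.0, canopy_height=30.0, attenuation=2.5))
```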
Ren, Wen-Long; Wen, Yang-Jun; Dunwell, Jim M; Zhang, Yuan-Ming
2018-03-01
Although nonparametric methods in genome-wide association studies (GWAS) are robust in quantitative trait nucleotide (QTN) detection, the absence of polygenic background control in single-marker association in genome-wide scans results in a high false positive rate. To overcome this issue, we proposed an integrated nonparametric method for multi-locus GWAS. First, a new model transformation was used to whiten the covariance matrix of the polygenic matrix K and the environmental noise. Using the transformed model, the Kruskal-Wallis test along with least angle regression was then used to select all the markers potentially associated with the trait. Finally, all the selected markers were placed into a multi-locus model, their effects were estimated by empirical Bayes, and all the nonzero effects were further identified by a likelihood ratio test for true QTN detection. This method, named pKWmEB, was validated by a series of Monte Carlo simulation studies. As a result, pKWmEB effectively controlled the false positive rate, although a less stringent significance criterion was adopted. More importantly, pKWmEB retained the high power of the Kruskal-Wallis test and provided QTN effect estimates. To further validate pKWmEB, we re-analyzed four flowering-time-related traits in Arabidopsis thaliana, and detected some previously reported genes that were not identified by the other methods.
A Semiempirical Model for Sigma-Phase Precipitation in Duplex and Superduplex Stainless Steels
NASA Astrophysics Data System (ADS)
Ferro, P.; Bonollo, F.
2012-04-01
Sigma phase is known to reduce the mechanical properties and corrosion resistance of duplex and superduplex stainless steels. Therefore, heat treatments and welding must be carefully performed so as to avoid the appearance of such a detrimental phase, and clearly, models suitable to faithfully predict σ-phase precipitation are very useful tools. Most fully analytical models are based on thermodynamic calculations whose agreement with experimental results is not always good, so that such models should be used for qualitative purposes only. Alternatively, it is possible to exploit semiempirical models, where time-temperature-transformation (TTT) diagrams are empirically determined for a given alloy and the continuous-cooling-transformation (CCT) diagram is calculated from the TTT diagram. In this work, a semiempirical model for σ-phase precipitation in duplex and superduplex stainless steels, under both isothermal and unisothermal conditions, is proposed. Model parameters are calculated from empirical data and CCT diagrams are obtained by means of the additivity rule, whereas experimental measurements for model validation are taken from the literature. This model gives a satisfactory estimation of σ-phase precipitates during both isothermal aging and the continuous cooling process.
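The conversion of a TTT diagram into a CCT diagram by the additivity rule can be illustrated with a short numerical sketch: the fractional incubation consumed along a cooling path is summed until it reaches unity. The C-shaped TTT curve and all temperatures, times and rates below are placeholder assumptions, not the fitted parameters of the model.

```python
import numpy as np

def ttt_start_time(temp_c, nose_temp=850.0, nose_time=100.0, width=120.0):
    """Hypothetical C-shaped TTT start curve (seconds); fastest at the nose."""
    return nose_time * np.exp(((temp_c - nose_temp) / width) ** 2)

def cct_start(cooling_rate, t_start=1050.0, t_end=600.0, dt=0.1):
    """Additivity (Scheil) rule: precipitation starts when the summed fractional
    incubation consumed along the cooling path reaches 1."""
    t, temp, consumed = 0.0, t_start, 0.0
    while temp > t_end:
        consumed += dt / ttt_start_time(temp)
        if consumed >= 1.0:
            return t, temp   # time and temperature of sigma-phase start on the CCT diagram
        t += dt
        temp -= cooling_rate * dt
    return None              # no precipitation predicted for this cooling rate

print(cct_start(0.5))   # slow cooling (0.5 C/s): precipitation expected
print(cct_start(50.0))  # fast cooling: none within the transformation range
```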
Non-Linear Slosh Damping Model Development and Validation
NASA Technical Reports Server (NTRS)
Yang, H. Q.; West, Jeff
2015-01-01
Propellant tank slosh dynamics are typically represented by a spring-mass-damper mechanical model. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth-wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is a potential source of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only in the linear regime where the slosh amplitude is small. With increasing slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping on slosh amplitude. Accurately predicting the extremely low damping value of a smooth-wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to a minimum. This computational study demonstrates that, with proper grid resolution, CFD can indeed accurately predict the low-damping physics of smooth walls in the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreement. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When the slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. This discovery can lead to significant savings by reducing the number and size of slosh baffles in liquid propellant tanks.
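The amplitude dependence described above (constant in the linear regime, linear growth beyond a critical amplitude) can be written as a simple piecewise function. The sketch below uses placeholder values for the linear-regime damping ratio, the critical amplitude and the slope; it illustrates the functional form only.

```python
def slosh_damping_ratio(amplitude, zeta_linear=0.0005, critical_amplitude=0.05, slope=0.01):
    """Piecewise slosh damping ratio vs. amplitude (all parameters are placeholders).

    Below the critical amplitude the damping ratio equals the constant
    linear-regime value; above it the ratio grows linearly with amplitude.
    """
    if amplitude <= critical_amplitude:
        return zeta_linear
    return zeta_linear + slope * (amplitude - critical_amplitude)

for a in (0.01, 0.05, 0.10, 0.20):   # slosh amplitudes in metres (illustrative)
    print(f"amplitude={a:.2f} m -> damping ratio={slosh_damping_ratio(a):.4f}")
```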
Backward jump continuous-time random walk: An application to market trading
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2010-10-01
The backward jump modification of the continuous-time random walk model or the version of the model driven by the negative feedback was herein derived for spatiotemporal continuum in the context of a share price evolution on a stock exchange. In the frame of the model, we described stochastic evolution of a typical share price on a stock exchange with a moderate liquidity within a high-frequency time scale. The model was validated by satisfactory agreement of the theoretical velocity autocorrelation function with its empirical counterpart obtained for the continuous quotation. This agreement is mainly a result of a sharp backward correlation found and considered in this article. This correlation is a reminiscence of such a bid-ask bounce phenomenon where backward price jump has the same or almost the same length as preceding jump. We suggested that this correlation dominated the dynamics of the stock market with moderate liquidity. Although assumptions of the model were inspired by the market high-frequency empirical data, its potential applications extend beyond the financial market, for instance, to the field covered by the Le Chatelier-Braun principle of contrariness.
Bringing Science to Bear: An Empirical Assessment of the Comprehensive Soldier Fitness Program
ERIC Educational Resources Information Center
Lester, Paul B.; McBride, Sharon; Bliese, Paul D.; Adler, Amy B.
2011-01-01
This article outlines the U.S. Army's effort to empirically validate and assess the Comprehensive Soldier Fitness (CSF) program. The empirical assessment includes four major components. First, the CSF scientific staff is currently conducting a longitudinal study to determine if the Master Resilience Training program and the Comprehensive…
Shin, Dong-Hee; Kim, Won-Yong; Kim, Won-Young
2008-06-01
This study explores attitudinal and behavioral patterns in the use of Cyworld by adopting an expanded Technology Acceptance Model (TAM). A model of Cyworld acceptance is used to examine how various factors modified from the TAM influence acceptance and its antecedents. This model is examined through an empirical study of Cyworld users using structural equation modeling techniques. The model shows reasonably good measurement properties and the constructs are validated. The results not only confirm the model but also reveal general factors applicable to Web 2.0. A set of constructs in the model can be regarded as Web 2.0-specific factors, acting as enhancing factors for attitudes and intention.
Accelerated Aging in Electrolytic Capacitors for Prognostics
NASA Technical Reports Server (NTRS)
Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank
2012-01-01
The focus of this work is the analysis of different degradation phenomena based on thermal-overstress and electrical-overstress accelerated aging systems and the use of accelerated aging techniques for prognostics algorithm development. Results from thermal-overstress and electrical-overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented, focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the methodology under the small-sample-size constraint. The results observed for the RUL estimation are consistent across the validation tests comparing relative accuracy and prediction error. It has been observed that the inaccuracy of the model in representing the change in degradation behavior observed at the end of the test data is consistent throughout the validation tests, indicating the need for a more detailed degradation model or the use of an algorithm that could estimate model parameters on-line. Based on the degradation process observed under different stress intensities with rest periods, the need for more sophisticated degradation models is further supported. The current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
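A compact sketch of the health-state tracking and RUL step described above is shown below: a one-dimensional discrete Kalman filter tracks percentage capacitance loss and the estimate is extrapolated to a failure threshold. The drift rate, threshold, noise variances and synthetic measurements are assumptions for illustration, not values from the cited experiments.

```python
import numpy as np

def track_and_predict_rul(measurements, dt=1.0, drift=0.4, threshold=20.0, q=0.01, r=0.25):
    """Track percentage capacitance loss with a 1-D discrete Kalman filter and
    extrapolate to a failure threshold to estimate remaining useful life (RUL).
    All numerical parameters are illustrative placeholders."""
    x, p = 0.0, 1.0                           # state estimate (loss %) and its variance
    for z in measurements:
        x, p = x + drift * dt, p + q          # predict: loss grows by `drift` per step
        k = p / (p + r)                       # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p   # update with the new measurement
    rul = max(threshold - x, 0.0) / drift     # cycles until threshold at current drift
    return x, rul

obs = np.array([0.5, 1.1, 1.4, 2.2, 2.4, 3.1, 3.6, 4.3])  # synthetic loss (%) per aging cycle
state, rul = track_and_predict_rul(obs)
print(f"Estimated loss: {state:.2f}%, estimated RUL: {rul:.1f} cycles")
```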
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
NASA Astrophysics Data System (ADS)
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, in order to prevent multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method for assessing discriminant validity has emerged: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using both methods. Data from a previous study, involving 429 respondents, were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, convergent, divergent and discriminant validity were established and admissible using the Fornell and Larcker criterion. However, discriminant validity is an issue when employing the HTMT criterion. This shows that the latent variables under study face multicollinearity issues and should be examined in further detail. It also implies that the HTMT criterion is a more stringent measure that can detect a possible lack of discriminance among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
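For readers unfamiliar with the HTMT computation, the sketch below evaluates it for two constructs from an item-level correlation matrix: the mean of the between-construct (heterotrait) correlations divided by the geometric mean of the within-construct (monotrait) correlation means. The four-item toy matrix and the 0.85-0.90 cut-off convention are illustrative, not the study's data.

```python
import numpy as np

def htmt(corr, items_a, items_b):
    """Heterotrait-monotrait ratio of correlations for two latent constructs.

    `corr` is an item-level correlation matrix; `items_a` and `items_b` list the
    indicator indices of each construct. Values above roughly 0.85-0.90 are
    commonly read as a discriminant validity problem.
    """
    corr = np.asarray(corr)
    hetero = np.mean([abs(corr[i, j]) for i in items_a for j in items_b])
    mono_a = np.mean([abs(corr[i, j]) for i in items_a for j in items_a if i < j])
    mono_b = np.mean([abs(corr[i, j]) for i in items_b for j in items_b if i < j])
    return hetero / np.sqrt(mono_a * mono_b)

# Toy example: items 0-1 load on construct A, items 2-3 on construct B.
R = np.array([[1.00, 0.70, 0.45, 0.40],
              [0.70, 1.00, 0.42, 0.38],
              [0.45, 0.42, 1.00, 0.68],
              [0.40, 0.38, 0.68, 1.00]])
print(f"HTMT(A, B) = {htmt(R, [0, 1], [2, 3]):.3f}")
```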
Panagiotopoulou, O.; Wilshin, S. D.; Rayfield, E. J.; Shefelbine, S. J.; Hutchinson, J. R.
2012-01-01
Finite element modelling is well entrenched in comparative vertebrate biomechanics as a tool to assess the mechanical design of skeletal structures and to better comprehend the complex interaction of their form–function relationships. But what makes a reliable subject-specific finite element model? To approach this question, we here present a set of convergence and sensitivity analyses and a validation study as an example, for finite element analysis (FEA) in general, of ways to ensure a reliable model. We detail how choices of element size, type and material properties in FEA influence the results of simulations. We also present an empirical model for estimating heterogeneous material properties throughout an elephant femur (but of broad applicability to FEA). We then use an ex vivo experimental validation test of a cadaveric femur to check our FEA results and find that the heterogeneous model matches the experimental results extremely well, and far better than the homogeneous model. We emphasize how considering heterogeneous material properties in FEA may be critical, so this should become standard practice in comparative FEA studies along with convergence analyses, consideration of element size, type and experimental validation. These steps may be required to obtain accurate models and derive reliable conclusions from them. PMID:21752810
NASA Astrophysics Data System (ADS)
Ruiz-Pérez, Guiomar; Koch, Julian; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix
2017-12-01
Ecohydrological modeling studies in developing regions, such as sub-Saharan Africa, often face the problem of extensive parametric requirements and limited available data. Satellite remote sensing data may be able to fill this gap, but require novel methodologies to exploit their spatio-temporal information so that it can be incorporated into model calibration and validation frameworks. The present study tackles this problem by suggesting an automatic calibration procedure, based on empirical orthogonal functions, for distributed ecohydrological daily models. The procedure is tested with the support of remote sensing data in a data-scarce environment - the upper Ewaso Ngiro river basin in Kenya. In the present application, the TETIS-VEG model is calibrated using only NDVI (Normalized Difference Vegetation Index) data derived from MODIS. The results demonstrate that (1) satellite data of vegetation dynamics can be used to calibrate and validate ecohydrological models in water-controlled and data-scarce regions, (2) the model calibrated using only satellite data is able to reproduce both the spatio-temporal vegetation dynamics and the observed discharge at the outlet and (3) the proposed automatic calibration methodology works satisfactorily and allows for a straightforward incorporation of spatio-temporal data into the calibration and validation framework of a model.
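One generic way to score a simulation against satellite NDVI through empirical orthogonal functions is sketched below: decompose the observed and simulated anomaly fields with an SVD and compare the leading spatial patterns. This is an illustration of the general idea with synthetic arrays; it is not the study's exact objective function or calibration algorithm.

```python
import numpy as np

def leading_eofs(field, n_modes=3):
    """Leading EOFs (spatial patterns) of a (time x space) anomaly field via SVD."""
    anomalies = field - field.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(anomalies, full_matrices=False)
    return vt[:n_modes]

def eof_agreement(observed_ndvi, simulated_ndvi, n_modes=3):
    """Mean absolute pattern correlation between the leading EOFs of observed and
    simulated NDVI; a calibration routine could maximise this score."""
    eo = leading_eofs(observed_ndvi, n_modes)
    es = leading_eofs(simulated_ndvi, n_modes)
    return np.mean([abs(np.corrcoef(eo[k], es[k])[0, 1]) for k in range(n_modes)])

rng = np.random.default_rng(1)
obs = rng.normal(size=(120, 200))                  # 120 time steps x 200 pixels (synthetic)
sim = obs + rng.normal(scale=0.5, size=obs.shape)  # a "model run" correlated with obs
print(f"EOF pattern agreement: {eof_agreement(obs, sim):.2f}")
```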
DOE Office of Scientific and Technical Information (OSTI.GOV)
De la Luz, V.
2016-07-10
Observations of the emission at radio, millimeter, sub-millimeter, and infrared wavelengths in the center of the solar disk validate the self-consistency of semi-empirical models of the chromosphere. Theoretically, these models must also reproduce the emission at the solar limb. In this work, we tested both the VALC and C7 semi-empirical models by computing their emission spectrum in the frequency range from 2 GHz to 10 THz at solar limb altitudes. We calculate the Sun's theoretical radii as well as their limb brightening. Non-local thermodynamic equilibrium was computed for hydrogen, electron density, and H−. In order to solve the radiative transfer equation, a three-dimensional (3D) geometry was employed to determine the ray paths, and Bremsstrahlung, H−, and inverse Bremsstrahlung opacity sources were integrated in the optical depth. We compared the computed solar radii with high-resolution observations at the limb obtained by Clark. We found differences between the observed and computed solar radii of 12,000 km at 20 GHz, 5000 km at 100 GHz, and 1000 km at 3 THz for both semi-empirical models. A difference of 8000 km in the solar radii was found when comparing our results against the heights obtained from Hα observations of spicules-off at the solar limb. We conclude that the solar radii cannot be reproduced by the VALC and C7 semi-empirical models at radio to infrared wavelengths. Therefore, the structures in the high chromosphere provide a better measurement of the solar radii and their limb brightening, as shown in previous investigations.
Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming
2015-06-01
Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons, the Meiyu in May-June and the Typhoon rains in August-September. To predict the amount of Meiyu rainfall is of profound importance to disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models and the factors controlling Taiwan Meiyu variability has eluded climate scientists for decades. Here we investigate the physical processes that are possibly important for leading to significant fluctuation of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at a lead time of 0- (end of April), 1-, and 2-month, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) the tripolar SST tendency in North Atlantic that is associated with North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow an enhanced Philippine Sea anticyclonic anomalies and the anomalous cyclone near the southeastern China in the ensuing summer, which together favor increasing Taiwan Meiyu rainfall. Note that the identified precursors at various lead-times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical empirical model made by these predictors is capable of capturing the Taiwan rainfall variability with a significant cross-validated temporal correlation coefficient skill of 0.75, 0.64, and 0.61 for 1979-2012 at the 0-, 1-, and 2-month lead time, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over the Southeast Asia and other regions.
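The reported skill is a leave-one-out (cross-validated) correlation between observed and predicted seasonal rainfall. A minimal sketch of how such a skill score is computed for a three-predictor regression is shown below; the predictor values are synthetic placeholders, not the SST-tendency indices of the paper.

```python
# Hedged sketch: leave-one-out cross-validated correlation skill for a
# three-predictor physical-empirical regression model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
n_years = 34                                            # e.g. 1979-2012
X = rng.standard_normal((n_years, 3))                   # three precursor indices
y = X @ np.array([0.6, 0.4, 0.3]) + 0.5 * rng.standard_normal(n_years)  # rainfall index

pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
skill = np.corrcoef(y, pred)[0, 1]                      # cross-validated correlation skill
print(f"LOO correlation skill: {skill:.2f}")
```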
Reconstruction of stochastic temporal networks through diffusive arrival times
NASA Astrophysics Data System (ADS)
Li, Xun; Li, Xiang
2017-06-01
Temporal networks have opened a new dimension in the definition and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds, in particular but not exclusively, on the null-model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied to both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks from only moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications.
Drainage investment and wetland loss: an analysis of the national resources inventory data
Douglas, Aaron J.; Johnson, Richard L.
1994-01-01
The United States Soil Conservation Service (SCS) conducts a survey for the purpose of establishing an agricultural land use database. This survey is called the National Resources Inventory (NRI) database. The complex NRI land classification system, in conjunction with the quantitative information gathered by the survey, has numerous applications. The current paper uses the wetland area data gathered by the NRI in 1982 and 1987 to examine empirically the factors that generate wetland loss in the United States. The cross-section regression models listed here use the quantity of wetlands, the stock of drainage capital, the realty value of farmland and drainage costs to explain most of the cross-state variation in wetland loss rates. Wetlands preservation efforts by federal agencies assume that pecuniary economic factors play a decisive role in wetland drainage. The empirical models tested in the present paper validate this assumption.
A Study of the Antecedents and Consequences of Members' Helping Behaviors in Online Community
NASA Astrophysics Data System (ADS)
Chu, Kuo-Ming
Despite the growing popularity of online communities, there is a major gap between practitioners and academicians as to how information and knowledge are shared among members of these groups. However, none of the previous studies have integrated these variables into a more comprehensive framework, so further validation is required. The aim of this paper is to develop a theoretical model that enables us to examine the antecedents and consequences of members' helping behavior in online communities. The moderating effects of sense of community on the relationships between members' helping behaviors and both information sharing and knowledge contribution are also evaluated. A complete model is developed for empirical testing. Using Yahoo's members as the sample for this study, the empirical results suggest that online community members' helping behavior represents a large pool of product know-how. Members appear to be a promising source of innovation capabilities for new product development.
Computational Modeling of Aortic Valvular Stenosis to Assess the Range of Validity of the Gorlin Equation
NASA Astrophysics Data System (ADS)
Okpara, Emanuel; Agarwal, Ramesh; Rifkin, Robert; Wendl, Mike
2003-11-01
It is well known from clinical observations that underestimation errors occur with the use of the Gorlin formula (1) for the calculation of the valve area of the stenotic aortic valve in patients with low cardiac output, that is, in low-flow states. Since 1951, empirical modifications to the Gorlin formula have been proposed in the literature by many researchers. In this paper, we study mild to severe aortic valve stenosis for low to high flow rates by employing a simplified model of the aortic valve. The aortic valve stenosis is modeled by a circular orifice in a flat plate embedded in the cross-section of a rigid tube (aorta). Experimental results are available for this configuration for the validation of the CFD solver FLUENT. The numerical database generated for this model for various degrees of stenosis and flow rates is employed to assess the range of validity of Gorlin's equation. Modifications to the Gorlin formula are suggested to make it valid for all flow rates to determine the valve area for clinical use. (1) R. Gorlin and S. Gorlin, "Hydraulic Formula for Calculation of the Area of Stenotic Mitral Valve, Other Cardiac Valves and Central Circulatory Shunts," Am. Heart Journal, Vol. 41, 1951, pp. 1-29.
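For readers unfamiliar with the formula under study, a minimal sketch of the Gorlin equation in its commonly quoted clinical form follows; the constant and example numbers should be checked against the original 1951 paper or a cardiology reference, and the abstract above specifically concerns corrections needed in low-flow states.

```python
# Hedged sketch: Gorlin valve area = (flow per second of ejection) / (44.3 * C * sqrt(mean gradient)).
import math

def gorlin_valve_area(cardiac_output_ml_min, sep_s_per_beat, hr_beats_min,
                      mean_gradient_mmhg, c=1.0):
    """Valve area in cm^2; c = 1.0 is usually quoted for the aortic valve (0.85 for mitral)."""
    flow_ml_s = cardiac_output_ml_min / (sep_s_per_beat * hr_beats_min)  # systolic flow
    return flow_ml_s / (44.3 * c * math.sqrt(mean_gradient_mmhg))

# Example: CO 5 L/min, systolic ejection period 0.33 s, HR 70, mean gradient 40 mmHg.
print(round(gorlin_valve_area(5000, 0.33, 70, 40), 2), "cm^2")
```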
Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data
Hu, Jianhua; Wang, Peng; Qu, Annie
2014-01-01
Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433
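A simplified illustration of the central idea, approximating the inverse of an empirical correlation matrix with eigenvector-based basis matrices, is sketched below. This is not the authors' penalized estimating-equation procedure; a model-selection step would add a complexity penalty to the discrepancy printed here.

```python
# Hedged sketch: eigenvector basis matrices v_k v_k' built from the empirical
# correlation matrix, and the fit of a truncated approximation to its inverse.
import numpy as np

def inverse_from_basis(R, n_basis):
    """Approximate R^{-1} with the leading eigenvector basis matrices of R."""
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]                       # largest eigenvalues first
    vals, vecs = vals[order], vecs[:, order]
    return sum((1.0 / vals[k]) * np.outer(vecs[:, k], vecs[:, k])
               for k in range(n_basis))

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 6)) @ np.diag([2.0, 1.5, 1.0, 1.0, 0.5, 0.5])
R_emp = np.corrcoef(X, rowvar=False)

for m in (1, 2, 4, 6):
    R_implied = np.linalg.pinv(inverse_from_basis(R_emp, m))
    fit = np.linalg.norm(R_emp - R_implied, "fro")       # discrepancy term of the criterion
    print(f"{m} basis matrices: Frobenius discrepancy {fit:.3f}")
```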
Redefinition and global estimation of basal ecosystem respiration rate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Wenping; Luo, Yiqi; Li, Xianglan
2011-10-13
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR, largely due to the lack of a functional description for BR. In this study, we redefined BR to be the ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data from 79 research sites located at latitudes ranging from ~3°S to ~70°N. Results showed that the mean annual ER rate closely matches the ER rate at the mean annual temperature. Incorporation of site-specific BR into a global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature …
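A minimal sketch of how BR under this redefinition could be estimated from site data follows, assuming a simple exponential (Q10-type) temperature response; the functional form is a common choice and not necessarily the one used in the synthesis above.

```python
# Hedged sketch: basal respiration as fitted ecosystem respiration at the
# site's mean annual temperature, using ER = BR * Q10**((T - T_MAT)/10).
import numpy as np
from scipy.optimize import curve_fit

def er_model(temp_c, br, q10, t_ref):
    return br * q10 ** ((temp_c - t_ref) / 10.0)

rng = np.random.default_rng(3)
temp = rng.uniform(-5, 30, 365)                          # daily air temperature, deg C
t_mat = temp.mean()                                      # mean annual temperature
er_obs = 2.0 * 2.2 ** ((temp - t_mat) / 10.0) * rng.lognormal(0, 0.1, temp.size)

popt, _ = curve_fit(lambda t, br, q10: er_model(t, br, q10, t_mat),
                    temp, er_obs, p0=(1.0, 2.0))
print(f"BR (ER at mean annual temperature): {popt[0]:.2f}, Q10: {popt[1]:.2f}")
```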
Wen, Kuang-Yi; Gustafson, David H; Hawkins, Robert P; Brennan, Patricia F; Dinauer, Susan; Johnson, Pauley R; Siegler, Tracy
2010-01-01
To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements. Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The RIM model with weights was then validated in a prospective study of 25 IHCS implementation cases. Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes. Two of the seven factors, 'organizational motivation' and 'meeting user needs,' were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome. The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study. The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term.
Cox, Brian J; Clara, Ian P; Worobec, Lydia M; Grant, Bridget F
2012-12-01
Individual personality disorders (PD) are grouped into three clusters in the DSM-IV (A, B, and C). There is very little empirical evidence available concerning the validity of this model in the general population. The current study included all 10 of the DSM-IV PD assessed in Wave 1 and Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). Confirmatory factor analysis was used to evaluate three plausible models of the structure of Axis II personality disorders (the current hierarchical DSM-IV three-factor model, in which individual PD are believed to load on their assigned clusters, which in turn load onto a single Axis II factor; a general single-factor model; and three independent factors). Each of these models was tested in the total sample and also separately by gender. The higher-order DSM-IV model demonstrated good fit to the data on a number of goodness-of-fit indices. The results for this model were very similar across genders. A model of PD based on the current DSM-IV hierarchical conceptualization of a higher-order classification scheme received strong empirical support through confirmatory factor analysis using a number of goodness-of-fit indices in a nationally representative sample. Other models involving broad, higher-order personality domains, such as neuroticism, in relation to personality disorders have yet to be tested in epidemiologic surveys and represent an important avenue for future research.
Generalized Beer-Lambert model for near-infrared light propagation in thick biological tissues
NASA Astrophysics Data System (ADS)
Bhatt, Manish; Ayyalasomayajula, Kalyan R.; Yalavarthy, Phaneendra K.
2016-07-01
The attenuation of near-infrared (NIR) light intensity as it propagates in a turbid medium like biological tissue is described by the modified Beer-Lambert law (MBLL). The MBLL is generally used to quantify the changes in tissue chromophore concentrations for NIR spectroscopic data analysis. Even though the MBLL is effective in terms of providing qualitative comparison, it suffers from limited applicability across tissue types and tissue dimensions. In this work, we introduce Lambert-W function-based modeling for light propagation in biological tissues, which is a generalized version of the Beer-Lambert model. The proposed modeling provides a parametrization of tissue properties, which includes two attenuation coefficients μ0 and η. We validated our model against Monte Carlo simulation, which is the gold standard for modeling NIR light propagation in biological tissue. We included numerous human and animal tissues to validate the proposed empirical model, including an inhomogeneous adult human head model. The proposed model, which has a closed (analytical) form, is the first of its kind to provide accurate modeling of NIR light propagation in biological tissues.
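For context, the standard MBLL step that the Lambert-W model generalizes is sketched below; the extinction coefficients and differential pathlength factor (DPF) are placeholder values, and the paper's (μ0, η) parametrization is not reproduced here.

```python
# Hedged sketch: solving the modified Beer-Lambert law,
# delta_OD(lambda) = sum_i epsilon_i(lambda) * delta_c_i * d * DPF,
# for chromophore concentration changes from optical density changes.
import numpy as np

def mbll_delta_concentration(delta_od, extinction, pathlength_cm, dpf):
    E = np.atleast_2d(extinction)                        # (n_wavelengths, n_chromophores)
    rhs = np.asarray(delta_od) / (pathlength_cm * dpf)
    delta_c, *_ = np.linalg.lstsq(E, rhs, rcond=None)
    return delta_c                                       # e.g. [delta HbO2, delta Hb]

# Two wavelengths, two chromophores (illustrative extinction values).
E = np.array([[0.9, 1.8],
              [2.5, 0.8]])
print(mbll_delta_concentration([0.012, 0.018], E, pathlength_cm=3.0, dpf=6.0))
```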
Global model of zenith tropospheric delay proposed based on EOF analysis
NASA Astrophysics Data System (ADS)
Sun, Langlang; Chen, Peng; Wei, Erhu; Li, Qinzheng
2017-07-01
Tropospheric delay is one of the main error budgets in Global Navigation Satellite System (GNSS) measurements. Many empirical correction models have been developed to compensate for this delay, and models which do not require meteorological parameters have received the most attention. This study established a global troposphere zenith total delay (ZTD) model, called Global Empirical Orthogonal Function Troposphere (GEOFT), based on the empirical orthogonal function (EOF; essentially a principal component analysis of the gridded field) and the Global Geodetic Observing System (GGOS) Atmosphere data from 2012 to 2015. The results showed that ZTD variation could be well represented by the characteristics of the EOF base functions Ek and associated coefficients Pk. Here, E1 mainly signifies the equatorial anomaly; E2 represents north-south asymmetry, and E3 and E4 reflect regional variation. Moreover, P1 mainly reflects annual and semiannual variation components; P2 and P3 mainly contain annual variation components, and P4 displays semiannual variation components. We validated the proposed GEOFT model using GGOS ZTD grid data and the tropospheric product of the International GNSS Service (IGS) over the year 2016. The results showed that the GEOFT model has high accuracy, with bias and RMS of -0.3 and 3.9 cm, respectively, with respect to the GGOS ZTD data, and of -0.8 and 4.1 cm, respectively, with respect to the global IGS tropospheric product. The accuracy of GEOFT demonstrates that using the EOF analysis method to characterize ZTD variation is reasonable.
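A minimal sketch of the decomposition step described above is given below; the grid size, record length and choice of four modes are illustrative, and the fitting of annual/semiannual harmonics to the coefficients Pk is only indicated by a comment.

```python
# Hedged sketch: truncating a gridded ZTD anomaly record to a few EOF modes.
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(730)                                       # two years of daily maps
pattern = rng.random(648)                                # fixed spatial pattern (36x18 grid)
ztd = 2.3 + 0.05 * np.sin(2 * np.pi * t / 365.25)[:, None] * pattern \
      + 0.01 * rng.standard_normal((730, 648))           # ZTD in metres
anom = ztd - ztd.mean(axis=0)

u, s, vt = np.linalg.svd(anom, full_matrices=False)
k = 4
E = vt[:k]                                               # spatial base functions E_1..E_4
P = (u * s)[:, :k]                                       # coefficient time series P_1..P_4
# In a GEOFT-style model each P_k would next be fitted with annual and
# semiannual harmonics, so ZTD can be predicted without meteorological input.
explained = np.sum(s[:k] ** 2) / np.sum(s ** 2)
trunc_rmse = np.sqrt(np.mean((anom - P @ E) ** 2))
print(f"variance captured by {k} modes: {explained:.1%}, truncation RMSE: {trunc_rmse:.4f} m")
```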
Prediction of plant lncRNA by ensemble machine learning classifiers.
Simopoulos, Caitlin M A; Weretilnyk, Elizabeth A; Golding, G Brian
2018-05-02
In plants, long non-protein coding RNAs are believed to have essential roles in development and stress responses. However, relative to advances on discerning biological roles for long non-protein coding RNAs in animal systems, this RNA class in plants is largely understudied. With comparatively few validated plant long non-coding RNAs, research on this potentially critical class of RNA is hindered by a lack of appropriate prediction tools and databases. Supervised learning models trained on data sets of mostly non-validated, non-coding transcripts have been previously used to identify this enigmatic RNA class with applications largely focused on animal systems. Our approach uses a training set comprised only of empirically validated long non-protein coding RNAs from plant, animal, and viral sources to predict and rank candidate long non-protein coding gene products for future functional validation. Individual stochastic gradient boosting and random forest classifiers trained on only empirically validated long non-protein coding RNAs were constructed. In order to use the strengths of multiple classifiers, we combined multiple models into a single stacking meta-learner. This ensemble approach benefits from the diversity of several learners to effectively identify putative plant long non-coding RNAs from transcript sequence features. When the predicted genes identified by the ensemble classifier were compared to those listed in GreeNC, an established plant long non-coding RNA database, overlap for predicted genes from Arabidopsis thaliana, Oryza sativa and Eutrema salsugineum ranged from 51 to 83% with the highest agreement in Eutrema salsugineum. Most of the highest ranking predictions from Arabidopsis thaliana were annotated as potential natural antisense genes, pseudogenes, transposable elements, or simply computationally predicted hypothetical protein. Due to the nature of this tool, the model can be updated as new long non-protein coding transcripts are identified and functionally verified. This ensemble classifier is an accurate tool that can be used to rank long non-protein coding RNA predictions for use in conjunction with gene expression studies. Selection of plant transcripts with a high potential for regulatory roles as long non-protein coding RNAs will advance research in the elucidation of long non-protein coding RNA function.
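The stacking pattern described above, combining gradient boosting and random forest base learners under a meta-learner, can be expressed compactly with scikit-learn. The sketch below uses random placeholder features rather than real transcript sequence features.

```python
# Hedged sketch: a stacking meta-learner over gradient boosting and random
# forest classifiers for ranking putative lncRNA candidates.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.random((500, 20))                                # e.g. k-mer / ORF-derived features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)                # 1 = putative lncRNA (toy label)

stack = StackingClassifier(
    estimators=[("gbm", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0))],
    final_estimator=LogisticRegression(),
    stack_method="predict_proba")

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
stack.fit(X_tr, y_tr)
ranked = np.argsort(stack.predict_proba(X_te)[:, 1])[::-1]   # rank candidates by probability
print("held-out accuracy:", round(stack.score(X_te, y_te), 2))
print("top-ranked candidates:", ranked[:5])
```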
NASA Astrophysics Data System (ADS)
Choji, Niri Martha; Sek, Siok Kun
2017-11-01
The purchasing power parity (PPP) theory says that the exchange rate between two nations ought to be equivalent to the ratio of the aggregate price levels of the two nations. For more than a decade, there has been substantial interest in testing the validity of PPP empirically. This paper performs a series of tests to see if PPP is valid for the ASEAN-5 nations for the period 2000-2016 using monthly data. For this purpose, we conducted four different stationarity tests, two cointegration tests (Pedroni and Westerlund), and also estimated a VAR model. The stationarity (unit root) tests reveal that the variables are not stationary in levels but are stationary at first difference. The cointegration test results did not reject the H0 of no cointegration, implying the absence of a long-run association among the variables, and the results of the VAR model did not reveal a strong short-run relationship. Based on the data, we therefore conclude that PPP is not valid in the long or short run for ASEAN-5 during 2000-2016.
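The testing sequence (unit roots, cointegration, then a VAR on differenced series) can be illustrated with statsmodels; the sketch uses synthetic exchange-rate and relative-price series, and the panel tests actually used in the paper (Pedroni, Westerlund) are not part of statsmodels and are not reproduced here.

```python
# Hedged sketch: ADF unit root tests, an Engle-Granger cointegration test, and
# a VAR on first differences for a single country pair.
import numpy as np
from statsmodels.tsa.stattools import adfuller, coint
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)
n = 204                                                  # monthly observations, 2000-2016
rel_price = np.cumsum(rng.standard_normal(n))            # log relative price level, I(1)
exch_rate = np.cumsum(rng.standard_normal(n))            # log nominal exchange rate, I(1)

print("ADF p-value, levels:", round(adfuller(exch_rate)[1], 3))
print("ADF p-value, first differences:", round(adfuller(np.diff(exch_rate))[1], 3))
print("Engle-Granger cointegration p-value:", round(coint(exch_rate, rel_price)[1], 3))

var = VAR(np.column_stack([np.diff(exch_rate), np.diff(rel_price)])).fit(maxlags=6, ic="aic")
print("selected VAR lag order:", var.k_ar)
```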
Monzani, Dario; Steca, Patrizia; Greco, Andrea
2014-02-01
Dispositional optimism is an individual difference promoting psychosocial adjustment and well-being during adolescence. Dispositional optimism was originally defined as a one-dimensional construct; however, empirical evidence suggests two correlated factors in the Life Orientation Test - Revised (LOT-R). The main aim of the study was to evaluate the dimensionality of the LOT-R. This study is the first attempt to identify the best factor structure, comparing congeneric, two correlated-factor, and two orthogonal-factor models in a sample of adolescents. Concurrent validity was also assessed. The results demonstrated the superior fit of the two orthogonal-factor model thus reconciling the one-dimensional definition of dispositional optimism with the bi-dimensionality of the LOT-R. Moreover, the results of correlational analyses proved the concurrent validity of this self-report measure: optimism is moderately related to indices of psychosocial adjustment and well-being. Thus, the LOT-R is a useful, valid, and reliable self-report measure to properly assess optimism in adolescence. Copyright © 2013 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Goulden, T.; Hopkinson, C.
2013-12-01
The quantification of LiDAR sensor measurement uncertainty is important for evaluating the quality of derived DEM products, compiling risk assessment of management decisions based from LiDAR information, and enhancing LiDAR mission planning capabilities. Current quality assurance estimates of LiDAR measurement uncertainty are limited to post-survey empirical assessments or vendor estimates from commercial literature. Empirical evidence can provide valuable information for the performance of the sensor in validated areas; however, it cannot characterize the spatial distribution of measurement uncertainty throughout the extensive coverage of typical LiDAR surveys. Vendor advertised error estimates are often restricted to strict and optimal survey conditions, resulting in idealized values. Numerical modeling of individual pulse uncertainty provides an alternative method for estimating LiDAR measurement uncertainty. LiDAR measurement uncertainty is theoretically assumed to fall into three distinct categories, 1) sensor sub-system errors, 2) terrain influences, and 3) vegetative influences. This research details the procedures for numerical modeling of measurement uncertainty from the sensor sub-system (GPS, IMU, laser scanner, laser ranger) and terrain influences. Results show that errors tend to increase as the laser scan angle, altitude or laser beam incidence angle increase. An experimental survey over a flat and paved runway site, performed with an Optech ALTM 3100 sensor, showed an increase in modeled vertical errors of 5 cm, at a nadir scan orientation, to 8 cm at scan edges; for an aircraft altitude of 1200 m and half scan angle of 15°. In a survey with the same sensor, at a highly sloped glacial basin site absent of vegetation, modeled vertical errors reached over 2 m. Validation of error models within the glacial environment, over three separate flight lines, respectively showed 100%, 85%, and 75% of elevation residuals fell below error predictions. Future work in LiDAR sensor measurement uncertainty must focus on the development of vegetative error models to create more robust error prediction algorithms. To achieve this objective, comprehensive empirical exploratory analysis is recommended to relate vegetative parameters to observed errors.
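The dependence of modeled vertical error on scan angle and flying height can be illustrated with a first-order error propagation sketch; the error-budget terms and magnitudes below are simplified assumptions, not the full sensor sub-system model described above.

```python
# Hedged sketch: per-pulse vertical uncertainty from ranging, angular and GPS
# errors over flat terrain, as a function of scan angle and altitude.
import numpy as np

def vertical_error(altitude_m, scan_angle_deg, sigma_range=0.03,
                   sigma_angle_deg=0.005, sigma_gps_z=0.05):
    theta = np.radians(scan_angle_deg)
    slant_range = altitude_m / np.cos(theta)             # flat-terrain assumption
    range_term = (np.cos(theta) * sigma_range) ** 2      # ranging error projected to z
    angle_term = (slant_range * np.sin(theta) * np.radians(sigma_angle_deg)) ** 2
    return np.sqrt(range_term + angle_term + sigma_gps_z ** 2)

for angle in (0, 5, 10, 15):
    print(angle, "deg:", round(float(vertical_error(1200.0, angle)), 3), "m")
```

As in the survey described above, the modeled vertical error grows from nadir toward the scan edges; terrain slope and vegetation would add further terms.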
Empirical scaling laws for coronal heating
NASA Technical Reports Server (NTRS)
Golub, L.
1983-01-01
The origins and uses of scaling laws in studies of stellar outer atmospheres are reviewed with particular emphasis on the properties of coronal loops. Some evidence is presented for a fundamental structuring of the solar corona and the thermodynamics of scaling laws are discussed. It is found that magnetic field-related scaling laws can be obtained by relating coronal pressure, temperature, and magnetic field strength. Available data validate this method. Some parameters of the theory, however, must be treated as adjustable, and it is considered necessary to examine data from other stars in order to determine the validity of the parameters. Using detailed observational data, the applicability of single loop models is examined.
Alliance ruptures and rupture resolution in cognitive-behavior therapy: a preliminary task analysis.
Aspland, Helen; Llewelyn, Susan; Hardy, Gillian E; Barkham, Michael; Stiles, William
2008-11-01
An initial ideal, rational model of alliance rupture and rupture resolution provided by cognitive-behavioral therapy (CBT) experts was assessed and compared with empirical observations of ruptures and their resolution in two cases of successful CBT. The initial rational model emphasized nondefensive acknowledgment and exploration of the rupture. Results indicated differences between what therapists think they should do to resolve ruptures and what they actually do and suggested that the rational model should be expanded to emphasize client validation and empowerment. Therapists' ability to attend to ruptures emerged as an important clinical skill.
Lichtenberg, Peter A.; Ocepek-Welikson, Katja; Ficker, Lisa J.; Gross, Evan; Rahman-Filipiak, Analise; Teresi, Jeanne A.
2017-01-01
Objectives The objectives of this study were threefold: (1) to empirically test the conceptual model proposed by the Lichtenberg Financial Decision Rating Scale (LFDRS); (2) to examine the psychometric properties of the LFDRS contextual factors in financial decision-making by investigating both the reliability and convergent validity of the subscales and total scale, and (3) extending previous work on the scale through the collection of normative data on financial decision-making. Methods A convenience sample of 200 independently functioning, community-dwelling older adults underwent cognitive and financial management testing and were interviewed using the LFDRS. Confirmatory factor analysis, internal consistency measures, and hierarchical regression were used in a sample of 200 community-dwelling older adults, all of whom were making or had recently made a significant financial decision. Results Results confirmed the scale’s reliability and supported the conceptual model. Convergent validity analyses indicate that as hypothesized, cognition is a significant predictor of risk scores. Financial management scores, however, were not predictive of decision-making risk scores. Conclusions The psychometric properties of the LFDRS support the scale’s use as proposed in Lichtenberg et al. (2015). Clinical Implications The LFDRS instructions and scale are provided for clinicians to use in financial capacity assessments. PMID:29077531
Sun, Xiaoling; Kaur, Jasleen; Milojević, Staša; Flammini, Alessandro; Menczer, Filippo
2013-01-01
The birth and decline of disciplines are critical to science and society. How do scientific disciplines emerge? No quantitative model to date allows us to validate competing theories on the different roles of endogenous processes, such as social collaborations, and exogenous events, such as scientific discoveries. Here we propose an agent-based model in which the evolution of disciplines is guided mainly by social interactions among agents representing scientists. Disciplines emerge from splitting and merging of social communities in a collaboration network. We find that this social model can account for a number of stylized facts about the relationships between disciplines, scholars, and publications. These results provide strong quantitative support for the key role of social interactions in shaping the dynamics of science. While several "science of science" theories exist, this is the first account for the emergence of disciplines that is validated on the basis of empirical data.
Krueger, Robert F; Tackett, Jennifer L; MacDonald, Angus
2016-11-01
Traditionally, psychopathology has been conceptualized in terms of polythetic categories derived from committee deliberations and enshrined in authoritative psychiatric nosologies-most notably the Diagnostic and Statistical Manual of Mental Disorders (DSM; American Psychiatric Association [APA], 2013). As the limitations of this form of classification have become evident, empirical data have been increasingly relied upon to investigate the structure of psychopathology. These efforts have borne fruit in terms of an increasingly consistent set of psychopathological constructs closely connected with similar personality constructs. However, the work of validating these constructs using convergent sources of data is an ongoing enterprise. This special section collects several new efforts to use structural approaches to study the validity of this empirically based organizational scheme for psychopathology. Inasmuch as a structural approach reflects the natural organization of psychopathology, it has great potential to facilitate comprehensive organization of information on the correlates of psychopathology, providing evidence for the convergent and discriminant validity of an empirical approach to classification. Here, we highlight several themes that emerge from this burgeoning literature. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pan, Wenxiao; Galvin, Janine; Huang, Wei Ling; ...
2018-03-25
In this paper we aim to develop a validated device-scale CFD model that can predict quantitatively both the hydrodynamics and the CO2 capture efficiency of an amine-based solvent absorber column with random Pall ring packing. A Eulerian porous-media approach and a two-fluid model were employed, in which the momentum and mass transfer equations were closed by literature-based empirical closure models. We proposed a hierarchical approach for calibrating the parameters in the closure models to make them accurate for the packed column. Specifically, a parameter for momentum transfer in the closure was first calibrated based on data from a single experiment. With this calibrated parameter, a parameter in the closure for mass transfer was next calibrated under a single operating condition. Last, the closure of the wetting area was calibrated for each gas velocity at three different liquid flow rates. For each calibration, cross validations were pursued using the experimental data under operating conditions different from those used for calibration. This hierarchical approach can be generally applied to develop validated device-scale CFD models for different absorption columns.
A New 1DVAR Retrieval for AMSR2 and GMI: Validation and Sensitivities
NASA Astrophysics Data System (ADS)
Duncan, D.; Kummerow, C. D.
2015-12-01
A new non-raining retrieval has been developed for microwave imagers and applied to the GMI and AMSR2 sensors. With the Community Radiative Transfer Model (CRTM) as the forward model for the physical retrieval, a 1-dimensional variational method finds the atmospheric state which minimizes the difference between observed and simulated brightness temperatures. A key innovation of the algorithm development is a method to calculate the sensor error covariance matrix that is specific to the forward model employed and includes off-diagonal elements, allowing the algorithm to handle various forward models and sensors with little cross-talk. The water vapor profile is resolved by way of empirical orthogonal functions (EOFs) and then summed to get total precipitable water (TPW). Validation of retrieved 10m wind speed, TPW, and sea surface temperature (SST) is performed via comparison with buoys and radiosondes as well as global models and other remotely sensed products. In addition to the validation, sensitivity experiments investigate the impact of ancillary data on the under-constrained retrieval, a concern for climate data records that strive to be independent of model biases. The introduction of model analysis data is found to aid the algorithm most at high frequency channels and affect TPW retrievals, whereas wind and cloud water retrievals show little effect from ingesting further ancillary data.
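The variational step described above minimizes a standard optimal-estimation cost function. A minimal sketch with a linear toy forward model standing in for CRTM is shown below; the three-element state vector and the covariance values are illustrative assumptions.

```python
# Hedged sketch: 1DVAR cost function
# J(x) = (x - xa)' Sa^-1 (x - xa) + (y - F(x))' Se^-1 (y - F(x)),
# minimized numerically for a toy linear forward model.
import numpy as np
from scipy.optimize import minimize

K = np.array([[1.0, 0.2, 0.1],                           # toy Jacobian of the forward model
              [0.3, 1.0, 0.0],
              [0.1, 0.4, 0.8]])
forward = lambda x: K @ x                                 # stand-in for CRTM

x_a = np.array([30.0, 7.0, 290.0])                        # prior state (e.g. TPW, wind, SST)
S_a_inv = np.linalg.inv(np.diag([25.0, 4.0, 2.0]))
S_e_inv = np.linalg.inv(np.diag([0.5, 0.5, 0.5]))         # sensor + forward-model error covariance
y_obs = forward(np.array([32.0, 6.0, 291.0])) + 0.1       # simulated observation

def cost(x):
    dx, dy = x - x_a, y_obs - forward(x)
    return dx @ S_a_inv @ dx + dy @ S_e_inv @ dy

x_hat = minimize(cost, x_a).x
print("retrieved state:", np.round(x_hat, 2))
```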
NASA Astrophysics Data System (ADS)
Yusliana Ekawati, Elvin
2017-01-01
This study aimed to produce a model for assessing scientific attitudes through observation in physics learning based on the scientific approach (a case study of the dynamic fluid topic in high school). The instrument development in this study adapted the Plomp model; the procedure includes the initial investigation, design, construction, testing, evaluation and revision. Testing was done in Surakarta, and the data obtained were analyzed using the Aiken formula to determine the content validity of the instrument, Cronbach's alpha to determine the reliability of the instrument, and construct validity using confirmatory factor analysis with the LISREL 8.50 program. The results of this research were a conceptual model, instruments and guidelines for assessing scientific attitudes by observation. The constructs assessed by the instruments include components of curiosity, objectivity, suspended judgment, open-mindedness, honesty and perseverance. The construct validity of the instruments was adequate (factor loadings > 0.3). The reliability of the model was good, with an alpha value of 0.899 (> 0.7). The tests showed that the theoretical model fits the empirical data, with p-value 0.315 (≥ 0.05) and RMSEA 0.027 (≤ 0.08).
Evaluation of the US DOE's conceptual model of hydrothermal activity at Yucca Mountain, Nevada
NASA Astrophysics Data System (ADS)
Dublyansky, Y. V.
2014-08-01
A unique conceptual model describing the conductive heating of rocks in the thick unsaturated zone of Yucca Mountain, Nevada by a silicic pluton emplaced several kilometers away is accepted by the US Department of Energy (DOE) as an explanation of the elevated depositional temperatures measured in fluid inclusions in secondary fluorite and calcite. Acceptance of this model allowed the DOE to keep from considering hydrothermal activity in the performance assessment of the proposed high-level nuclear waste disposal facility. The evaluation presented in this paper shows that no computational modeling results have yet produced a satisfactory match with the empirical benchmark data, specifically with age and fluid inclusion data that indicate high temperatures (up to ca. 80 °C) in the unsaturated zone of Yucca Mountain. Auxiliary sub-models complementing the DOE model, as well as observations at a natural analog site, have also been evaluated. Summarily, the model cannot be considered as validated. Due to the lack of validation, the reliance on this model must be discontinued and the appropriateness of decisions which rely on this model must be re-evaluated.
Defying Intuition: Demonstrating the Importance of the Empirical Technique.
ERIC Educational Resources Information Center
Kohn, Art
1992-01-01
Describes a classroom activity featuring a simple stay-switch probability game. Contends that the exercise helps students see the importance of empirically validating beliefs. Includes full instructions for conducting and discussing the exercise. (CFR)
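The classroom exercise can be checked empirically in a few lines of simulation, assuming the "stay-switch" game follows the classic three-door (Monty Hall) formulation; the sketch below makes the counter-intuitive 1/3 vs 2/3 split visible.

```python
# Hedged sketch: Monte Carlo comparison of stay vs. switch strategies.
import random

def play(switch, trials=100_000):
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a non-chosen, non-prize door; switching means taking
        # the remaining unopened door, which wins exactly when the first pick was wrong.
        if switch:
            wins += pick != prize
        else:
            wins += pick == prize
    return wins / trials

print("stay  :", play(switch=False))
print("switch:", play(switch=True))
```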
NASA Technical Reports Server (NTRS)
Gallagher, Dennis L.; Craven, Paul D.; Comfort, Richard H.
1999-01-01
Over 40 years of ground and spacecraft plasmaspheric measurements have resulted in many statistical descriptions of plasmaspheric properties. In some cases, these properties have been represented as analytical descriptions that are valid for specific regions or conditions. For the most part, what has not been done is to extend regional empirical descriptions or models to the plasmasphere as a whole. In contrast, many related investigations depend on the use of representative plasmaspheric conditions throughout the inner magnetosphere. Wave propagation, involving the transport of energy through the magnetosphere, is strongly affected by thermal plasma density and its composition. Ring current collisional and wave-particle losses also strongly depend on these quantities. The plasmasphere also plays a secondary role in influencing radio signals from the Global Positioning System satellites. The Global Core Plasma Model (GCPM) is an attempt to assimilate previous empirical evidence and regional models for plasmaspheric density into a continuous, smooth model of thermal plasma density in the inner magnetosphere. In that spirit, the International Reference Ionosphere is currently used to complete the low-altitude description of density and composition in the model. The models and measurements on which the GCPM is currently based, and its relationship to the IRI, will be discussed.
Empirical membrane lifetime model for heavy duty fuel cell systems
NASA Astrophysics Data System (ADS)
Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik
2016-12-01
Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
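The combination described above, inverse power law and exponential stress relationships inside a log-linear Weibull life-stress distribution, can be sketched as follows; the coefficients, reference conditions and the set of stresses are illustrative assumptions, not the fitted values of the membrane lifetime model.

```python
# Hedged sketch: a Weibull life-stress model with an inverse power law in cell
# voltage and an exponential (Arrhenius-like) term in temperature.
import numpy as np

def characteristic_life_h(voltage_v, temp_k, a0=7.3, n_v=6.0, b_t=4000.0):
    """eta = exp(a0) * V^(-n_v) * exp(b_t/T - b_t/T_ref), with T_ref = 353 K."""
    return np.exp(a0) * voltage_v ** (-n_v) * np.exp(b_t / temp_k - b_t / 353.0)

def reliability(t_h, voltage_v, temp_k, beta=1.8):
    """Weibull survival probability at time t under constant stress."""
    eta = characteristic_life_h(voltage_v, temp_k)
    return np.exp(-(t_h / eta) ** beta)

# Accelerated condition vs. a milder use-level condition.
print("eta, accelerated (0.9 V, 90 C):", round(float(characteristic_life_h(0.9, 363.0)), 0), "h")
print("eta, use level   (0.7 V, 65 C):", round(float(characteristic_life_h(0.7, 338.0)), 0), "h")
print("R(17,500 h) at use level:", round(float(reliability(17500, 0.7, 338.0)), 3))
```

Extrapolation from accelerated tests to field conditions, as in the abstract, amounts to evaluating such a fitted life-stress relationship at the use-level stresses.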
Ruan, Bin; Mok, Magdalena Mo Ching; Edginton, Christopher R; Chin, Ming Kai
2012-01-01
This article describes the development and validation of the Core Competencies Scale (CCS) using Bok's (2006) competency framework for undergraduate education. The framework included: communication, critical thinking, character development, citizenship, diversity, global understanding, widening of interest, and career and vocational development. The sample comprised 70 college and university students. Results of analysis using Rasch rating scale modelling showed strong empirical evidence for the validity of the measures in the content, structure, interpretation, generalizability, and response options of the CCS. The implication of having developed valid and dependable Rasch-based measures for gauging the value added by college and university education is that the feedback generated from the CCS will enable evidence-based decision and policy making to be implemented and strategized. Further, program effectiveness can be measured, and accountability for the achievement of program objectives can thus be demonstrated.
Cross-validating a bidimensional mathematics anxiety scale.
Haiyan Bai
2011-03-01
The psychometric properties of a 14-item bidimensional Mathematics Anxiety Scale-Revised (MAS-R) were empirically cross-validated with two independent samples consisting of 647 secondary school students. An exploratory factor analysis on the scale yielded strong construct validity with a clear two-factor structure. The results from a confirmatory factor analysis indicated an excellent model-fit (χ(2) = 98.32, df = 62; normed fit index = .92, comparative fit index = .97; root mean square error of approximation = .04). The internal consistency (.85), test-retest reliability (.71), interfactor correlation (.26, p < .001), and positive discrimination power indicated that MAS-R is a psychometrically reliable and valid instrument for measuring mathematics anxiety. Math anxiety, as measured by MAS-R, correlated negatively with student achievement scores (r = -.38), suggesting that MAS-R may be a useful tool for classroom teachers and other educational personnel tasked with identifying students at risk of reduced math achievement because of anxiety.
ERIC Educational Resources Information Center
Sritanyarat, Dawisa; Russ-Eft, Darlene
2016-01-01
This study proposed empirical indicators which can be validated and adopted in higher education institutions to evaluate the quality of teaching and learning, and to serve as evaluation criteria for human resource management and development in Thai higher education institutions. The main purpose of this study was to develop empirical indicators of a…
Self-determination, smoking, diet and health.
Williams, Geoffrey C; Minicucci, Daryl S; Kouides, Ruth W; Levesque, Chantal S; Chirkov, Valery I; Ryan, Richard M; Deci, Edward L
2002-10-01
A Clinical Trial will test (1) a Self-Determination Theory (SDT) model of maintained smoking cessation and diet improvement, and (2) an SDT intervention, relative to usual care, for facilitating maintained behavior change and decreasing depressive symptoms for those who quit smoking. SDT is the only empirically derived theory which emphasizes patient autonomy and has a validated measure for each of its constructs, and this is the first trial to evaluate an SDT intervention. Adult smokers will be stratified for whether they are at National Cholesterol Education Program (1996) recommended goal for low-density lipoprotein cholesterol (LDL-C). Those with elevated LDL-C will be studied for diet improvement as well as smoking cessation. Six-month interventions involve a behavior-change counselor using principles of SDT to facilitate autonomous motivation and perceived competence for healthier behaving. Cotinine-validated smoking cessation and LDL-C-validated dietary recall of reduced fat intake, as well as depressive symptoms, will be assessed at 6 and 18 months. Structural equation modeling will test the model for both behaviors within the intervention and usual-care conditions.
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
A business model analysis of telecardiology service.
Lin, Shu-Hsia; Liu, Jorn-Hon; Wei, Jen; Yin, Wei-Hsian; Chen, Hung-Hsin; Chiu, Wen-Ta
2010-12-01
Telecare has become an increasingly common medical service in recent years. However, a new service must be close to the market and be market-driven to have a high likelihood of success. This article analyzes the business model of a telecardiology service managed by a general hospital. The methodology of the article is as follows: (1) it first describes the elements of the service based on the ontology of the business model, (2) it then transfers these elements into the choices of the business model's dynamic loops and examines their validity, and (3) it finally provides an empirical financial analysis of the service to assess its profit-making potential.
NASA Astrophysics Data System (ADS)
Shanmugam, Palanisamy; Varunan, Theenathayalan; Nagendra Jaiganesh, S. N.; Sahay, Arvind; Chauhan, Prakash
2016-06-01
Prediction of the curve of the absorption coefficient of colored dissolved organic matter (CDOM) and differentiation between marine and terrestrially derived CDOM pools in coastal environments are hampered by a high degree of variability in the composition and concentration of CDOM, uncertainties in retrieved remote sensing reflectance and the weak signal-to-noise ratio of space-borne instruments. In the present study, a hybrid model is presented along with empirical methods to remotely determine the amount and type of CDOM in coastal and inland water environments. A large set of in-situ data collected on several oceanographic cruises and field campaigns from different regional waters was used to develop empirical methods for studying the distribution and dynamics of CDOM, dissolved organic carbon (DOC) and salinity. Our validation analyses demonstrated that the hybrid model is a better descriptor of CDOM absorption spectra compared to the existing models. Additional spectral slope parameters included in the present model to differentiate between terrestrially derived and marine CDOM pools make a substantial improvement over those existing models. Empirical algorithms to derive CDOM, DOC and salinity from remote sensing reflectance data demonstrated success in retrieval of these products with significantly low mean relative percent differences from large in-situ measurements. The performance of these algorithms was further assessed using three hyperspectral HICO images acquired simultaneously with our field measurements in productive coastal and lagoon waters on the southeast part of India. The validation match-ups of CDOM and salinity showed good agreement between HICO retrievals and field observations. Further analyses of these data showed significant temporal changes in CDOM and phytoplankton absorption coefficients with a distinct phase shift between these two products. Healthy phytoplankton cells and macrophytes were recognized to directly contribute to the autochthonous production of colored humic-like substances in variable amounts within the lagoon system, despite CDOM content being partly derived through river run-off and wetland discharges as well as from conservative mixing of different water masses. Spatial and temporal maps of CDOM, DOC and salinity products provided an interesting insight into these CDOM dynamics and conservative behavior within the lagoon and its extension in coastal and offshore waters of the Bay of Bengal. The hybrid model and empirical algorithms presented here can be useful to assess CDOM, DOC and salinity fields and their changes in response to increasing runoff of nutrient pollution, anthropogenic activities, hydrographic variations and climate oscillations.
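For reference, the widely used single-exponential CDOM absorption model, which the hybrid model above extends with additional slope parameters to separate terrestrial and marine pools, is sketched below with illustrative values.

```python
# Hedged sketch: a_CDOM(lambda) = a_CDOM(lambda_0) * exp(-S * (lambda - lambda_0)),
# the standard single-exponential CDOM absorption spectrum.
import numpy as np

def a_cdom(wavelength_nm, a_ref, slope_per_nm=0.018, ref_nm=440.0):
    """Spectral CDOM absorption (1/m) from a reference absorption and slope S."""
    return a_ref * np.exp(-slope_per_nm * (np.asarray(wavelength_nm) - ref_nm))

wavelengths = np.arange(400, 701, 50)
print(dict(zip(wavelengths.tolist(), np.round(a_cdom(wavelengths, a_ref=0.5), 3))))
```

Steeper or shallower fitted slopes S are one of the spectral signatures commonly used to distinguish terrestrially derived from marine CDOM.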
NASA Astrophysics Data System (ADS)
Kim, G.; Che, I. Y.
2017-12-01
We evaluated relationships among source parameters of underground nuclear tests in the northern Korean Peninsula using regional seismic data. Dense global and regional seismic networks were incorporated to measure locations and origin times precisely. Location analyses show that the distances among the locations are tiny on a regional scale. These tiny location differences validate a linear model assumption. We estimated source spectral ratios by excluding path effects based on spectral ratios of the observed seismograms. We estimated empirical relationships among depths of burial and yields based on theoretical source models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beres, W.; Koul, A.K.
1994-09-01
Stress intensity factors for thru-thickness and thumb-nail cracks in the double edge notch specimens, containing two different notch radius (R) to specimen width (W) ratios (R/W = 1/8 and 1/16), are calculated through finite element analysis. The finite element results are compared with predictions based on existing empirical models for SIF calculations. The effects of a change in R/W ratio on SIF of thru-thickness and thumb-nail cracks are also discussed. 34 refs.
Preliminary Development and Validation of the Mindful Student Questionnaire
ERIC Educational Resources Information Center
Renshaw, Tyler L.
2017-01-01
Research validating mindfulness-based interventions with youths and in schools is growing, yet research validating measures of youths' mindfulness in schools has received far less empirical attention. The present study makes the case for and reports on the preliminary development and validation of a new, 15-item, multidimensional, self-report…
Magnetopause Standoff Position Changes and Geosynchronous Orbit Crossings: Models and Observations
NASA Astrophysics Data System (ADS)
Collado-Vega, Y. M.; Rastaetter, L.; Sibeck, D. G.
2017-12-01
The Earth's magnetopause is the boundary that largely separates the solar wind from the Earth's magnetosphere. Its location has been studied and estimated via simulation models, observational data and empirical models. This research aims to study the changes of the magnetopause standoff location under different solar wind conditions using a combination of all of these methods. We will use the Run-On-Request capabilities within the MHD models available from the Community Coordinated Modeling Center (CCMC) at NASA Goddard Space Flight Center, specifically the BATS-R-US (SWMF), OpenGGCM, LFM and GUMICS models. The magnetopause standoff position prediction and response time to solar wind changes will then be compared to results from available empirical models (e.g. Shue et al. 1998), and to magnetopause crossing observations from the THEMIS, Cluster, Geotail and MMS missions. We will also use times of extreme solar wind conditions during which magnetopause crossings have been observed by the GOES satellites. Rigorous analysis and comparison of observations and empirical models is critical in determining magnetosphere dynamics for model validation. This research also goes hand in hand with the efforts of the working group at the CCMC/LWS International Forum for Space Weather Capabilities Assessment workshop that aims to analyze different events to define metrics for model-data comparison. Preliminary results of this particular research show that there are some discrepancies between the MHD models' standoff positions of the dayside magnetopause for the same solar wind conditions, which include an increase in solar wind dynamic pressure and a step function in the IMF Bz component. In cases of nominal solar wind conditions, the models mostly agree with the observational data from the different satellite missions.
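The empirical baseline cited above (Shue et al. 1998) is usually quoted in the closed form sketched below; the coefficients are reproduced from memory and should be verified against the original paper before quantitative use.

```python
# Hedged sketch: Shue et al. (1998)-style magnetopause model, giving the
# subsolar standoff distance r0 (Earth radii) from IMF Bz (nT) and solar wind
# dynamic pressure Dp (nPa), plus the flared boundary shape r(theta).
import numpy as np

def shue1998_standoff(bz_nt, dp_npa):
    return (10.22 + 1.29 * np.tanh(0.184 * (bz_nt + 8.14))) * dp_npa ** (-1.0 / 6.6)

def shue1998_boundary(bz_nt, dp_npa, theta_rad):
    """Magnetopause radius at angle theta from the Earth-Sun line."""
    r0 = shue1998_standoff(bz_nt, dp_npa)
    alpha = (0.58 - 0.007 * bz_nt) * (1.0 + 0.024 * np.log(dp_npa))
    return r0 * (2.0 / (1.0 + np.cos(theta_rad))) ** alpha

# Nominal vs. storm-time solar wind: the storm case pushes r0 inside
# geosynchronous orbit (6.6 Earth radii), i.e. a geosynchronous crossing.
print(round(float(shue1998_standoff(0.0, 2.0)), 2), "Re (nominal)")
print(round(float(shue1998_standoff(-15.0, 20.0)), 2), "Re (extreme)")
```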
Imhoff, Roland; Lange, Jens; Germar, Markus
2018-02-22
Spatial cueing paradigms are popular tools to assess human attention to emotional stimuli, but different variants of these paradigms differ in what participants' primary task is. In one variant, participants indicate the location of the target (location task), whereas in the other they indicate the shape of the target (identification task). In the present paper we test the idea that although these two variants produce seemingly comparable cue validity effects on response times, they rest on different underlying processes. Across four studies (total N = 397; two in the supplement) using both variants and manipulating the motivational relevance of cue content, diffusion model analyses revealed that cue validity effects in location tasks are primarily driven by response biases, whereas the same effect rests on delay due to attention to the cue in identification tasks. Based on this, we predict and empirically support that a symmetrical distribution of valid and invalid cues would reduce cue validity effects in location tasks to a greater extent than in identification tasks. Across all variants of the task, we fail to replicate the effect of greater cue validity effects for arousing (vs. neutral) stimuli. We discuss the implications of these findings for best practice in spatial cueing research.
Modeling Addictive Consumption as an Infectious Disease*
Alamar, Benjamin; Glantz, Stanton A.
2011-01-01
The dominant model of addictive consumption in economics is the theory of rational addiction. The addict in this model chooses how much to consume based upon their level of addiction (past consumption), the current benefits and all future costs. Several empirical studies of cigarette sales and price data have found a correlation between current consumption and future prices and consumption. These studies have argued that the correlation validates the rational addiction model and invalidates any model in which future consumption is not considered. An alternative to the rational addiction model is one in which addiction spreads through a population as if it were an infectious disease, as supported by the large body of empirical research on addictive behaviors. In this model an individual's probability of becoming addicted to a substance is linked to the behavior of their parents, friends and society. In the infectious disease model current consumption is based only on the level of addiction and current costs. Price and consumption data from a simulation of the infectious disease model showed a qualitative match to the results of the rational addiction model. The infectious disease model can explain all of the theoretical results of the rational addiction model, with the addition of explaining initial consumption of the addictive good. PMID:21339848
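The contrast drawn above lends itself to a toy simulation. The following sketch is illustrative only: the parameters (beta, quit_rate, alpha, gamma) and the linear consumption rule are assumptions, not the calibration used by Alamar and Glantz; it merely encodes the idea that new addictions depend on prevalence while consumption depends only on addiction level and current price.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(prices, n=10_000, beta=0.3, quit_rate=0.05, alpha=0.8, gamma=0.5):
    """Toy infectious-disease model of addictive consumption (illustrative)."""
    addicted = np.zeros(n, dtype=bool)
    addicted[: n // 100] = True                      # small seed population
    addiction_level = addicted.astype(float)         # proxy for past consumption
    consumption = []
    for p in prices:
        prevalence = addicted.mean()
        newly = (~addicted) & (rng.random(n) < beta * prevalence)   # social contagion
        quits = addicted & (rng.random(n) < quit_rate)
        addicted = (addicted | newly) & ~quits
        addiction_level = 0.9 * addiction_level + addicted
        # Consumption depends only on addiction level and the current price.
        consumption.append((alpha * addiction_level[addicted] - gamma * p).clip(min=0).sum())
    return np.array(consumption)

print(simulate(prices=np.linspace(1.0, 2.0, 20)))
```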
Akram, Waqas; Hussein, Maryam S E; Ahmad, Sohail; Mamat, Mohd N; Ismail, Nahlah E
2015-10-01
There is no instrument that collectively assesses the knowledge, attitude and perceived practice of asthma among community pharmacists. Therefore, this study aimed to validate an instrument measuring the knowledge, attitude and perceived practice of asthma among community pharmacists by producing empirical evidence of the validity and reliability of its items using the Rasch model (Bond & Fox software®) for dichotomous and polytomous data. This baseline study recruited 33 community pharmacists from Penang, Malaysia. The results showed that all PTMEA Corr values were positive, indicating that each item was able to distinguish between respondents of differing ability. Based on the infit and outfit MNSQ range (0.60-1.40), 2 of the 55 items in the instrument were suggested for removal. The findings indicated that the instrument fitted the Rasch measurement model and showed acceptable reliability values of 0.88, 0.83 and 0.79 for knowledge, attitude and perceived practice, respectively.
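A minimal sketch of the kind of item screening described above, assuming person abilities and item difficulties have already been estimated (as Rasch software such as Bond & Fox's would do); only the dichotomous case is shown, and the 0.60-1.40 mean-square acceptance range follows the abstract.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(correct) from ability theta and difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))

def item_fit(responses, theta, b, lo=0.60, hi=1.40):
    """Flag items whose infit/outfit mean-square falls outside [lo, hi]."""
    p = rasch_prob(theta, b)
    resid_sq = (responses - p) ** 2
    var = p * (1 - p)
    infit = resid_sq.sum(axis=0) / var.sum(axis=0)     # information-weighted MNSQ
    outfit = (resid_sq / var).mean(axis=0)             # unweighted MNSQ
    keep = (infit >= lo) & (infit <= hi) & (outfit >= lo) & (outfit <= hi)
    return infit, outfit, keep

rng = np.random.default_rng(1)
theta = rng.normal(size=33)            # 33 respondents, as in the study
b = rng.normal(size=10)                # 10 illustrative items
responses = (rng.random((33, 10)) < rasch_prob(theta, b)).astype(float)
print(item_fit(responses, theta, b)[2])
```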
Multilevel corporate environmental responsibility.
Karassin, Orr; Bar-Haim, Aviad
2016-12-01
The multilevel empirical study of the antecedents of corporate social responsibility (CSR) has been identified as "the first knowledge gap" in CSR research. Based on an extensive literature review, the present study outlines a conceptual multilevel model of CSR, then designs and empirically validates an operational multilevel model of the principal driving factors affecting corporate environmental responsibility (CER), as a measure of CSR. Both conceptual and operational models incorporate three levels of analysis: institutional, organizational, and individual. The multilevel nature of the design allows for the assessment of the relative importance of the levels and of their components in the achievement of CER. Unweighted least squares (ULS) regression analysis reveals that the institutional-level variables have medium relationships with CER, some variables having a negative effect. The organizational level is revealed as having strong and positive significant relationships with CER, with organizational culture and managers' attitudes and behaviors as significant driving forces. The study demonstrates the importance of multilevel analysis in improving the understanding of CSR drivers, relative to single level models, even if the significance of specific drivers and levels may vary by context. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kinetic rate constant prediction supports the conformational selection mechanism of protein binding.
Moal, Iain H; Bates, Paul A
2012-01-01
The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition, and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state-of-the-art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry near to that of the bound state prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies.
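As an illustration of the general feature-selection-plus-regression workflow described above (not the authors' specific algorithm), the sketch below fits sparse linear models for log association and dissociation rates on synthetic descriptors and combines the predictions into a binding-affinity estimate.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

# Random "molecular descriptors" stand in for the real feature set; LassoCV
# stands in for the paper's feature-selection and regression algorithm.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                       # 50 candidate descriptors
log_kon = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + 0.2 * rng.normal(size=200)
log_koff = X[:, 3:6] @ np.array([0.8, 0.4, -0.6]) + 0.2 * rng.normal(size=200)

Xtr, Xte, kon_tr, kon_te, koff_tr, koff_te = train_test_split(
    X, log_kon, log_koff, test_size=0.3, random_state=0)

kon_model = LassoCV(cv=5).fit(Xtr, kon_tr)           # sparse model for association
koff_model = LassoCV(cv=5).fit(Xtr, koff_tr)         # sparse model for dissociation

# Predicted rate constants combine into an affinity estimate: log Kd = log koff - log kon.
log_kd_pred = koff_model.predict(Xte) - kon_model.predict(Xte)
print("held-out R^2 (kon):", round(kon_model.score(Xte, kon_te), 2))
print("first predicted log Kd values:", log_kd_pred[:3])
```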
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
Seismic facies analysis based on self-organizing map and empirical mode decomposition
NASA Astrophysics Data System (ADS)
Du, Hao-kun; Cao, Jun-xing; Xue, Ya-juan; Wang, Xing-jian
2015-01-01
Seismic facies analysis plays an important role in seismic interpretation and reservoir model building by offering an effective way to identify changes in geofacies between wells. The selection of input seismic attributes and their time window has an obvious effect on the validity of the classification and requires iterative experimentation and prior knowledge. In general, clustering is sensitive to noise when the waveform serves as the input data, especially with a narrow window. To overcome this limitation, the Empirical Mode Decomposition (EMD) method is introduced into waveform classification based on the SOM. We first de-noise the seismic data using EMD and then cluster the data using a 1D grid SOM. The main advantages of this method are resolution enhancement and noise reduction. 3D seismic data from the western Sichuan basin, China, are collected for validation. The application results show that seismic facies analysis can be improved and can better support interpretation. Its strong tolerance to noise makes the proposed method a better seismic facies analysis tool than the classical 1D grid SOM method, especially for waveform clustering with a narrow window.
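A minimal 1D-grid SOM for waveform clustering is sketched below; the EMD de-noising stage is omitted and the input traces are random placeholders, so this illustrates only the clustering step of the workflow described above.

```python
import numpy as np

def train_1d_som(waveforms, n_nodes=8, n_iter=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1D-grid self-organizing map for waveform clustering (sketch)."""
    rng = np.random.default_rng(seed)
    n, m = waveforms.shape
    nodes = waveforms[rng.choice(n, n_nodes, replace=False)].astype(float)
    grid = np.arange(n_nodes)
    for t in range(n_iter):
        frac = t / n_iter
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = waveforms[rng.integers(n)]
        bmu = np.argmin(((nodes - x) ** 2).sum(axis=1))        # best-matching unit
        h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))    # neighbourhood weights
        nodes += lr * h[:, None] * (x - nodes)
    labels = np.argmin(((waveforms[:, None, :] - nodes[None]) ** 2).sum(-1), axis=1)
    return nodes, labels

# Placeholder traces standing in for EMD de-noised, windowed seismic waveforms.
traces = np.random.default_rng(1).normal(size=(500, 64))
_, facies = train_1d_som(traces)
print(np.bincount(facies))
```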
Inferring causal molecular networks: empirical assessment through a community-based effort
Hill, Steven M.; Heiser, Laura M.; Cokelaer, Thomas; Unger, Michael; Nesser, Nicole K.; Carlin, Daniel E.; Zhang, Yang; Sokolov, Artem; Paull, Evan O.; Wong, Chris K.; Graim, Kiley; Bivol, Adrian; Wang, Haizhou; Zhu, Fan; Afsari, Bahman; Danilova, Ludmila V.; Favorov, Alexander V.; Lee, Wai Shing; Taylor, Dane; Hu, Chenyue W.; Long, Byron L.; Noren, David P.; Bisberg, Alexander J.; Mills, Gordon B.; Gray, Joe W.; Kellen, Michael; Norman, Thea; Friend, Stephen; Qutub, Amina A.; Fertig, Elana J.; Guan, Yuanfang; Song, Mingzhou; Stuart, Joshua M.; Spellman, Paul T.; Koeppl, Heinz; Stolovitzky, Gustavo; Saez-Rodriguez, Julio; Mukherjee, Sach
2016-01-01
Inferring molecular networks is a central challenge in computational biology. However, it has remained unclear whether causal, rather than merely correlational, relationships can be effectively inferred in complex biological settings. Here we describe the HPN-DREAM network inference challenge that focused on learning causal influences in signaling networks. We used phosphoprotein data from cancer cell lines as well as in silico data from a nonlinear dynamical model. Using the phosphoprotein data, we scored more than 2,000 networks submitted by challenge participants. The networks spanned 32 biological contexts and were scored in terms of causal validity with respect to unseen interventional data. A number of approaches were effective and incorporating known biology was generally advantageous. Additional sub-challenges considered time-course prediction and visualization. Our results constitute the most comprehensive assessment of causal network inference in a mammalian setting carried out to date and suggest that learning causal relationships may be feasible in complex settings such as disease states. Furthermore, our scoring approach provides a practical way to empirically assess the causal validity of inferred molecular networks. PMID:26901648
NASA Technical Reports Server (NTRS)
Lee, Katharine K.; Kerns, Karol; Bone, Randall
2001-01-01
The measurement of operational acceptability is important for the development, implementation, and evolution of air traffic management decision support tools. The Controller Acceptance Rating Scale (CARS) was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool. CARS was modeled after a well-known pilot evaluation rating instrument, the Cooper-Harper Scale, and has since been used in the evaluation of the User Request Evaluation Tool, developed by MITRE's Center for Advanced Aviation System Development. In this paper, we provide a discussion of the development of CARS and an analysis of the empirical data collected with CARS to examine construct validity. Results of intraclass correlations indicated statistically significant reliability for the CARS. From the subjective workload data that were collected in conjunction with the CARS, it appears that the expected set of workload attributes was correlated with the CARS. As expected, the analysis also showed that CARS was a sensitive indicator of the impact of decision support tools on controller operations. Suggestions for future CARS development and its improvement are also provided.
NASA Astrophysics Data System (ADS)
Ribeiro, Haroldo V.
2015-03-01
Since the seminal work of Wilson and Kelling [1] in 1982, the "broken windows theory" seems to have been widely accepted among criminologists, and empirical findings indeed point out that criminals tend to return to previously visited locations. Crime has always been part of the urban society's agenda and has attracted the attention of scholars from the social sciences ever since. Furthermore, over the past six decades the world has experienced a rapid and remarkable urbanization process: by the eighties the urban population was about 40% of the total population, and today more than half (54%) of the world population is urban [2]. Urbanization has brought us many benefits such as better working opportunities and health care, but has also created several problems such as pollution and a considerable rise in criminal activity. In this context of urban problems, crime deserves special attention because there is a pressing need for empirical and mathematical (modeling) investigations which, apart from their natural academic interest, may have direct implications for the organization of our society by improving policy decisions and resource allocation.
Reniers, G L L; Audenaert, A; Pauwels, N; Soudan, K
2011-02-15
This article empirically assesses and validates a methodology to make evacuation decisions in case of major fire accidents in chemical clusters. In this paper, a number of empirical results are presented, processed and discussed with respect to the implications and management of evacuation decisions in chemical companies. It has been shown in this article that, in realistic industrial settings, suboptimal interventions may result if the prospect of obtaining additional information at later stages of the decision process is ignored. Empirical results also show that the implications of interventions, as well as the required time and workforce to complete particular shutdown activities, may be very different from one company to another. Therefore, to be optimal from an economic viewpoint, it is essential that precautionary evacuation decisions are tailor-made per company. Copyright © 2010 Elsevier B.V. All rights reserved.
Testing a new Free Core Nutation empirical model
NASA Astrophysics Data System (ADS)
Belda, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald
2016-03-01
The Free Core Nutation (FCN) is a free mode of the Earth's rotation caused by the different material characteristics of the Earth's core and mantle. This causes the rotational axes of those layers to slightly diverge from each other, resulting in a wobble of the Earth's rotation axis comparable to nutations. In this paper we focus on estimating empirical FCN models using the observed nutations derived from the VLBI sessions between 1993 and 2013. Assuming a fixed value for the oscillation period, the time-variable amplitudes and phases are estimated by means of multiple sliding window analyses. The effects of using different a priori Earth Rotation Parameters (ERP) in the derivation of models are also addressed. The optimal choice of the fundamental parameters of the model, namely the window width and step-size of its shift, is searched by performing a thorough experimental analysis using real data. The former analyses lead to the derivation of a model with a temporal resolution higher than the one used in the models currently available, with a sliding window reduced to 400 days and a day-by-day shift. It is shown that this new model increases the accuracy of the modeling of the observed Earth's rotation. Besides, empirical models determined from USNO Finals as a priori ERP present a slightly lower Weighted Root Mean Square (WRMS) of residuals than IERS 08 C04 along the whole period of VLBI observations, according to our computations. The model is also validated through comparisons with other recognized models. The level of agreement among them is satisfactory. Let us remark that our estimates give rise to the lowest residuals and seem to reproduce the FCN signal in more detail.
Psychiatric diagnosis - is it universal or relative to culture?
Canino, Glorisa; Alegría, Margarita
2008-03-01
There is little consensus on the extent to which psychiatric disorders or syndromes are universal or the extent to which they differ in their core definitions and constellations of symptoms as a result of cultural or contextual factors. This controversy continues due to the lack of biological markers, imprecise measurement and the lack of a gold standard for validating most psychiatric conditions. Empirical studies were used to present evidence in favor of or against a universalist or relativistic view of child psychiatric disorders using a model developed by Robins and Guze to determine the validity of psychiatric disorders. The prevalence of some of the most common specific disorders and syndromes, as well as their risk and protective factors, varies across cultures, yet comorbid patterns and response to treatments vary little across cultures. Cross-cultural longitudinal data on outcomes are equivocal. The cross-cultural validity of child disorders may vary drastically depending on the disorder, but empirical evidence that attests to the cross-cultural validity of diagnostic criteria for each child disorder is lacking. There is a need for studies that investigate the extent to which gene-environment interactions are related to specific disorders across cultures. Clinicians are urged to consider culture and context in determining the way in which children's psychopathology may be manifested, independent of their own views. Recommendations for the upcoming classificatory system are provided so that practical or theoretical considerations are addressed about how culture and ethnic issues affect the assessment or treatment of specific disorders in children.
Human growth and body weight dynamics: an integrative systems model.
Rahmandad, Hazhir
2014-01-01
Quantifying human weight and height dynamics due to growth, aging, and energy balance can inform clinical practice and policy analysis. This paper presents the first mechanism-based model spanning a full individual life and capturing changes in body weight, composition and height. Integrating previous empirical and modeling findings and validated against several additional empirical studies, the model replicates key trends in human growth including A) changes in energy requirements from birth to old age; B) short- and long-term dynamics of body weight and composition; and C) stunted growth with chronic malnutrition and potential for catch-up growth. From obesity policy analysis to treating malnutrition and tracking growth trajectories, the model can address diverse policy questions. For example, I find that even without a further rise in obesity, the gap between healthy and actual Body Mass Indexes (BMIs) has embedded, for different population groups, a surplus of 14%-24% in energy intake, which will be a source of significant inertia in obesity trends. In another analysis, the energy deficit percentage needed to reduce BMI by one unit is found to be relatively constant across ages. The accompanying documented and freely available simulation model facilitates diverse applications customized to different sub-populations.
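A much-reduced illustration of the energy-balance idea behind such models follows; it is a generic adult sketch with assumed parameter values (metabolic-rate slope, activity multiplier, 7700 kcal per kg of body-mass change), not Rahmandad's life-course model.

```python
import numpy as np

def simulate_weight(days, intake_kcal, weight0_kg=70.0, activity=1.5,
                    rmr_slope=22.0, energy_density=7700.0):
    """Generic adult energy-balance sketch (illustrative parameter values).

    Assumes resting metabolic rate ~ rmr_slope * weight, total expenditure =
    activity * RMR, and ~7700 kcal per kg of body-mass change.
    """
    w = np.empty(days)
    w[0] = weight0_kg
    for t in range(1, days):
        expenditure = activity * rmr_slope * w[t - 1]
        w[t] = w[t - 1] + (intake_kcal - expenditure) / energy_density
    return w

# A sustained modest surplus drifts weight slowly toward a new equilibrium.
trajectory = simulate_weight(days=365, intake_kcal=2500.0)
print(round(trajectory[-1], 1), "kg after one year")
```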
Nevers, M.B.; Whitman, R.L.
2008-01-01
To understand the fate and movement of Escherichia coli in beach water, numerous modeling studies have been undertaken, including mechanistic predictions of currents and plumes and empirical modeling based on hydrometeorological variables. Most approaches are limited in scope by nearshore currents or physical obstacles and data limitations; few examine the issue from a larger spatial scale. Given the similarities between variables typically included in these models, we attempted to take a broader view of E. coli fluctuations by simultaneously examining twelve beaches along 35 km of Indiana's Lake Michigan coastline that includes five point-source outfalls. The beaches had similar E. coli fluctuations, and a best-fit empirical model included two variables: wave height and an interactive term comprised of wind direction and creek turbidity. Individual beach R2 values were 0.32-0.50. Data training-set results were comparable to validation results (R2 = 0.48). The amount of variation explained by the model was similar to previous reports for individual beaches. By extending the modeling approach to include more coastline distance, broader-scale spatial and temporal changes in bacteria concentrations and the influencing factors can be characterized. © 2008 American Chemical Society.
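The two-variable empirical model described above can be illustrated with an ordinary least-squares fit on synthetic data; the predictor values and coefficients below are placeholders, not the Indiana dataset or the published fit.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative form of the model: log10(E. coli) ~ wave height + (wind x turbidity).
rng = np.random.default_rng(0)
n = 300
wave_height = rng.gamma(2.0, 0.3, n)                 # m
wind_component = rng.normal(0.0, 1.0, n)             # signed wind-direction term
turbidity = rng.gamma(3.0, 10.0, n)                  # creek turbidity, NTU
interaction = wind_component * turbidity

X = np.column_stack([wave_height, interaction])
log_ecoli = 1.0 + 0.8 * wave_height + 0.01 * interaction + rng.normal(0, 0.4, n)

model = LinearRegression().fit(X, log_ecoli)
print("R^2 on training data:", round(model.score(X, log_ecoli), 2))
print("coefficients (wave height, wind x turbidity):", model.coef_)
```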
NASA Astrophysics Data System (ADS)
Gok, R.; Kalafat, D.; Hutchings, L.
2003-12-01
We analyze over 3,500 aftershocks recorded by several seismic networks during the 1999 Marmara, Turkey earthquakes. The analysis provides source parameters of the aftershocks, a three-dimensional velocity structure from tomographic inversion, an input three-dimensional velocity model for a finite difference wave propagation code (E3D, Larsen 1998), and records available for use as empirical Green's functions. Ultimately our goal is to model the 1999 earthquakes from DC to 25 Hz and study fault rupture mechanics and kinematic rupture models. We performed the simultaneous inversion for hypocenter locations and three-dimensional P- and S-wave velocity structure of the Marmara region using SIMULPS14, along with 2,500 events with more than eight P-readings and an azimuthal gap of less than 180°. The resolution of the calculated velocity structure is better in the eastern Marmara region than in the western Marmara region due to the denser ray coverage. We used the obtained velocity structure as input into the finite difference algorithm and validated the model by using M < 4 earthquakes as point sources and matching long-period waveforms (f < 0.5 Hz). We also obtained Mo, fc and individual station kappa values for over 500 events by performing a simultaneous inversion to fit these parameters with a Brune source model. We used the results of the source inversion to deconvolve a Brune model from small to moderate size earthquakes (M < 4.0) to obtain empirical Green's functions (EGF) for the higher frequency range of ground motion synthesis (0.5 < f < 25 Hz). We additionally obtained the source scaling relation (energy-moment) of these aftershocks. We have generated several scenarios constrained by a priori knowledge of the Izmit and Duzce rupture parameters to validate our prediction capability.
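For illustration, the spectral fitting step reduces, for a single record, to fitting the standard Brune omega-square spectrum with a site attenuation term kappa; the multi-station simultaneous inversion used in the study is simplified here to a one-spectrum curve fit on synthetic data.

```python
import numpy as np
from scipy.optimize import curve_fit

def brune_spectrum(f, omega0, fc, kappa):
    """Brune omega-square displacement spectrum with site attenuation kappa."""
    return omega0 * np.exp(-np.pi * kappa * f) / (1.0 + (f / fc) ** 2)

# Synthetic spectrum standing in for an observed aftershock record.
rng = np.random.default_rng(0)
f = np.linspace(0.5, 25.0, 200)
observed = brune_spectrum(f, omega0=1e-4, fc=3.0, kappa=0.04) * rng.lognormal(0, 0.1, f.size)

# Least-squares fit of the long-period level, corner frequency and kappa.
popt, _ = curve_fit(brune_spectrum, f, observed, p0=[1e-4, 2.0, 0.03])
print("fitted omega0, fc, kappa:", popt)
```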
NASA Astrophysics Data System (ADS)
Cheng, Z.; Yu, X.; Hsu, T. J.; Calantoni, J.; Chauchat, J.
2016-02-01
Regional scale coastal evolution models do not explicitly resolve wave-driven sediment transport and must rely on bedload/suspended load modules that utilize empirical assumptions. Under extreme wave events or in regions of high sediment heterogeneity, these empirical bedload/suspended load modules may need to be reevaluated with detailed observation and more sophisticated small-scale models. In the past decade, significant research efforts have been devoted to modeling sediment transport using multiphase Eulerian or Euler-Lagrangian approaches. Recently, an open-source multi-dimensional Reynolds-averaged two-phase sediment transport model, SedFOAM, was developed by the authors, and it has been adopted by many researchers to study momentary bed failure, granular rheology in sheet flow and scour around structures. In this abstract, we report our recent progress in extending the model with 3D turbulence-resolving capability and in modeling the sediment phase with the Discrete Element Method (DEM). Adopting the large-eddy simulation methodology, we validate the 3D model with measured fine sediment transport in oscillatory sheet flow and demonstrate that the model is able to resolve sediment burst events during flow reversals. To better resolve the intergranular interactions and to model heterogeneous properties of sediment (e.g., mixed grain sizes and grain shape), we use an Euler-Lagrangian solver called CFDEM, which couples OpenFOAM for the fluid phase and LIGGGHTS for the particle phase. We improve the model by better enforcing conservation of mass in the pressure solver. The modified CFDEM solver is validated with measured oscillatory sheet flow data for coarse sand, and we demonstrate that the model can reproduce the well-known armoring effect. We show that under Stokes second-order wave forcing, the armoring effect is more significant during the energetic positive peak, and hence the net onshore transport is reduced. Preliminary results modeling shape effects using composite particles will be presented. This research is supported by the Office of Naval Research and the National Science Foundation.
Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian
2014-01-14
The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with increasing number of snapshots were performed for two conformers of an eight amino acid sequence. In conclusion, the empirical approaches neither provide the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for the NMR chemical shift evaluation such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.
A global empirical system for probabilistic seasonal climate prediction
NASA Astrophysics Data System (ADS)
Eden, J. M.; van Oldenborgh, G. J.; Hawkins, E.; Suckling, E. B.
2015-12-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
An empirical system for probabilistic seasonal climate prediction
NASA Astrophysics Data System (ADS)
Eden, Jonathan; van Oldenborgh, Geert Jan; Hawkins, Ed; Suckling, Emma
2016-04-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
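A minimal sketch of the regression-based probabilistic forecasting idea in the two entries above: CO2-equivalent concentration as the primary predictor plus one large-scale mode, with forecast uncertainty taken from the residual spread. All data and predictor choices below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
years = np.arange(1961, 2014)
co2eq = 330.0 + 1.8 * (years - 1961) + rng.normal(0, 0.5, years.size)   # primary predictor
enso = rng.normal(0, 1, years.size)                                     # large-scale mode
temp = 0.02 * (co2eq - co2eq.mean()) + 0.3 * enso + rng.normal(0, 0.2, years.size)

# Multiple linear regression of the seasonal predictand on the predictors.
X = np.column_stack([np.ones_like(co2eq), co2eq, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid_sd = np.std(temp - X @ beta, ddof=X.shape[1])

# Probabilistic forecast for a new season: Gaussian centred on the regression mean.
x_new = np.array([1.0, co2eq[-1] + 1.8, 0.5])
mean = x_new @ beta
print("P(seasonal anomaly above climatological mean) =",
      round(1 - norm.cdf(temp.mean(), loc=mean, scale=resid_sd), 2))
```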
Numerical and Qualitative Contrasts of Two Statistical Models ...
Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently-
Strand, Pia; Sjöborg, Karolina; Stalmeijer, Renée; Wichmann-Hansen, Gitte; Jakobsson, Ulf; Edgren, Gudrun
2013-12-01
There is a paucity of instruments designed to evaluate the multiple dimensions of the workplace as an educational environment for undergraduate medical students. The aim was to develop and psychometrically evaluate an instrument to measure how undergraduate medical students perceive the clinical workplace environment, based on workplace learning theories and empirical findings. Development of the instrument relied on established standards including theoretical and empirical grounding, systematic item development and expert review at various stages to ensure content validity. Qualitative and quantitative methods were employed using a series of steps from conceptualization through psychometric analysis of scores in a Swedish medical student population. The final result was a 25-item instrument with two overarching dimensions, experiential learning and social participation, and four subscales that coincided well with theory and empirical findings: Opportunities to learn in and through work & quality of supervision; Preparedness for student entry; Workplace interaction patterns & student inclusion; and Equal treatment. Evidence from various sources supported content validity, construct validity and reliability of the instrument. The Undergraduate Clinical Education Environment Measure represents a valid, reliable and feasible multidimensional instrument for evaluation of the clinical workplace as a learning environment for undergraduate medical students. Further validation in different populations using various psychometric methods is needed.
Factors influencing physicians' knowledge sharing on web medical forums.
Lin, Tung Cheng; Lai, Ming Cheng; Yang, Shu Wen
2016-09-01
Web medical forums are relatively unique as knowledge-sharing platforms because physicians participate exclusively as knowledge contributors and not as knowledge recipients. Using the perspective of social exchange theory and considering both extrinsic and intrinsic motivations, this study aims to elicit the factors that significantly influence the willingness of physicians to share professional knowledge on web medical forums and develops a research model to explore the motivations that underlie physicians' knowledge-sharing attitudes. This model hypothesizes that constructs, including shared vision, reputation, altruism, and self-efficacy, positively influence these attitudes and, by extension, positively impact knowledge-sharing intention. A conventional sampling method and the direct recruitment of physicians at their outpatient clinic gathered valid data from a total of 164 physicians for analysis in the model. The empirical results support the validity of the proposed model and identified shared vision as the most significant factor of influence on knowledge-sharing attitudes, followed in descending order by knowledge-sharing self-efficacy, reputation, and altruism. © The Author(s) 2015.
Lozano, Oscar M; Rojas, Antonio J; Pérez, Cristino; González-Sáiz, Francisco; Ballesta, Rosario; Izaskun, Bilbao
2008-05-01
The aim of this work is to show evidence of the validity of the Health-Related Quality of Life for Drug Abusers Test (HRQoLDA Test). This test was developed to measure specific HRQoL for drug abusers within the theoretical addiction framework of the biaxial model. The sample comprised 138 patients diagnosed with opiate drug dependence. In this study, the following constructs and variables of the biaxial model were measured: severity of dependence, physical health status, psychological adjustment and substance consumption. Results indicate that the HRQoLDA Test scores are related to dependency and consumption-related problems. Multiple regression analysis reveals that HRQoL can be predicted from drug dependence, physical health status and psychological adjustment. These results contribute empirical evidence of the theoretical relationships established between HRQoL and the biaxial model, and they support the interpretation of the HRQoLDA Test to measure HRQoL in drug abusers, thus providing a test to measure this specific construct in this population.
Validity of Gō models: comparison with a solvent-shielded empirical energy decomposition.
Paci, Emanuele; Vendruscolo, Michele; Karplus, Martin
2002-12-01
Do Gō-type model potentials provide a valid approach for studying protein folding? They have been widely used for this purpose because of their simplicity and the speed of simulations based on their use. The essential assumption in such models is that only contact interactions existing in the native state determine the energy surface of a polypeptide chain, even for non-native configurations sampled along folding trajectories. Here we use an all-atom molecular mechanics energy function to investigate the adequacy of Gō-type potentials. We show that, although the contact approximation is accurate, non-native contributions to the energy can be significant. The assumed relation between residue-residue interaction energies and the number of contacts between them is found to be only approximate. By contrast, individual residue energies correlate very well with the number of contacts. The results demonstrate that models based on the latter should give meaningful results (e.g., as used to interpret phi values), whereas those that depend on the former are only qualitative, at best.
Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI
ERIC Educational Resources Information Center
Forer, Barry; Zumbo, Bruno D.
2011-01-01
The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…
Ethical Implications of Validity-vs.-Reliability Trade-Offs in Educational Research
ERIC Educational Resources Information Center
Fendler, Lynn
2016-01-01
In educational research that calls itself empirical, the relationship between validity and reliability is that of trade-off: the stronger the bases for validity, the weaker the bases for reliability (and vice versa). Validity and reliability are widely regarded as basic criteria for evaluating research; however, there are ethical implications of…
Empirical confirmation of creative destruction from world trade data.
Klimek, Peter; Hausmann, Ricardo; Thurner, Stefan
2012-01-01
We show that world trade network datasets contain empirical evidence that the dynamics of innovation in the world economy indeed follows the concept of creative destruction, as proposed by J.A. Schumpeter more than half a century ago. National economies can be viewed as complex, evolving systems, driven by a stream of appearance and disappearance of goods and services. Products appear in bursts of creative cascades. We find that products systematically tend to co-appear, and that product appearances lead to massive disappearance events of existing products in the following years. The opposite, disappearances followed by periods of appearances, is not observed. This is an empirical validation of the dominance of cascading competitive replacement events on the scale of national economies, i.e., creative destruction. We find a tendency that more complex products drive out less complex ones, i.e., progress has a direction. Finally we show that the growth trajectory of a country's product output diversity can be understood by a recently proposed evolutionary model of Schumpeterian economic dynamics.
Modelling vertical error in LiDAR-derived digital elevation models
NASA Astrophysics Data System (ADS)
Aguilar, Fernando J.; Mills, Jon P.; Delgado, Jorge; Aguilar, Manuel A.; Negreiros, J. G.; Pérez, José L.
2010-01-01
A hybrid theoretical-empirical model has been developed for modelling the error in LiDAR-derived digital elevation models (DEMs) of non-open terrain. The theoretical component seeks to model the propagation of the sample data error (SDE), i.e. the error from light detection and ranging (LiDAR) data capture of ground sampled points in open terrain, towards interpolated points. The interpolation methods used for infilling gaps may produce a non-negligible error that is referred to as gridding error. In this case, interpolation is performed using an inverse distance weighting (IDW) method with the local support of the five closest neighbours, although it would be possible to utilize other interpolation methods. The empirical component refers to what is known as "information loss". This is the error purely due to modelling the continuous terrain surface from only a discrete number of points plus the error arising from the interpolation process. The SDE must be previously calculated from a suitable number of check points located in open terrain and assumes that the LiDAR point density was sufficiently high to neglect the gridding error. For model calibration, data for 29 study sites, 200×200 m in size, belonging to different areas around Almería province, south-east Spain, were acquired by means of stereo photogrammetric methods. The developed methodology was validated against two different LiDAR datasets. The first dataset used was an Ordnance Survey (OS) LiDAR survey carried out over a region of Bristol in the UK. The second dataset was an area located at the Gador mountain range, south of Almería province, Spain. Both terrain slope and sampling density were incorporated in the empirical component through the calibration phase, resulting in a very good agreement between predicted and observed data (R2 = 0.9856; p < 0.001). In validation, Bristol observed vertical errors, corresponding to different LiDAR point densities, offered a reasonably good fit to the predicted errors. Even better results were achieved in the more rugged morphology of the Gador mountain range dataset. The findings presented in this article could be used as a guide for the selection of appropriate operational parameters (essentially point density, in order to optimize survey cost) in projects related to LiDAR survey in non-open terrain, for instance those projects dealing with forestry applications.
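The gap-infilling step described above (IDW with the local support of the five closest neighbours) can be sketched as follows; the brute-force neighbour search and the synthetic points are for illustration only.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, k=5, power=2.0):
    """Inverse-distance-weighted interpolation with the k closest neighbours.

    Brute-force distance search for clarity; a KD-tree would be used at scale.
    """
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.hypot(*(xy_known - q).T)
        nearest = np.argsort(d)[:k]
        dn = d[nearest]
        if dn[0] == 0.0:                      # query coincides with a sample point
            out[i] = z_known[nearest[0]]
            continue
        w = 1.0 / dn ** power
        out[i] = np.sum(w * z_known[nearest]) / np.sum(w)
    return out

rng = np.random.default_rng(0)
pts = rng.uniform(0, 200, size=(1000, 2))                 # ground-sampled points (m)
elev = 50 + 0.1 * pts[:, 0] + rng.normal(0, 0.15, 1000)   # elevations with SDE-like noise
grid = np.array([[100.0, 100.0], [10.0, 190.0]])
print(idw_interpolate(pts, elev, grid))
```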
Student mathematical imagination instruments: construction, cultural adaptation and validity
NASA Astrophysics Data System (ADS)
Dwijayanti, I.; Budayasa, I. K.; Siswono, T. Y. E.
2018-03-01
Imagination has an important role as the center of the sensorimotor activity of students. The purpose of this research is to construct an instrument of students' mathematical imagination in understanding the concept of algebraic expressions. The researchers assessed validity using questionnaire and test techniques and analysed the data descriptively. The stages performed include: 1) constructing the embodiment of imagination; 2) determining the learning style questionnaire; 3) constructing the instruments; 4) translating into Indonesian and adapting the learning style questionnaire content to student culture; 5) performing content validation. The results state that the constructed instrument is valid by content validation and empirical validation, so that it can be used with revisions. Content validation involved Indonesian linguists, English linguists and mathematics material experts. Empirical validation was done through a legibility test (10 students), which showed that in general the language used can be understood. In addition, a questionnaire test (86 students) was analyzed using the point-biserial correlation technique, resulting in 16 valid items, with a KR-20 reliability test giving medium reliability. The test instrument trial (32 students) found all items to be valid, with a KR-21 reliability of 0.62.
The Role of Empirical Evidence for Transferring a New Technology to Industry
NASA Astrophysics Data System (ADS)
Baldassarre, Maria Teresa; Bruno, Giovanni; Caivano, Danilo; Visaggio, Giuseppe
Technology transfer and innovation diffusion are key success factors for an enterprise. The shift to a new software technology involves, on one hand, inevitable changes to ingrained and familiar processes and, on the other, requires training, changes in practices and commitment on behalf of technical staff and management. Nevertheless, industry is often reluctant to innovate because of the changes innovation entails. The process of innovation diffusion is easier if the new technology is supported by empirical evidence. In this sense, our conjecture is that Empirical Software Engineering (ESE) serves as a means for validating and transferring a new technology into production processes. In this paper, the authors report their experience with a method, the Multiview Framework, defined in the SERLAB research laboratory to support designing and managing a goal-oriented measurement program, which was validated through various empirical studies before being transferred to an Italian SME. Our discussion points out the important role of empirical evidence in obtaining management commitment and buy-in on behalf of technical staff, and in making technology transfer possible.
Redefinition and global estimation of basal ecosystem respiration rate
NASA Astrophysics Data System (ADS)
Yuan, Wenping; Luo, Yiqi; Li, Xianglan; Liu, Shuguang; Yu, Guirui; Zhou, Tao; Bahn, Michael; Black, Andy; Desai, Ankur R.; Cescatti, Alessandro; Marcolla, Barbara; Jacobs, Cor; Chen, Jiquan; Aurela, Mika; Bernhofer, Christian; Gielen, Bert; Bohrer, Gil; Cook, David R.; Dragoni, Danilo; Dunn, Allison L.; Gianelle, Damiano; Grünwald, Thomas; Ibrom, Andreas; Leclerc, Monique Y.; Lindroth, Anders; Liu, Heping; Marchesini, Luca Belelli; Montagnani, Leonardo; Pita, Gabriel; Rodeghiero, Mirco; Rodrigues, Abel; Starr, Gregory; Stoy, Paul C.
2011-12-01
Basal ecosystem respiration rate (BR), the ecosystem respiration rate at a given temperature, is a common and important parameter in empirical models for quantifying ecosystem respiration (ER) globally. Numerous studies have indicated that BR varies in space. However, many empirical ER models still use a global constant BR largely due to the lack of a functional description for BR. In this study, we redefined BR to be ecosystem respiration rate at the mean annual temperature. To test the validity of this concept, we conducted a synthesis analysis using 276 site-years of eddy covariance data, from 79 research sites located at latitudes ranging from ~3°S to ~70°N. Results showed that mean annual ER rate closely matches ER rate at mean annual temperature. Incorporation of site-specific BR into global ER model substantially improved simulated ER compared to an invariant BR at all sites. These results confirm that ER at the mean annual temperature can be considered as BR in empirical models. A strong correlation was found between the mean annual ER and mean annual gross primary production (GPP). Consequently, GPP, which is typically more accurately modeled, can be used to estimate BR. A light use efficiency GPP model (i.e., EC-LUE) was applied to estimate global GPP, BR and ER with input data from MERRA (Modern Era Retrospective-Analysis for Research and Applications) and MODIS (Moderate resolution Imaging Spectroradiometer). The global ER was 103 Pg C yr-1, with the highest respiration rate over tropical forests and the lowest value in dry and high-latitude areas.
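To make the redefinition concrete, the sketch below anchors BR at the mean annual temperature inside a Q10-type temperature response; the Q10 form is one common empirical choice and is not mandated by the synthesis above.

```python
import numpy as np

def ecosystem_respiration(temp_c, br, mat_c, q10=2.0):
    """Empirical ER model with BR anchored at the mean annual temperature (MAT).

    The Q10 exponential form is one common choice for the temperature response;
    parameter values here are illustrative.
    """
    return br * q10 ** ((temp_c - mat_c) / 10.0)

# Evaluate ER over a synthetic seasonal temperature cycle around MAT = 10 C;
# by construction, ER equals BR exactly when temperature equals MAT.
mat = 10.0
daily_temp = mat + 12.0 * np.sin(np.linspace(0, 2 * np.pi, 365))
er = ecosystem_respiration(daily_temp, br=2.5, mat_c=mat)   # e.g. g C m-2 d-1
print("ER at MAT:", ecosystem_respiration(mat, 2.5, mat),
      "| annual mean ER:", round(er.mean(), 2))
```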
Modeling of Kerena Emergency Condenser
NASA Astrophysics Data System (ADS)
Bryk, Rafał; Schmidt, Holger; Mull, Thomas; Wagner, Thomas; Ganzmann, Ingo; Herbst, Oliver
2017-12-01
KERENA is an innovative boiling water reactor concept equipped with several passive safety systems. For the experimental verification of the performance of these systems and for code validation, the Integral Test Stand Karlstein (INKA) was built in Karlstein, Germany. The emergency condenser (EC) system transfers heat from the reactor pressure vessel (RPV) to the core flooding pool in case of a water level decrease in the RPV. The EC is composed of a large number of slightly inclined tubes. During accident conditions, steam enters the tubes and condenses due to the contact of the tubes with cold water on the secondary side. The condensed water then flows back to the RPV due to gravity. In this paper two approaches for modeling condensation in slightly inclined tubes are compared and verified against experiments. The first approach is based on a flow regime map: depending on the regime, the heat transfer coefficient is calculated according to a specific semi-empirical correlation. The second approach uses a general, fully empirical correlation. The models are developed with the object-oriented Modelica language and the open-source OpenModelica environment. The results are compared with data obtained during a large scale integral test, simulating a loss of coolant accident, performed at the Integral Test Stand Karlstein (INKA). The comparison shows good agreement. Due to the modularity of the models, both of them may be used in the future in systems incorporating condensation in horizontal or slightly inclined tubes. Depending on their preferences, the modeller may choose the one-equation-based approach or the more sophisticated model composed of several exchangeable semi-empirical correlations.
Violent Crime in Post-Civil War Guatemala: Causes and Policy Implications
2015-03-01
on field research and case studies in Honduras, Bolivia, and Argentina. Bailey’s Security Trap theory is comprehensive in nature and derived from... research question. The second phase uses empirical data and comparative case studies to validate or challenge selected arguments that potentially... [Figure 2: Sample Research Methodology]
A quantitative test of population genetics using spatiogenetic patterns in bacterial colonies.
Korolev, Kirill S; Xavier, João B; Nelson, David R; Foster, Kevin R
2011-10-01
It is widely accepted that population-genetics theory is the cornerstone of evolutionary analyses. Empirical tests of the theory, however, are challenging because of the complex relationships between space, dispersal, and evolution. Critically, we lack quantitative validation of the spatial models of population genetics. Here we combine analytics, on- and off-lattice simulations, and experiments with bacteria to perform quantitative tests of the theory. We study two bacterial species, the gut microbe Escherichia coli and the opportunistic pathogen Pseudomonas aeruginosa, and show that spatiogenetic patterns in colony biofilms of both species are accurately described by an extension of the one-dimensional stepping-stone model. We use one empirical measure, genetic diversity at the colony periphery, to parameterize our models and show that we can then accurately predict another key variable: the degree of short-range cell migration along an edge. Moreover, the model allows us to estimate other key parameters, including the effective population size (density) at the expansion frontier. While our experimental system is a simplification of a natural microbial community, we argue that it constitutes proof of principle that the spatial models of population genetics can quantitatively capture organismal evolution.
Schaper, Louise K; Pervan, Graham P
2007-06-01
There is evidence to suggest that health professionals are reluctant to accept and utilise information and communication technologies (ICT), and concern is growing within health informatics research that this is contributing to the lag in adoption and utilisation of ICT across the health sector. Technology acceptance research within the field of information systems has been limited in its application to health. There is a concurrent need to develop and gain empirical support for models of technology acceptance within health, and to examine acceptance and utilisation issues amongst health professionals, in order to improve the success of information system implementation in this arena. This paper outlines a project that examines ICT acceptance and utilisation by Australian occupational therapists. It describes the theoretical basis behind the development of a research model and the methodology being employed to empirically validate the model using substantial quantitative, qualitative and longitudinal data. Preliminary results from Phase II of the study are presented. The theoretical significance of this work is that it uses a thoroughly constructed research model, with potentially the largest sample size ever tested, to extend technology acceptance research into the health sector.
Empirical Modeling of the Plasmasphere Dynamics Using Neural Networks
NASA Astrophysics Data System (ADS)
Zhelavskaya, I. S.; Shprits, Y.; Spasojevic, M.
2017-12-01
We present a new empirical model for reconstructing the global dynamics of the cold plasma density distribution based only on solar wind data and geomagnetic indices. Utilizing the density database obtained using the NURD (Neural-network-based Upper hybrid Resonance Determination) algorithm for the period of October 1, 2012 - July 1, 2016, in conjunction with solar wind data and geomagnetic indices, we develop a neural network model that is capable of globally reconstructing the dynamics of the cold plasma density distribution for 2 ≤ L ≤ 6 and all local times. We validate and test the model by measuring its performance on independent datasets withheld from the training set and by comparing the model predicted global evolution with global images of He+ distribution in the Earth's plasmasphere from the IMAGE Extreme UltraViolet (EUV) instrument. We identify the parameters that best quantify the plasmasphere dynamics by training and comparing multiple neural networks with different combinations of input parameters (geomagnetic indices, solar wind data, and different durations of their time history). We demonstrate results of both local and global plasma density reconstruction. This study illustrates how global dynamics can be reconstructed from local in-situ observations by using machine learning techniques.
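A minimal sketch of the kind of feedforward regression described above, using scikit-learn's MLPRegressor on synthetic stand-ins for solar-wind and geomagnetic inputs with a short time history; the actual NURD-derived density database, feature set, and network architecture are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: columns play the role of e.g. Kp, Dst, solar-wind speed
# over a few hours of time history, plus L-shell and magnetic local time.
X = rng.normal(size=(5000, 12))
y = 2.0 + 0.5 * X[:, 0] - 0.3 * X[:, 3] + rng.normal(scale=0.1, size=5000)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X[:4000], y[:4000])            # train
print(model.score(X[4000:], y[4000:]))   # validate on data withheld from training
```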
Exploring experiential value in online mobile gaming adoption.
Okazaki, Shintaro
2008-10-01
Despite the growing importance of the online mobile gaming industry, little research has been undertaken to explain why consumers engage in this ubiquitous entertainment. This study attempts to develop an instrument to measure experiential value in online mobile gaming adoption. The proposed scale consists of seven first-order factors of experiential value: intrinsic enjoyment, escapism, efficiency, economic value, visual appeal, perceived novelty, and perceived risklessness. The survey obtained 164 usable responses from Japanese college students. The empirical data fit our first-order model well, indicating a high level of reliability as well as convergent and discriminant validity. The single second-order model also shows an acceptable model fit.
Design of a Neurally Plausible Model of Fear Learning
Krasne, Franklin B.; Fanselow, Michael S.; Zelikowsky, Moriel
2011-01-01
A neurally oriented conceptual and computational model of fear conditioning manifested by freezing behavior (FRAT), which accounts for many aspects of delay and context conditioning, has been constructed. Conditioning and extinction are the result of neuromodulation-controlled LTP at synapses of thalamic, cortical, and hippocampal afferents on principal cells and inhibitory interneurons of lateral and basal amygdala. The phenomena accounted for by the model (and simulated by the computational version) include conditioning, secondary reinforcement, blocking, the immediate shock deficit, extinction, renewal, and a range of empirically valid effects of pre- and post-training ablation or inactivation of hippocampus or amygdala nuclei. PMID:21845175
Invariance in the recurrence of large returns and the validation of models of price dynamics
NASA Astrophysics Data System (ADS)
Chang, Lo-Bin; Geman, Stuart; Hsieh, Fushing; Hwang, Chii-Ruey
2013-08-01
Starting from a robust, nonparametric definition of large returns (“excursions”), we study the statistics of their occurrences, focusing on the recurrence process. The empirical waiting-time distribution between excursions is remarkably invariant to year, stock, and scale (return interval). This invariance is related to self-similarity of the marginal distributions of returns, but the excursion waiting-time distribution is a function of the entire return process and not just its univariate probabilities. Generalized autoregressive conditional heteroskedasticity (GARCH) models, market-time transformations based on volume or trades, and generalized (Lévy) random-walk models all fail to fit the statistical structure of excursions.
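A minimal sketch of the empirical recurrence statistics described above: flag "excursions" (returns beyond a robust threshold, here a quantile of absolute returns as an assumed stand-in for the paper's definition) and collect the waiting times between them.

```python
import numpy as np

def excursion_waiting_times(returns, quantile=0.95):
    """Waiting times (in steps) between large-return 'excursions'."""
    threshold = np.quantile(np.abs(returns), quantile)
    idx = np.flatnonzero(np.abs(returns) >= threshold)
    return np.diff(idx)

rng = np.random.default_rng(1)
r = rng.standard_t(df=3, size=10_000) * 0.01   # heavy-tailed toy returns
waits = excursion_waiting_times(r)
print(waits.mean(), np.quantile(waits, [0.5, 0.9]))
```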
Moultrie, Josefine K; Engel, Rolf R
2017-10-01
We identified empirical correlates for the 42 substantive scales of the German-language version of the Minnesota Multiphasic Personality Inventory (MMPI)-2-Restructured Form (MMPI-2-RF): Higher Order, Restructured Clinical, Specific Problem, Interest, and revised Personality Psychopathology Five scales. We collected external validity data by means of a 177-item chart review form in a sample of 488 psychiatric inpatients of a German university hospital. We structured our findings along the interpretational guidelines for the MMPI-2-RF and compared them with the validity data published in the tables of the MMPI-2-RF Technical Manual. Our results show significant correlations between MMPI-2-RF scales and conceptually relevant criteria. Most of the results were in line with U.S. validation studies. Some of the differences could be attributed to sample composition. For most of the scales, construct validity coefficients were acceptable. Taken together, this study adds to the growing body of research on empirical correlates of the MMPI-2-RF scales in a new sample. The study suggests that the interpretations given in the MMPI-2-RF manual may be generalizable to the German-language MMPI-2-RF. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling
Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...
2017-04-01
Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. Communication of emotions by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by the music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlation design. The construction of the computational model is divided into four separate phases, each with a different focus for evaluation. These phases include the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models are able to explain the dominant part of listeners' self-reports of the emotions expressed by music, and they show potential to generalize over different genres within Western music. Possible applications of the computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
Life Prediction of Large Lithium-Ion Battery Packs with Active and Passive Balancing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Ying; Smith, Kandler A; Zane, Regan
Lithium-ion battery packs make up a major part of large-scale stationary energy storage systems. One challenge in reducing battery pack cost is to reduce pack size without compromising pack service performance and lifespan. A prognostic life model can be a powerful tool for estimating state of health (SOH) and enabling active life-balancing strategies to reduce cell imbalance and extend pack life. This work proposed a life model using both empirical and physics-based approaches. The life model described the compounding effect of different degradations on the entire cell with an empirical model. Its lower-level submodels then considered the complex physical links between testing statistics (state of charge level, C-rate level, duty cycles, etc.) and the degradation reaction rates with respect to specific aging mechanisms. The hybrid approach made the life model generic, robust and stable regardless of battery chemistry and application usage. The model was validated with a custom pack with both passive and active balancing systems implemented, which created four different aging paths in the pack. The life model successfully captured the aging trajectories of all four paths. The life model prediction errors on capacity fade and resistance growth were within +/-3% and +/-5% of the experimental measurements.
Pontes, Halley M.; Király, Orsolya; Demetrovics, Zsolt; Griffiths, Mark D.
2014-01-01
Background: Over the last decade, there has been growing concern about 'gaming addiction' and its widely documented detrimental impacts on a minority of individuals who play excessively. The latest (fifth) edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5) included nine criteria for the potential diagnosis of Internet Gaming Disorder (IGD) and noted that it was a condition that warranted further empirical study. Aim: The main aim of this study was to develop a valid and reliable standardised psychometrically robust tool, in addition to providing empirically supported cut-off points. Methods: A sample of 1003 gamers (85.2% males; mean age 26 years) from 57 different countries was recruited via online gaming forums. Validity was assessed by confirmatory factor analysis (CFA), criterion-related validity, and concurrent validity. Latent profile analysis was also carried out to distinguish disordered gamers from non-disordered gamers. Sensitivity and specificity analyses were performed to determine an empirical cut-off for the test. Results: The CFA confirmed the viability of the IGD-20 Test with a six-factor structure (salience, mood modification, tolerance, withdrawal, conflict and relapse) for the assessment of IGD according to the nine criteria from DSM-5. The IGD-20 Test proved to be valid and reliable. According to the latent profile analysis, 5.3% of the total participants were classed as disordered gamers. Additionally, an optimal empirical cut-off of 71 points (out of 100) seemed to be adequate according to the sensitivity and specificity analyses carried out. Conclusions: The present findings support the viability of the IGD-20 Test as an adequate standardised psychometrically robust tool for assessing internet gaming disorder. Consequently, the new instrument represents the first step towards unification and consensus in the field of gaming studies. PMID:25313515
Pontes, Halley M; Király, Orsolya; Demetrovics, Zsolt; Griffiths, Mark D
2014-01-01
Over the last decade, there has been growing concern about 'gaming addiction' and its widely documented detrimental impacts on a minority of individuals who play excessively. The latest (fifth) edition of the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-5) included nine criteria for the potential diagnosis of Internet Gaming Disorder (IGD) and noted that it was a condition that warranted further empirical study. The main aim of this study was to develop a valid and reliable standardised psychometrically robust tool, in addition to providing empirically supported cut-off points. A sample of 1003 gamers (85.2% males; mean age 26 years) from 57 different countries was recruited via online gaming forums. Validity was assessed by confirmatory factor analysis (CFA), criterion-related validity, and concurrent validity. Latent profile analysis was also carried out to distinguish disordered gamers from non-disordered gamers. Sensitivity and specificity analyses were performed to determine an empirical cut-off for the test. The CFA confirmed the viability of the IGD-20 Test with a six-factor structure (salience, mood modification, tolerance, withdrawal, conflict and relapse) for the assessment of IGD according to the nine criteria from DSM-5. The IGD-20 Test proved to be valid and reliable. According to the latent profile analysis, 5.3% of the total participants were classed as disordered gamers. Additionally, an optimal empirical cut-off of 71 points (out of 100) seemed to be adequate according to the sensitivity and specificity analyses carried out. The present findings support the viability of the IGD-20 Test as an adequate standardised psychometrically robust tool for assessing internet gaming disorder. Consequently, the new instrument represents the first step towards unification and consensus in the field of gaming studies.
NASA Technical Reports Server (NTRS)
Freilich, Michael H.; Dunbar, R. Scott
1993-01-01
Calculation of accurate vector winds from scatterometers requires knowledge of the relationship between backscatter cross-section and the geophysical variable of interest. As the detailed dynamics of wind generation of centimetric waves and radar-sea surface scattering at moderate incidence angles are not well known, empirical scatterometer model functions relating backscatter to winds must be developed. Less well appreciated is the fact that, given an accurate model function and some knowledge of the dominant scattering mechanisms, significant information on the amplitudes and directional distributions of centimetric roughness elements on the sea surface can be inferred. Accurate scatterometer model functions can thus be used to investigate wind generation of short waves under realistic conditions. The present investigation involves developing an empirical model function for the C-band (5.3 GHz) ERS-1 scatterometer and comparing Ku-band model functions with the C-band model to infer information on the two-dimensional spectrum of centimetric roughness elements in the ocean. The C-band model function development is based on collocations of global backscatter measurements with operational surface analyses produced by meteorological agencies. Strengths and limitations of the method are discussed, and the resulting model function is validated in part through comparison with the actual distributions of backscatter cross-section triplets. Details of the directional modulation as well as the wind speed sensitivity at C-band are investigated. Analysis of persistent outliers in the data is used to infer the magnitudes of non-wind effects (such as atmospheric stratification, swell, etc.). The ERS-1 C-band instrument and the Seasat Ku-band (14.6 GHz) scatterometer both imaged waves of approximately 3.4 cm wavelength, assuming that Bragg scattering is the dominant mechanism. Comparisons of the C-band and Ku-band model functions are used both to test the validity of the postulated Bragg mechanism and to investigate the directional distribution of the imaged waves under a variety of conditions where Bragg scatter is dominant.
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
Garner, Joseph P.
2014-01-01
The vast majority of drugs entering human trials fail. This problem (called “attrition”) is widely recognized as a public health crisis, and has been discussed openly for the last two decades. Multiple recent reviews argue that animals may be just too different physiologically, anatomically, and psychologically from humans to be able to predict human outcomes, essentially questioning the justification of basic biomedical research in animals. This review argues instead that the philosophy and practice of experimental design and analysis is so different in basic animal work and human clinical trials that an animal experiment (as currently conducted) cannot reasonably predict the outcome of a human trial. Thus, attrition does reflect a lack of predictive validity of animal experiments, but it would be a tragic mistake to conclude that animal models cannot show predictive validity. A variety of contributing factors to poor validity are reviewed. The need to adopt methods and models that are highly specific (i.e., which can identify true negative results) in order to complement the current preponderance of highly sensitive methods (which are prone to false positive results) is emphasized. Concepts in biomarker-based medicine are offered as a potential solution, and changes in the use of animal models required to embrace a translational biomarker-based approach are outlined. In essence, this review advocates a fundamental shift, where we treat every aspect of an animal experiment that we can as if it was a clinical trial in a human population. However, it is unrealistic to expect researchers to adopt a new methodology that cannot be empirically justified until a successful human trial. “Validation with known failures” is proposed as a solution. Thus new methods or models can be compared against existing ones using a drug that has translated (a known positive) and one that has failed (a known negative). Current methods should incorrectly identify both as effective, but a more specific method should identify the negative compound correctly. By using a library of known failures we can thereby empirically test the impact of suggested solutions such as enrichment, controlled heterogenization, biomarker-based models, or reverse-translated measures. PMID:25541546
Performance model for grid-connected photovoltaic inverters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyson, William Earl; Galbraith, Gary M.; King, David L.
2007-09-01
This document provides an empirically based performance model for grid-connected photovoltaic inverters, used for system performance (energy) modeling and for continuous monitoring of inverter performance during system operation. The versatility and accuracy of the model were validated for a variety of both residential and commercial size inverters. Default parameters for the model can be obtained from manufacturers' specification sheets, and the accuracy of the model can be further refined using either well-instrumented field measurements from operational systems or detailed measurements from a recognized testing laboratory. An initial database of inverter performance parameters was developed based on measurements conducted at Sandia National Laboratories and at laboratories supporting the solar programs of the California Energy Commission.
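As a hedged illustration only: a toy empirical AC-output model fitted to measured DC/AC power pairs (the hypothetical operating points below are placeholders, and the actual Sandia inverter model uses a more detailed, voltage-dependent parameterization documented in the report).

```python
import numpy as np

# Hypothetical measured operating points: DC input power vs AC output power, W.
p_dc = np.array([250., 500., 1000., 2000., 3000., 4000.])
p_ac = np.array([220., 465., 945., 1910., 2865., 3800.])

# Fit a quadratic AC = f(DC) curve as a simple empirical performance model.
coeffs = np.polyfit(p_dc, p_ac, deg=2)

def ac_power(pdc, pac_rated=3800.0):
    # Clip to physical limits: no negative output, no output above nameplate rating.
    return np.clip(np.polyval(coeffs, pdc), 0.0, pac_rated)

print(ac_power(np.array([800.0, 2500.0])))
```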
Empirical Validation of Reading Proficiency Guidelines
ERIC Educational Resources Information Center
Clifford, Ray; Cox, Troy L.
2013-01-01
The validation of ability scales describing multidimensional skills is always challenging, but not impossible. This study applies a multistage, criterion-referenced approach that uses a framework of aligned texts and reading tasks to explore the validity of the ACTFL and related reading proficiency guidelines. Rasch measurement and statistical…
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) by using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE of May to October (summer growing and fall seasons) between 2002 and 2012 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short-hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009 to 2012; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied for simulating continuous (e.g., hourly) NEE time-series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for robust gap-filling of missing data in observed time-series of periodic ecohydrological variables for wetland or other ecosystems.
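A minimal sketch of the scaling idea described above: normalize each day's hourly NEE by a reference-time observation to collapse days onto one dimensionless diurnal curve, then rescale a mean curve with a single reference value to predict other days. The actual ESHA harmonic parameterization is not reproduced; the diurnal shape and amplitudes below are synthetic.

```python
import numpy as np

def collapse_diurnal(nee, ref_hour=12):
    """nee: array (days, 24) of hourly NEE; returns dimensionless diurnal curves."""
    ref = nee[:, ref_hour][:, None]        # reference-time observation per day
    return nee / ref

def predict_day(mean_curve, ref_value):
    """Rescale the mean dimensionless curve with one reference observation."""
    return mean_curve * ref_value

rng = np.random.default_rng(2)
hours = np.arange(24)
base = -np.cos(2 * np.pi * (hours - 2) / 24)                     # toy diurnal shape
nee = base * rng.uniform(3, 8, size=(30, 1)) + rng.normal(0, 0.2, (30, 24))
curves = collapse_diurnal(nee)
print(predict_day(curves.mean(axis=0), ref_value=nee[5, 12])[:6])
```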
True Density Prediction of Garlic Slices Dehydrated by Convection.
López-Ortiz, Anabel; Rodríguez-Ramírez, Juan; Méndez-Lagunas, Lilia
2016-01-01
Physicochemical parameters with constant values are employed for the mass-heat transfer modeling of the air drying process. However, structural properties are not constant under drying conditions. Empirical, semi-theoretical, and theoretical models have been proposed to describe true density (ρp). These models only consider ideal behavior and assume a linear relationship between ρp and moisture content (X); nevertheless, some materials exhibit nonlinear behavior of ρp as a function of X, with a tendency toward being concave-down. This behavior, which can be observed in garlic and carrots, has been difficult to model mathematically. This work proposes a semi-theoretical model for predicting ρp values, taking into account the concave-down behavior that occurs at the end of the drying process. The model includes the dependency of the dry solid density (ρs) on external conditions (air drying temperature, Ta), the inside temperature of the garlic slices (Ti), and the moisture content (X), obtained from experimental data on the drying process. Calculations show that ρs is not a linear function of Ta, X, and Ti. An empirical correlation for ρs is proposed as a function of Ti and X. The adjustment equation for Ti is proposed as a function of Ta and X. The proposed model for ρp was validated using experimental data on the sliced garlic and was compared with theoretical and empirical models available in the scientific literature. Deviation between the experimental and predicted data was determined. An explanation of the nonlinear behavior of ρs and ρp as functions of X, taking into account second-order phase changes, is then presented. © 2015 Institute of Food Technologists®
Organosolv delignification of Eucalyptus globulus: Kinetic study of autocatalyzed ethanol pulping
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliet, M.; Rodriguez, F.; Santos, A.
2000-01-01
The autocatalyzed delignification of Eucalyptus globulus in 50% ethanol (w/w) was modeled as the irreversible and consecutive dissolution of initial, bulk, and residual lignin. Their respective contributions to total lignin were estimated as 9, 75, and 16%. Isothermal pulping experiments were carried out to select an empirical kinetic model from among eight proposals corresponding to different reaction schemes. The calculated activation energy was found to be 96.5, 98.5, and 40.8 kJ/mol for initial, bulk, and residual delignification, respectively. The influence of hydrogen ion concentration was expressed by a power-law function model. The kinetic model developed here was validated using data from nonisothermal pulping runs.
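A hedged sketch of a three-pool first-order scheme using the reported lignin fractions and activation energies; the pools are treated here as parallel first-order reactions for simplicity (the paper describes a consecutive scheme), and the pre-exponential factors and hydrogen-ion power-law exponent are placeholders, not values from the study.

```python
import numpy as np

R = 8.314  # J mol-1 K-1

def remaining_lignin(t_min, temp_k, h_conc,
                     fractions=(0.09, 0.75, 0.16),   # initial, bulk, residual (reported)
                     ea_kj=(96.5, 98.5, 40.8),       # reported activation energies, kJ/mol
                     pre_exp=(1e9, 1e9, 1e2),        # placeholder pre-exponential factors, 1/min
                     h_exponent=0.5):                # placeholder power-law order in [H+]
    """Fraction of original lignin remaining, assuming first-order removal of each
    pool with Arrhenius rate constants and a [H+]^n hydrogen-ion factor."""
    frac = 0.0
    for f, ea, a in zip(fractions, ea_kj, pre_exp):
        k = a * np.exp(-ea * 1e3 / (R * temp_k)) * h_conc ** h_exponent  # 1/min
        frac += f * np.exp(-k * t_min)
    return frac

print(remaining_lignin(t_min=60.0, temp_k=453.15, h_conc=1e-3))
```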
Time-series Oxygen-18 Precipitation Isoscapes for Canada and the Northern United States
NASA Astrophysics Data System (ADS)
Delavau, Carly J.; Chun, Kwok P.; Stadnyk, Tricia A.; Birks, S. Jean; Welker, Jeffrey M.
2014-05-01
Understanding of the present and past hydrological cycle, from the watershed to the regional scale, can be greatly enhanced using water isotopes (δ18O and δ2H), displayed today as isoscapes. The development of water isoscapes has both hydrological and ecological applications, such as ground water recharge and food web ecology, and can provide critical information when observations are not available due to spatial and temporal gaps in sampling and data networks. This study focuses on the creation of δ18O precipitation (δ18Oppt) isoscapes at a monthly temporal frequency across Canada and the northern United States (US) utilizing CNIP (Canadian Network for Isotopes in Precipitation) and USNIP (United States Network for Isotopes in Precipitation) measurements. Multiple linear stepwise regressions of CNIP and USNIP observations alongside NARR (North American Regional Reanalysis) climatological variables, teleconnection indices, and geographic indicators are utilized to create empirical models that predict the δ18O of monthly precipitation across Canada and the northern US. Pooling information from nearby locations within a region can be useful due to the similarity of processes and mechanisms controlling the variability of δ18O. We expect similarity in the controls on isotopic composition to strengthen the correlation between δ18Oppt and predictor variables, resulting in model simulation improvements. For this reason, three different regionalization approaches are used to separate the study domain into 'isotope zones' to explore the effect of regionalization on model performance. This methodology results in 15 empirical models, five within each regionalization. A split-sample calibration and validation approach is employed for model development, and parameter selection is based on demonstrated improvement of the Akaike Information Criterion (AIC). Simulation results indicate the empirical models are generally able to capture the overall monthly variability in δ18Oppt. For the three regionalizations, average adjusted-R2 and RMSE (weighted to the number of observations within each isotope zone) range from 0.70 - 0.72 and 2.76 - 2.91, respectively, indicating that on average the different spatial groupings perform comparably. Validation weighted R2 and RMSE show a larger spread between models and poorer performance, ranging from 0.45 - 0.59 and 3.28 - 3.39, respectively. Additional evaluation of simulated δ18Oppt at each station and inter/intra-annually is conducted to evaluate model performance over various space and time scales. The stepwise-regression-derived parameterizations indicate the significance of precipitable water content and latitude as predictor variables for all regionalizations. Long-term (1981-2010) annual average δ18Oppt isoscapes are produced for Canada and the northern US, highlighting the differences between regionalization approaches. Maps of 95% confidence intervals are generated to provide an estimate of the uncertainty associated with long-term δ18Oppt simulations. This is the first time-series empirical modelling of δ18Oppt for Canada utilizing CNIP data, as well as the first modelling collaboration between the CNIP and USNIP networks. This study is the initial step towards empirically derived time-series δ18Oppt for use in iso-hydrological modelling studies.
Methods and results from this research are equally applicable to ecology and forensics as the simulated δ18Oppt isoscapes provide the primary oxygen source for many plants and foodwebs at refined temporal and spatial scales across Canada and the northern US.
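A minimal sketch of AIC-guided forward stepwise selection for a monthly δ18Oppt regression, using synthetic stand-ins for the NARR, teleconnection, and geographic predictors; the study's actual predictor set, regionalization, and split-sample scheme are not reproduced, and the predictor names below are illustrative only.

```python
import numpy as np

def aic_ols(X, y):
    """AIC of an ordinary least-squares fit with Gaussian errors."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * (k + 1)

def forward_stepwise(X, y, names):
    selected, remaining = [0], list(range(1, X.shape[1]))  # column 0 = intercept
    best = aic_ols(X[:, selected], y)
    improved = True
    while improved and remaining:
        improved = False
        aic, j = min((aic_ols(X[:, selected + [j]], y), j) for j in remaining)
        if aic < best:                      # keep a predictor only if AIC improves
            best, improved = aic, True
            selected.append(j); remaining.remove(j)
    return [names[j] for j in selected[1:]], best

rng = np.random.default_rng(3)
n = 300
cols = ["intercept", "precipitable_water", "latitude", "temperature", "NAO_index"]
X = np.column_stack([np.ones(n)] + [rng.normal(size=n) for _ in cols[1:]])
y = -8 + 2.5 * X[:, 1] - 1.5 * X[:, 2] + rng.normal(scale=1.0, size=n)
print(forward_stepwise(X, y, cols))
```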
Stakeholder validation of a model of readiness for transition to adult care.
Schwartz, Lisa A; Brumley, Lauren D; Tuchman, Lisa K; Barakat, Lamia P; Hobbie, Wendy L; Ginsberg, Jill P; Daniel, Lauren C; Kazak, Anne E; Bevans, Katherine; Deatrick, Janet A
2013-10-01
That too few youth with special health care needs make the transition to adult-oriented health care successfully may be due, in part, to lack of readiness to transfer care. There is a lack of theoretical models to guide development and implementation of evidence-based guidelines, assessments, and interventions to improve transition readiness. To further validate the Social-ecological Model of Adolescent and Young Adult Readiness to Transition (SMART) via feedback from stakeholders (patients, parents, and providers) from a medically diverse population in need of life-long follow-up care, survivors of childhood cancer. Mixed-methods participatory research design. A large Mid-Atlantic children's hospital. Adolescent and young adult survivors of childhood cancer (n = 14), parents (n = 18), and pediatric providers (n = 10). Patients and parents participated in focus groups; providers participated in individual semi-structured interviews. Validity of SMART was assessed 3 ways: (1) ratings on importance of SMART components for transition readiness using a 5-point scale (0-4; ratings >2 support validity), (2) nominations of 3 "most important" components, and (3) directed content analysis of focus group/interview transcripts. Qualitative data supported the validity of SMART, with minor modifications to definitions of components. Quantitative ratings met criteria for validity; stakeholders endorsed all components of SMART as important for transition. No additional SMART variables were suggested by stakeholders and the "most important" components varied by stakeholders, thus supporting the comprehensiveness of SMART and need to involve multiple perspectives. SMART represents a comprehensive and empirically validated framework for transition research and program planning, supported by survivors of childhood cancer, parents, and pediatric providers. Future research should validate SMART among other populations with special health care needs.
Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R.; McDaniel, Reuben R.
2013-01-01
Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level aggregated items scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organization, has multiple dimensions and is more complex than is being traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them. PMID:24349771
Anderson, Ruth A; Plowman, Donde; Corazzini, Kirsten; Hsieh, Pi-Ching; Su, Hui Fang; Landerman, Lawrence R; McDaniel, Reuben R
2013-01-01
Objectives. To (1) describe participation in decision-making as a systems-level property of complex adaptive systems and (2) present empirical evidence of reliability and validity of a corresponding measure. Method. Study 1 was a mail survey of a single respondent (administrators or directors of nursing) in each of 197 nursing homes. Study 2 was a field study using random, proportionally stratified sampling procedure that included 195 organizations with 3,968 respondents. Analysis. In Study 1, we analyzed the data to reduce the number of scale items and establish initial reliability and validity. In Study 2, we strengthened the psychometric test using a large sample. Results. Results demonstrated validity and reliability of the participation in decision-making instrument (PDMI) while measuring participation of workers in two distinct job categories (RNs and CNAs). We established reliability at the organizational level aggregated items scores. We established validity of the multidimensional properties using convergent and discriminant validity and confirmatory factor analysis. Conclusions. Participation in decision making, when modeled as a systems-level property of organization, has multiple dimensions and is more complex than is being traditionally measured. Managers can use this model to form decision teams that maximize the depth and breadth of expertise needed and to foster connection among them.
Thermographic imaging of the space shuttle during re-entry using a near-infrared sensor
NASA Astrophysics Data System (ADS)
Zalameda, Joseph N.; Horvath, Thomas J.; Kerns, Robbie V.; Burke, Eric R.; Taylor, Jeff C.; Spisz, Tom; Gibson, David M.; Shea, Edward J.; Mercer, C. David; Schwartz, Richard J.; Tack, Steve; Bush, Brett C.; Dantowitz, Ronald F.; Kozubal, Marek J.
2012-06-01
High resolution calibrated near infrared (NIR) imagery of the Space Shuttle Orbiter was obtained during hypervelocity atmospheric re-entry of the STS-119, STS-125, STS-128, STS-131, STS-132, STS-133, and STS-134 missions. This data has provided information on the distribution of surface temperature and the state of the airflow over the windward surface of the Orbiter during descent. The thermal imagery complemented data collected with onboard surface thermocouple instrumentation. The spatially resolved global thermal measurements made during the Orbiter's hypersonic re-entry will provide critical flight data for reducing the uncertainty associated with present day ground-to-flight extrapolation techniques and current state-of-the-art empirical boundary-layer transition or turbulent heating prediction methods. Laminar and turbulent flight data is critical for the validation of physics-based, semi-empirical boundary-layer transition prediction methods as well as stimulating the validation of laminar numerical chemistry models and the development of turbulence models supporting NASA's next-generation spacecraft. In this paper we provide details of the NIR imaging system used on both air and land-based imaging assets. The paper will discuss calibrations performed on the NIR imaging systems that permitted conversion of captured radiant intensity (counts) to temperature values. Image processing techniques are presented to analyze the NIR data for vignetting distortion, best resolution, and image sharpness.
Construct Validation of Wenger's Support Network Typology.
Szabo, Agnes; Stephens, Christine; Allen, Joanne; Alpass, Fiona
2016-10-07
The study aimed to validate Wenger's empirically derived support network typology of responses to the Practitioner Assessment of Network Type (PANT) in an older New Zealander population. The configuration of network types was tested across ethnic groups and in the total sample. Data (N = 872, M_age = 67 years, SD_age = 1.56 years) from the 2006 wave of the New Zealand Health, Work and Retirement study were analyzed using latent profile analysis. In addition, demographic differences among the emerging profiles were tested. Competing models were evaluated based on a range of fit criteria, which supported a five-profile solution. The "locally integrated," "community-focused," "local self-contained," "private-restricted," and "friend- and family-dependent" network types were identified as latent profiles underlying the data. There were no differences between Māori and non-Māori in final profile configurations. However, Māori were more likely to report integrated network types. Findings confirm the validity of Wenger's network types. However, the level to which participants endorse accessibility of family, frequency of interactions, and community engagement can be influenced by sample and contextual characteristics. Future research using the PANT items should empirically verify and derive the social support network types, rather than use a predefined scoring system. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Development and validation of a music performance anxiety inventory for gifted adolescent musicians.
Osborne, Margaret S; Kenny, Dianna T
2005-01-01
Music performance anxiety (MPA) is a distressing experience for musicians of all ages, yet the empirical investigation of MPA in adolescents has received little attention to date. No measures specifically targeting MPA in adolescents have been empirically validated. This article presents findings of an initial study into the psychometric properties and validation of the Music Performance Anxiety Inventory for Adolescents (MPAI-A), a new self-report measure of MPA for this group. Data from 381 elite young musicians aged 12-19 years was used to investigate the factor structure, internal reliability, construct and divergent validity of the MPAI-A. Cronbach's alpha for the full measure was .91. Factor analysis identified three factors, which together accounted for 53% of the variance. Construct validity was demonstrated by significant positive relationships with social phobia (measured using the Social Phobia Anxiety Inventory [Beidel, D. C., Turner, S. M., & Morris, T. L. (1995). A new inventory to assess childhood social anxiety and phobia: The Social Phobia and Anxiety Inventory for Children. Psychological Assessment, 7(1), 73-79; Beidel, D. C., Turner, S. M., & Morris, T. L. (1998). Social Phobia and Anxiety Inventory for Children (SPAI-C). North Tonawanda, NY: Multi-Health Systems Inc.]) and trait anxiety (measured using the State Trait Anxiety Inventory [Spielberger, C. D. (1983). State-Trait Anxiety Inventory STAI (Form Y). Palo Alto, CA: Consulting Psychologists Press, Inc.]). The MPAI-A demonstrated convergent validity by a moderate to strong positive correlation with an adult measure of MPA. Discriminant validity was established by a weaker positive relationship with depression, and no relationship with externalizing behavior problems. It is hoped that the MPAI-A, as the first empirically validated measure of adolescent musicians' performance anxiety, will enhance and promote phenomenological and treatment research in this area.
NASA Astrophysics Data System (ADS)
De Crop, Wannes; Ryken, Nick; Tomma Okuonzia, Judith; Van Ranst, Eric; Baert, Geert; Boeckx, Pascal; Verschuren, Dirk; Verdoodt, Ann
2017-04-01
Population pressure results in the conversion of natural vegetation to cropland within the western Ugandan crater lake watersheds. These watersheds, however, are particularly prone to soil degradation and erosion because of the high rainfall intensity and steep topography. Increased soil erosion losses expose the aquatic ecosystems to excessive nutrient loading. In this study, the Katinda crater lake watershed, which is already heavily impacted by agricultural land use, was selected for an explorative study of its (top)soil characteristics - given the general lack of data on soils within these watersheds - as well as an assessment of soil erosion risks. Using group discussions and structured interviews, the local land users' perceptions of land use, soil quality, soil erosion and lake ecology were compiled. Datasets on rainfall, topsoil characteristics, slope gradient and length, and land use were collected, and a RUSLE erosion model was subsequently run. Results from this empirical erosion modeling approach were validated against soil erosion estimates based on 137Cs measurements.
Aircraft High-Lift Aerodynamic Analysis Using a Surface-Vorticity Solver
NASA Technical Reports Server (NTRS)
Olson, Erik D.; Albertson, Cindy W.
2016-01-01
This study extends an existing semi-empirical approach to high-lift analysis by examining its effectiveness for use with a three-dimensional aerodynamic analysis method. The aircraft high-lift geometry is modeled in Vehicle Sketch Pad (OpenVSP) using a newly-developed set of techniques for building a three-dimensional model of the high-lift geometry, and for controlling flap deflections using scripted parameter linking. Analysis of the low-speed aerodynamics is performed in FlightStream, a novel surface-vorticity solver that is expected to be substantially more robust and stable compared to pressure-based potential-flow solvers and less sensitive to surface perturbations. The calculated lift curve and drag polar are modified by an empirical lift-effectiveness factor that takes into account the effects of viscosity that are not captured in the potential-flow solution. Analysis results are validated against wind-tunnel data for The Energy-Efficient Transport AR12 low-speed wind-tunnel model, a 12-foot, full-span aircraft configuration with a supercritical wing, full-span slats, and part-span double-slotted flaps.
Wang, Han-I; Aas, Eline; Howell, Debra; Roman, Eve; Patmore, Russell; Jack, Andrew; Smith, Alexandra
2014-03-01
Acute myeloid leukemia (AML) can be diagnosed at any age, and treatment, which can be given with supportive and/or curative intent, is considered expensive compared with that for other cancers. Despite this, no long-term predictive models have been developed for AML, mainly because of the complexities associated with this disease. The objective of the current study was to develop a model (based on a UK cohort) to predict cost and life expectancy at a population level. The model developed in this study combined a decision tree with several Markov models to reflect the complexity of the prognostic factors and treatments of AML. The model was simulated with a cycle length of 1 month for a time period of 5 years, and further simulated until age 100 years or death. Results were compared for two age groups and five different initial treatment intents and responses. Transition probabilities, life expectancies, and costs were derived from a UK population-based specialist registry, the Haematological Malignancy Research Network (www.hmrn.org). Overall, expected 5-year medical costs and life expectancy ranged from £8,170 to £81,636 and 3.03 to 34.74 months, respectively. The economic and health outcomes varied with initial treatment intent, age at diagnosis, trial participation, and study time horizon. The model was validated using face, internal, and external validation methods. The results show that the model captured more than 90% of the empirical costs, and it demonstrated good fit with the empirical overall survival. Costs and life expectancy of AML varied with patient characteristics and initial treatment intent. The robust AML model developed in this study could be used to evaluate new diagnostic tools/treatments, as well as enable policy makers to make informed decisions. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
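A hedged, highly simplified sketch of the decision-tree-plus-Markov structure described above: a monthly-cycle Markov cohort with placeholder health states, transition probabilities, and costs (none taken from the study), accumulating expected cost and life-months over five years.

```python
import numpy as np

# Placeholder states and parameters; not the study's actual states or values.
states = ["remission", "relapse", "dead"]
P = np.array([[0.96, 0.03, 0.01],        # monthly transition probabilities
              [0.05, 0.90, 0.05],
              [0.00, 0.00, 1.00]])
cost_per_month = np.array([1500.0, 4000.0, 0.0])   # placeholder GBP per state

def run_cohort(p_init, cycles=60):
    dist, total_cost, life_months = np.array(p_init, float), 0.0, 0.0
    for _ in range(cycles):
        total_cost += dist @ cost_per_month   # expected cost this cycle
        life_months += dist[:2].sum()         # probability mass in alive states
        dist = dist @ P                       # advance the cohort one month
    return total_cost, life_months

print(run_cohort([1.0, 0.0, 0.0]))
```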
Empirical forecast of the quiet time Ionosphere over Europe: a comparative model investigation
NASA Astrophysics Data System (ADS)
Badeke, R.; Borries, C.; Hoque, M. M.; Minkwitz, D.
2016-12-01
The purpose of this work is to find the best empirical model for a reliable 24-hour forecast of the ionospheric Total Electron Content (TEC) over Europe under geomagnetically quiet conditions. It will be used as an improved reference for the description of storm-induced perturbations in the ionosphere. The observational TEC data were obtained from the International GNSS Service (IGS). Four different forecast model approaches were validated against observational IGS TEC data: a 27-day median model (27d), a Fourier Analysis (FA) approach, the Neustrelitz TEC global model (NTCM-GL) and NeQuick 2. Two years were investigated depending on the solar activity: 2015 (high activity) and 2008 (low activity). The time periods of magnetic storms, which were identified with the Dst index, were excluded from the validation. For both years the two models 27d and FA show better results than NTCM-GL and NeQuick 2. For example, for the year 2015 and 15° E / 50° N, the difference between the IGS data and the 27d model prediction shows a mean value of 0.413 TEC units (TECU), a standard deviation of 3.307 TECU and a correlation coefficient of 0.921, while NTCM-GL and NeQuick 2 have mean differences of around 2-3 TECU, standard deviations of 4.5-5 TECU and correlation coefficients below 0.85. Since the 27d and FA predictions strongly depend on observational data, the results confirm that data-driven forecasts perform better than the climatological models NTCM-GL and NeQuick 2. However, the benefit of NTCM-GL and NeQuick 2 is their lower data dependency: they do not lose precision when observational IGS TEC data are unavailable. Hence a combination of the different models, applied according to data availability, is recommended.
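A minimal sketch of the 27-day-median style of forecast and the validation statistics quoted above (mean difference, standard deviation, correlation), run on synthetic hourly TEC; the IGS data handling and storm-day exclusion are not reproduced.

```python
import numpy as np

def median_27d_forecast(tec_hourly):
    """Forecast each hour of the next day as the median of the same hour
    over the preceding 27 days.  tec_hourly: array (days, 24)."""
    return np.median(tec_hourly[-27:], axis=0)

def validation_stats(observed, predicted):
    diff = observed - predicted
    corr = np.corrcoef(observed, predicted)[0, 1]
    return diff.mean(), diff.std(), corr

rng = np.random.default_rng(4)
hours = np.arange(24)
diurnal = 15 + 10 * np.sin(np.pi * hours / 24)          # toy quiet-time TEC, TECU
tec = diurnal + rng.normal(scale=2.0, size=(40, 24))
forecast = median_27d_forecast(tec[:-1])
print(validation_stats(tec[-1], forecast))
```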
Validation of the Mindful Coping Scale
ERIC Educational Resources Information Center
Tharaldsen, Kjersti B.; Bru, Edvin
2011-01-01
The aim of this research is to develop and validate a self-report measure of mindfulness and coping, the mindful coping scale (MCS). Dimensions of mindful coping were theoretically deduced from mindfulness theory and coping theory. The MCS was empirically evaluated by use of factor analyses, reliability testing and nomological network validation.…
Initial Development and Validation of the Global Citizenship Scale
ERIC Educational Resources Information Center
Morais, Duarte B.; Ogden, Anthony C.
2011-01-01
The purpose of this article is to report on the initial development of a theoretically grounded and empirically validated scale to measure global citizenship. The methodology employed is multi-faceted, including two expert face validity trials, extensive exploratory and confirmatory factor analyses with multiple datasets, and a series of three…
An Empirical Examination of Validity in Evaluation
ERIC Educational Resources Information Center
Peck, Laura R.; Kim, Yushim; Lucio, Joanna
2012-01-01
This study addresses validity issues in evaluation that stem from Ernest R. House's book, "Evaluating With Validity". The authors examine "American Journal of Evaluation" articles from 1980 to 2010 that report the results of policy and program evaluations. The authors classify these evaluations according to House's "major approaches" typology…
Empirical Validation and Application of the Computing Attitudes Survey
ERIC Educational Resources Information Center
Dorn, Brian; Elliott Tew, Allison
2015-01-01
Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…
Porto, Paolo; Walling, Des E
2012-10-01
Information on rates of soil loss from agricultural land is a key requirement for assessing both on-site soil degradation and potential off-site sediment problems. Many models and prediction procedures have been developed to estimate rates of soil loss and soil redistribution as a function of the local topography, hydrometeorology, soil type and land management, but empirical data remain essential for validating and calibrating such models and prediction procedures. Direct measurements using erosion plots are, however, costly and the results obtained relate to a small enclosed area, which may not be representative of the wider landscape. In recent years, the use of fallout radionuclides and more particularly caesium-137 ((137)Cs) and excess lead-210 ((210)Pb(ex)) has been shown to provide a very effective means of documenting rates of soil loss and soil and sediment redistribution in the landscape. Several of the assumptions associated with the theoretical conversion models used with such measurements remain essentially unvalidated. This contribution describes the results of a measurement programme involving five experimental plots located in southern Italy, aimed at validating several of the basic assumptions commonly associated with the use of mass balance models for estimating rates of soil redistribution on cultivated land from (137)Cs and (210)Pb(ex) measurements. Overall, the results confirm the general validity of these assumptions and the importance of taking account of the fate of fresh fallout. However, further work is required to validate the conversion models employed in using fallout radionuclide measurements to document soil redistribution in the landscape and this could usefully direct attention to different environments and to the validation of the final estimates of soil redistribution rate as well as the assumptions of the models employed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Gupta, Punkaj; Rettiganti, Mallikarjuna; Gossett, Jeffrey M; Daufeldt, Jennifer; Rice, Tom B; Wetzel, Randall C
2018-01-01
To create a novel tool to predict favorable neurologic outcomes during ICU stay among children with critical illness. Logistic regression models using adaptive lasso methodology were used to identify independent factors associated with favorable neurologic outcomes. A mixed effects logistic regression model was used to create the final prediction model including all predictors selected from the lasso model. Model validation was performed using a 10-fold internal cross-validation approach. Virtual Pediatric Systems (VPS, LLC, Los Angeles, CA) database. Patients less than 18 years old admitted to one of the participating ICUs in the Virtual Pediatric Systems database were included (2009-2015). None. A total of 160,570 patients from 90 hospitals qualified for inclusion. Of these, 1,675 patients (1.04%) had a decline in the Pediatric Cerebral Performance Category scale of at least 2 between ICU admission and ICU discharge (unfavorable neurologic outcome). The independent factors associated with unfavorable neurologic outcome included higher weight at ICU admission, higher Pediatric Index of Mortality-2 score at ICU admission, cardiac arrest, stroke, seizures, head/nonhead trauma, use of conventional mechanical ventilation and high-frequency oscillatory ventilation, prolonged length of ICU stay, and prolonged use of mechanical ventilation. The presence of a chromosomal anomaly, cardiac surgery, and utilization of nitric oxide were associated with favorable neurologic outcome. The final online prediction tool can be accessed at https://soipredictiontool.shinyapps.io/GNOScore/. Our model predicted 139,688 patients with favorable neurologic outcomes in an internal validation sample, compared with an observed 139,591 patients with favorable neurologic outcomes. The area under the receiver operating characteristic curve for the validation model was 0.90. This proposed prediction tool combines 20 risk factors into one probability to predict favorable neurologic outcome during ICU stay among children with critical illness. Future studies should seek external validation and improved discrimination of this prediction tool.
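A hedged sketch of the general modeling approach described above (L1-penalized logistic regression with internal cross-validation) on synthetic stand-ins for the risk factors; the actual Virtual Pediatric Systems predictors, mixed-effects structure, and coefficients are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n, p = 5000, 20                                      # 20 candidate risk factors
X = rng.normal(size=(n, p))
logit = -3.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))        # rare unfavorable outcome

model = LogisticRegressionCV(penalty="l1", solver="saga", cv=10, Cs=10,
                             scoring="roc_auc", max_iter=5000)
model.fit(X[:4000], y[:4000])
print("selected:", np.flatnonzero(model.coef_[0]))    # lasso drops weak predictors
print("AUC:", roc_auc_score(y[4000:], model.predict_proba(X[4000:])[:, 1]))
```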
Vartanian, L R
2000-01-01
Adolescents are thought to believe that others are always watching and evaluating them, and that they are special and unique, labeled the imaginary audience and the personal fable, respectively. These two constructs have been fixtures in textbooks on adolescent development, and have been offered as explanations for self-consciousness and risk-taking. However, their characterization of adolescent social cognition as biased has not been supported empirically, the measures used to assess them lack construct validity, and alternative explanations for both ideation patterns have not been explored. Despite these issues, the imaginary audience and personal fable constructs continue to be considered prototypical representations of social cognitive processes during adolescence. This paper (1) reviews theoretical models of the imaginary audience and the personal fable, and the empirical data pertaining to each model, (2) highlights problems surrounding the two most commonly used measures, and (3) outlines directions for future research, so that a better understanding of the imaginary audience and personal fable, and their roles in adolescent development, may be achieved.
ERIC Educational Resources Information Center
Schreibman, Laura; Dawson, Geraldine; Stahmer, Aubyn C.; Landa, Rebecca; Rogers, Sally J.; McGee, Gail G.; Kasari, Connie; Ingersoll, Brooke; Kaiser, Ann P.; Bruinsma, Yvonne; McNerney, Erin; Wetherby, Amy; Halladay, Alycia
2015-01-01
Earlier autism diagnosis, the importance of early intervention, and development of specific interventions for young children have contributed to the emergence of similar, empirically supported, autism interventions that represent the merging of applied behavioral and developmental sciences. "Naturalistic Developmental Behavioral Interventions…
Hybrid and conventional hydrogen engine vehicles that meet EZEV emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aceves, S.M.; Smith, J.R.
In this paper, a time-dependent engine model is used for predicting hydrogen engine efficiency and emissions. The model uses basic thermodynamic equations for the compression and expansion processes, along with an empirical correlation for heat transfer, to predict engine indicated efficiency. A friction correlation and a supercharger/turbocharger model are then used to calculate brake thermal efficiency. The model is validated with many experimental points obtained in a recent evaluation of a hydrogen research engine. The validated engine model is then used to calculate fuel economy and emissions for three hydrogen-fueled vehicles: a conventional, a parallel hybrid, and a series hybrid. All vehicles use liquid hydrogen as a fuel. The hybrid vehicles use a flywheel for energy storage. Comparable ultracapacitor or battery energy storage performance would give similar results. This paper analyzes the engine and flywheel sizing requirements for obtaining a desired level of performance. The results indicate that hydrogen lean-burn spark-ignited engines can provide a high fuel economy and Equivalent Zero Emission Vehicle (EZEV) levels in the three vehicle configurations being analyzed.
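For readers unfamiliar with the efficiency bookkeeping described above, the Python sketch below shows how an indicated efficiency corrected for heat transfer and friction yields a brake thermal efficiency. It is a simplified placeholder, not the authors' time-dependent model or correlations, and all numbers are assumed.

```python
# Minimal illustrative sketch (not the authors' model): an ideal Otto-cycle
# indicated efficiency corrected by assumed heat-transfer and friction terms
# to give a rough brake thermal efficiency. All values are placeholders.
def brake_thermal_efficiency(compression_ratio, gamma=1.35,
                             heat_loss_fraction=0.12,
                             imep_kpa=900.0, fmep_kpa=120.0):
    # Ideal Otto-cycle indicated efficiency
    eta_ideal = 1.0 - compression_ratio ** (1.0 - gamma)
    # Crude correction for in-cylinder heat transfer (empirical placeholder)
    eta_indicated = eta_ideal * (1.0 - heat_loss_fraction)
    # Mechanical efficiency from mean effective pressures (friction placeholder)
    eta_mech = 1.0 - fmep_kpa / imep_kpa
    return eta_indicated * eta_mech

print(f"Brake thermal efficiency ~ {brake_thermal_efficiency(14.0):.2%}")
```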
Simulation Modeling for Off-Nominal Conditions - Where Are We Today?
NASA Technical Reports Server (NTRS)
Shah, Gautam H.; Foster, John V.; Cunningham, Kevin
2010-01-01
The modeling of aircraft flight characteristics in off-nominal or otherwise adverse conditions has become increasingly important for simulation in the loss-of-control arena. Adverse conditions include environmentally-induced upsets such as wind shear or wake vortex encounters; off-nominal flight conditions, such as stall or departure; on-board systems failures; and structural failures or aircraft damage. Spirited discussions in the research community are taking place as to the fidelity and data requirements for adequate representation of vehicle dynamics under such conditions for a host of research areas, including recovery training, flight controls development, trajectory guidance/planning, and envelope limiting. The increasing need for multiple sources of data (empirical, computational, experimental) for modeling across a larger flight envelope leads to challenges in developing methods of appropriately applying or combining such data, particularly in a dynamic flight environment with a physically and/or aerodynamically asymmetric vehicle. Traditional simplifications and symmetry assumptions in current modeling methodology may no longer be valid. Furthermore, once modeled, challenges abound in the validation of flight dynamics characteristics in adverse flight regimes.
The validation of a human force model to predict dynamic forces resulting from multi-joint motions
NASA Technical Reports Server (NTRS)
Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.
1992-01-01
The development and validation of a dynamic strength model for humans are examined. This model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated joint torque equations were then used to compute forces resulting from a composite motion, in this case, a ratchet wrench push and pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated joint measures. Calculated t values comparing model versus measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between predicted and measured values of 0.72 to 0.80.
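The regression step described above can be sketched as follows. This Python example fits a second-degree polynomial surface for torque over synthetic position and velocity data; it is purely illustrative of the least squares reduction, not the study's actual data or equations, and a two-variable quadratic is used here only for compactness.

```python
# Illustrative sketch: fit a second-degree polynomial giving joint torque as a
# function of joint angle and angular velocity by ordinary least squares.
# Data are synthetic placeholders, not the study's measurements.
import numpy as np

rng = np.random.default_rng(1)
angle = rng.uniform(0.0, 2.0, 200)        # rad (synthetic)
velocity = rng.uniform(-3.0, 3.0, 200)    # rad/s (synthetic)
torque = (40 - 5 * angle - 2 * velocity + 1.5 * angle**2
          + 0.3 * velocity**2 + rng.normal(0, 1.0, 200))

# Design matrix with second-degree terms in position and velocity
A = np.column_stack([np.ones_like(angle), angle, velocity,
                     angle**2, velocity**2, angle * velocity])
coeffs, *_ = np.linalg.lstsq(A, torque, rcond=None)
predicted = A @ coeffs
r = np.corrcoef(predicted, torque)[0, 1]
print("coefficients:", np.round(coeffs, 2), " correlation:", round(r, 3))
```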
Graham, Jesse; Nosek, Brian A.; Haidt, Jonathan; Iyer, Ravi; Koleva, Spassena; Ditto, Peter H.
2010-01-01
The moral domain is broader than the empathy and justice concerns assessed by existing measures of moral competence, and it is not just a subset of the values assessed by value inventories. To fill the need for reliable and theoretically-grounded measurement of the full range of moral concerns, we developed the Moral Foundations Questionnaire (MFQ) based on a theoretical model of five universally available (but variably developed) sets of moral intuitions: Harm/care, Fairness/reciprocity, Ingroup/loyalty, Authority/respect, and Purity/sanctity. We present evidence for the internal and external validity of the scale and the model, and in doing so present new findings about morality: 1. Comparative model fitting of confirmatory factor analyses provides empirical justification for a five-factor structure of moral concerns. 2. Convergent/discriminant validity evidence suggests that moral concerns predict personality features and social group attitudes not previously considered morally relevant. 3. We establish pragmatic validity of the measure in providing new knowledge and research opportunities concerning demographic and cultural differences in moral intuitions. These analyses provide evidence for the usefulness of Moral Foundations Theory in simultaneously increasing the scope and sharpening the resolution of psychological views of morality. PMID:21244182
Active Inference and Learning in the Cerebellum.
Friston, Karl; Herreros, Ivan
2016-09-01
This letter offers a computational account of Pavlovian conditioning in the cerebellum based on active inference and predictive coding. Using eyeblink conditioning as a canonical paradigm, we formulate a minimal generative model that can account for spontaneous blinking, startle responses, and (delay or trace) conditioning. We then establish the face validity of the model using simulated responses to unconditioned and conditioned stimuli to reproduce the sorts of behavior that are observed empirically. The scheme's anatomical validity is then addressed by associating variables in the predictive coding scheme with nuclei and neuronal populations to match the (extrinsic and intrinsic) connectivity of the cerebellar (eyeblink conditioning) system. Finally, we try to establish predictive validity by reproducing selective failures of delay conditioning, trace conditioning, and extinction using (simulated and reversible) focal lesions. Although rather metaphorical, the ensuing scheme can account for a remarkable range of anatomical and neurophysiological aspects of cerebellar circuitry, and the specificity of lesion-deficit mappings that have been established experimentally. From a computational perspective, this work shows how conditioning or learning can be formulated in terms of minimizing variational free energy (or maximizing Bayesian model evidence) using exactly the same principles that underlie predictive coding in perception.
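As a loose illustration of the computational principle invoked here, and not of the authors' cerebellar generative model, the Python sketch below performs gradient descent on a simple free-energy-like objective for a single latent state, the basic operation that predictive coding schemes iterate at every level. The mapping, priors, and variances are arbitrary assumptions.

```python
# Highly simplified sketch: gradient descent on a prediction-error-based free
# energy F = (y - g(x))^2 / (2*sig_y2) + (x - mu)^2 / (2*sig_p2) for one latent
# state. Purely illustrative; all values are placeholders.
import numpy as np

def g(x):                      # likelihood mapping: latent state -> prediction
    return np.tanh(x)

y, mu_prior = 0.8, 0.0         # observation and prior mean (placeholders)
sig_y2, sig_p2 = 0.1, 1.0      # observation and prior variances (placeholders)

x = mu_prior                   # posterior estimate, updated by gradient descent
for _ in range(200):
    eps_y = (y - g(x)) / sig_y2            # precision-weighted sensory error
    eps_p = (x - mu_prior) / sig_p2        # precision-weighted prior error
    dFdx = -eps_y * (1.0 - np.tanh(x)**2) + eps_p   # gradient of free energy
    x -= 0.05 * dFdx
print(f"posterior estimate of latent state: {x:.3f}")
```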
Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements
NASA Astrophysics Data System (ADS)
Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.
2012-12-01
The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties arising from model mechanisms, model inputs, parameterization schemes, and scaling issues in regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) estimates with quantified certainty is necessary but difficult. As a result, it is indispensable to develop validation methods to quantitatively assess the accuracy and error sources of regional RS_ET estimates. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate for validating RS_ET at diverse resolutions and different time scales. An independent RS_ET validation using this method over the Hai River Basin, China in 2002-2009 is presented as a case study. Validation at the basin scale showed good agreement between the 1 km annual RS_ET and the validation data, such as the water-balance evapotranspiration, MODIS evapotranspiration products, precipitation, and land use types. Validation at the local scale also gave good results for monthly and daily RS_ET at 30 m and 1 km resolutions, compared with the multi-scale evapotranspiration measurements from eddy covariance (EC) and large aperture scintillometer (LAS) systems, respectively, using a footprint model over three typical landscapes. Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign is carried out in the middle reaches of the Heihe River Basin, China in 2012. Flux measurements from an observation matrix composed of 22 EC and 4 LAS instruments are used to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith-based model, a Priestley-Taylor-based model, and a complementary-relationship-based model, are used to perform an intercomparison. All the results from the two cases of RS_ET validation show that the proposed validation methods are reasonable and feasible.
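One of the model families named above, the Priestley-Taylor approach, can be written compactly. The Python sketch below computes latent heat flux from net radiation, soil heat flux, and air temperature; the coefficient and inputs are typical assumed values for illustration, not data from the study.

```python
# Illustrative sketch of the Priestley-Taylor family of models mentioned above:
# LE = alpha * s / (s + gamma) * (Rn - G). Inputs are placeholder values.
import math

def priestley_taylor_le(rn, g_soil, t_air_c, alpha=1.26, gamma_kpa=0.066):
    # Slope of the saturation vapour pressure curve (kPa per degC), Tetens form
    es = 0.6108 * math.exp(17.27 * t_air_c / (t_air_c + 237.3))
    s = 4098.0 * es / (t_air_c + 237.3) ** 2
    return alpha * s / (s + gamma_kpa) * (rn - g_soil)   # W m-2

# Example: Rn = 500 W m-2, G = 50 W m-2, air temperature 25 degC
print(f"Priestley-Taylor latent heat flux ~ {priestley_taylor_le(500.0, 50.0, 25.0):.0f} W m-2")
```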
Psychiatric diagnosis – is it universal or relative to culture?
Canino, Glorisa; Alegría, Margarita
2009-01-01
Background There is little consensus on the extent to which psychiatric disorders or syndromes are universal or the extent to which they differ in their core definitions and constellations of symptoms as a result of cultural or contextual factors. This controversy continues due to the lack of biological markers, imprecise measurement, and the lack of a gold standard for validating most psychiatric conditions. Method Empirical studies were used to present evidence in favor of or against a universalist or relativistic view of child psychiatric disorders, using a model developed by Robins and Guze to determine the validity of psychiatric disorders. Results The prevalence of some of the most common specific disorders and syndromes, as well as their risk and protective factors, varies across cultures, yet comorbid patterns and response to treatments vary little across cultures. Cross-cultural longitudinal data on outcomes are equivocal. Conclusions The cross-cultural validity of child disorders may vary drastically depending on the disorder, but empirical evidence that attests to the cross-cultural validity of diagnostic criteria for each child disorder is lacking. There is a need for studies that investigate the extent to which gene–environment interactions are related to specific disorders across cultures. Clinicians are urged to consider culture and context in determining the way in which children’s psychopathology may be manifested, independent of their own views. Recommendations for the upcoming classificatory system are provided so that practical and theoretical considerations about how cultural and ethnic issues affect the assessment or treatment of specific disorders in children are addressed. PMID:18333929
Zhu, Yenan; Hsieh, Yee-Hsee; Dhingra, Rishi R; Dick, Thomas E; Jacono, Frank J; Galán, Roberto F
2013-02-01
Interactions between oscillators can be investigated with standard tools of time series analysis. However, these methods are insensitive to the directionality of the coupling, i.e., the asymmetry of the interactions. An elegant alternative was proposed by Rosenblum and collaborators [M. G. Rosenblum, L. Cimponeriu, A. Bezerianos, A. Patzak, and R. Mrowka, Phys. Rev. E 65, 041909 (2002); M. G. Rosenblum and A. S. Pikovsky, Phys. Rev. E 64, 045202 (2001)] which consists in fitting the empirical phases to a generic model of two weakly coupled phase oscillators. This allows one to obtain the interaction functions defining the coupling and its directionality. A limitation of this approach is that a solution always exists in the least-squares sense, even in the absence of coupling. To preclude spurious results, we propose a three-step protocol: (1) Determine if a statistical dependency exists in the data by evaluating the mutual information of the phases; (2) if so, compute the interaction functions of the oscillators; and (3) validate the empirical oscillator model by comparing the joint probability of the phases obtained from simulating the model with that of the empirical phases. We apply this protocol to a model of two coupled Stuart-Landau oscillators and show that it reliably detects genuine coupling. We also apply this protocol to investigate cardiorespiratory coupling in anesthetized rats. We observe reciprocal coupling between respiration and heartbeat and that the influence of respiration on the heartbeat is generally much stronger than vice versa. In addition, we find that the vagus nerve mediates coupling in both directions.
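A minimal sketch of step (2) of this protocol is given below in Python: two coupled phase oscillators are simulated, and the interaction function of the first is recovered by least-squares fitting of its phase increments to a low-order Fourier basis. Parameters and noise levels are arbitrary, and the mutual-information and joint-probability checks (steps 1 and 3) are omitted for brevity.

```python
# Illustrative sketch of fitting empirical phases to a coupled phase-oscillator
# model by least squares. All parameters are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(2)
dt, n = 0.01, 20000
w1, w2, eps = 1.0, 1.4, 0.15                      # natural frequencies, coupling
phi1, phi2 = np.zeros(n), np.zeros(n)
for k in range(n - 1):                            # simulate two coupled phase oscillators
    phi1[k + 1] = (phi1[k] + dt * (w1 + eps * np.sin(phi2[k] - phi1[k]))
                   + np.sqrt(dt) * 0.02 * rng.normal())
    phi2[k + 1] = phi2[k] + dt * w2 + np.sqrt(dt) * 0.02 * rng.normal()

dphi1 = np.diff(phi1) / dt                        # empirical phase velocity of oscillator 1
# Fourier basis up to first order in each phase and in the phase difference
B = np.column_stack([np.ones(n - 1),
                     np.sin(phi1[:-1]), np.cos(phi1[:-1]),
                     np.sin(phi2[:-1]), np.cos(phi2[:-1]),
                     np.sin(phi2[:-1] - phi1[:-1]), np.cos(phi2[:-1] - phi1[:-1])])
coef, *_ = np.linalg.lstsq(B, dphi1, rcond=None)
print("estimated omega_1:", round(coef[0], 3),
      " sin(phi2 - phi1) coupling:", round(coef[5], 3))
```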
Martin, Natasha K.; Skaathun, Britt; Vickerman, Peter; Stuart, David
2017-01-01
Background People who inject drugs (PWID) and HIV-infected men who have sex with men (MSM) are key risk groups for hepatitis C virus (HCV) transmission. Mathematical modeling studies can help elucidate what level and combination of prevention intervention scale-up is required to control or eliminate epidemics among these key populations. Methods We discuss the evidence surrounding HCV prevention interventions and provide an overview of the mathematical modeling literature projecting the impact of scaled-up HCV prevention among PWID and HIV-infected MSM. Results Harm reduction interventions such as opiate substitution therapy and needle and syringe programs are effective in reducing HCV incidence among PWID. Modeling and limited empirical data indicate HCV treatment could additionally be used for prevention. No studies have evaluated the effectiveness of behavior change interventions to reduce HCV incidence among MSM, but existing interventions to reduce HIV risk could be effective. Mathematical modeling and empirical data indicate that scale-up of harm reduction could reduce HCV transmission, but in isolation is unlikely to eliminate HCV among PWID. By contrast, elimination is possibly achievable through combination scale-up of harm reduction and HCV treatment. Similarly, among HIV-infected MSM, eliminating the emerging epidemics will likely require HCV treatment scale-up in combination with additional interventions to reduce HCV-related risk behaviors. Conclusions Elimination of HCV will likely require combination prevention efforts among both PWID and HIV-infected MSM populations. Further empirical research is required to validate HCV treatment as prevention among these populations, and to identify effective behavioral interventions to reduce HCV incidence among MSM. PMID:28534885
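As a purely schematic illustration of the kind of projection such modeling studies make, and not of any of the reviewed models, the Python sketch below iterates a one-compartment HCV prevalence equation among PWID in which harm reduction scales down the transmission rate and treatment adds a cure rate. Every parameter value is an arbitrary placeholder.

```python
# Schematic one-compartment sketch: chronic HCV prevalence i among PWID, with
# transmission reduced by harm reduction and removal by treatment ("cure") and
# population turnover. Not a published model; all parameters are placeholders.
def final_prevalence(beta=0.3, harm_reduction=0.5, treat_rate=0.1,
                     exit_rate=0.1, years=30, dt=0.01, prev0=0.4):
    i = prev0
    beta_eff = beta * (1.0 - harm_reduction)      # harm reduction lowers transmission
    for _ in range(int(years / dt)):
        di = beta_eff * (1.0 - i) * i - (treat_rate + exit_rate) * i
        i += dt * di
    return i

print(f"30-year prevalence, harm reduction + treatment: {final_prevalence():.1%}")
print(f"30-year prevalence, harm reduction only:        {final_prevalence(treat_rate=0.0):.1%}")
```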