Sample records for modelling approach applied

  1. A one-model approach based on relaxed combinations of inputs for evaluating input congestion in DEA

    NASA Astrophysics Data System (ADS)

    Khodabakhshi, Mohammad

    2009-08-01

    This paper provides a one-model approach to input congestion based on the input relaxation model developed in data envelopment analysis (e.g. [G.R. Jahanshahloo, M. Khodabakhshi, Suitable combination of inputs for improving outputs in DEA with determining input congestion -- Considering textile industry of China, Applied Mathematics and Computation (1) (2004) 263-273; G.R. Jahanshahloo, M. Khodabakhshi, Determining assurance interval for non-Archimedean element in improving outputs model in DEA, Applied Mathematics and Computation 151 (2) (2004) 501-506; M. Khodabakhshi, A super-efficiency model based on improved outputs in data envelopment analysis, Applied Mathematics and Computation 184 (2) (2007) 695-703; M. Khodabakhshi, M. Asgharian, An input relaxation measure of efficiency in stochastic data analysis, Applied Mathematical Modelling 33 (2009) 2010-2023]). This approach reduces the three problems that must be solved under the two-model approach introduced in the first of the above references to two problems, which is clearly important from a computational point of view. The model is applied to a set of data extracted from the ISI database to estimate the input congestion of 12 Canadian business schools.
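
    As background for readers unfamiliar with DEA, the sketch below solves the basic input-oriented CCR efficiency LP with scipy — not the paper's congestion model — and the three-DMU data set is invented for illustration.

    ```python
    # Hedged sketch: the standard input-oriented CCR DEA envelopment LP,
    # min theta s.t. X'lam <= theta*x_k, Y'lam >= y_k, lam >= 0.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 4.0], [3.0, 2.0], [4.0, 5.0]])   # inputs, one row per DMU
    Y = np.array([[1.0], [1.5], [2.0]])                  # outputs, one row per DMU
    n, m = X.shape                                       # DMUs, inputs
    s = Y.shape[1]                                       # outputs

    def ccr_efficiency(k):
        """Efficiency score (theta) of DMU k; decision vector is [theta, lam_1..lam_n]."""
        c = np.zeros(1 + n); c[0] = 1.0                  # minimize theta
        # input rows:  sum_j lam_j x_ij - theta x_ik <= 0
        A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
        b_in = np.zeros(m)
        # output rows: -sum_j lam_j y_rj <= -y_rk
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[k]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(0, None)] * (1 + n))
        return res.fun

    for k in range(n):
        print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
    ```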

  2. Modeling of Complex Mixtures: JP-8 Toxicokinetics

    DTIC Science & Technology

    2008-10-01

    generic tissue compartments in which we have combined diffusion limitation and deep tissue (global tissue model). We also applied a QSAR approach for... necessary, to apply to the interaction of specific compounds with specific tissues. We have also applied a QSAR approach for estimating blood and tissue... Subject terms: jet fuel, JP-8, PBPK modeling, complex mixtures, nonane, decane, naphthalene, QSAR, alternative fuels.

  3. A simplified approach to quasi-linear viscoelastic modeling

    PubMed Central

    Nekouzadeh, Ali; Pryse, Kenneth M.; Elson, Elliot L.; Genin, Guy M.

    2007-01-01

    The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in one dimension is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of “ramp-and-hold” stretching tests were applied to rectangular collagen specimens. The relaxation force data from the “hold” were used to calibrate a new “adaptive QLV model” and several models from the literature, and the force data from the “ramp” were used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The “adaptive QLV model” based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation. PMID:17499254
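
    For context, here is a minimal sketch of the classical QLV hereditary integral whose numerical convolution the abstract calls cumbersome. The Prony-series relaxation function, Fung-type elastic response, and all parameter values are illustrative assumptions, not the paper's fitted model.

    ```python
    # Hedged sketch: classical QLV stress via direct numerical convolution,
    # sigma(t_i) = sum_k G(t_i - t_k) * [sigma_e(eps_k) - sigma_e(eps_{k-1})].
    import numpy as np

    def reduced_relaxation(t, g_inf=0.4, g1=0.6, tau1=2.0):
        """Prony-series reduced relaxation function G(t) (illustrative values)."""
        return g_inf + g1 * np.exp(-t / tau1)

    def elastic_stress(eps, A=1.0, B=5.0):
        """Fung-type instantaneous elastic response sigma_e(eps)."""
        return A * (np.exp(B * eps) - 1.0)

    def qlv_stress(t, eps):
        se = elastic_stress(eps)
        dse = np.diff(se, prepend=se[0])
        sigma = np.zeros_like(t)
        for i in range(len(t)):                  # O(N^2) convolution: the cost
            sigma[i] = np.sum(reduced_relaxation(t[i] - t[:i + 1]) * dse[:i + 1])
        return sigma

    # ramp-and-hold strain history: ramp to 10% strain over 1 s, hold for 9 s
    t = np.linspace(0.0, 10.0, 1000)
    eps = np.clip(t, 0.0, 1.0) * 0.10
    print(qlv_stress(t, eps)[-1])                # relaxed stress at end of hold
    ```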

  4. A partial Hamiltonian approach for current value Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Naz, R.; Mahomed, F. M.; Chaudhry, Azam

    2014-10-01

    We develop a partial Hamiltonian framework to obtain reductions and closed-form solutions via first integrals of current value Hamiltonian systems of ordinary differential equations (ODEs). The approach is algorithmic and applies to current value Hamiltonians with many state and costate variables; however, we apply the method to models with one control, one state and one costate variable to illustrate its effectiveness. Current value Hamiltonian systems arise in economic growth theory and other economic models. We explain our approach with the help of a simple illustrative example and then apply it to two widely used economic growth models: the Ramsey model with a constant relative risk aversion (CRRA) utility function and Cobb-Douglas technology, and a one-sector AK model of endogenous growth. We show that our newly developed systematic approach can be used to deduce results given in the literature and also to find new solutions.
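
    For concreteness, the following is a standard textbook Ramsey setup of the kind treated here, assumed for illustration rather than reproduced from the paper: the current value Hamiltonian and the canonical system whose first integrals the method targets.

    ```latex
    % Ramsey problem (illustrative): maximize \int_0^\infty e^{-\rho t}\,\frac{c^{1-\sigma}}{1-\sigma}\,dt
    % subject to the capital accumulation equation \dot{k} = A k^{\alpha} - c - \delta k.
    H = \frac{c^{1-\sigma}}{1-\sigma} + \lambda \left( A k^{\alpha} - c - \delta k \right)
    % Optimality condition and the current value canonical system:
    \frac{\partial H}{\partial c} = c^{-\sigma} - \lambda = 0, \qquad
    \dot{k} = \frac{\partial H}{\partial \lambda}, \qquad
    \dot{\lambda} = \rho \lambda - \frac{\partial H}{\partial k}
                  = \lambda \left( \rho + \delta - \alpha A k^{\alpha-1} \right)
    % Eliminating \lambda yields the Euler equation \dot{c}/c = (\alpha A k^{\alpha-1} - \delta - \rho)/\sigma.
    ```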

  5. Embracing uncertainty in applied ecology.

    PubMed

    Milner-Gulland, E J; Shea, K

    2017-12-01

    Applied ecologists often face uncertainty that hinders effective decision-making. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists' ability to plan and execute research to support management. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

  6. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined in a finite interval, and survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.

  7. A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting

    NASA Astrophysics Data System (ADS)

    Kim, T.; Joo, K.; Seo, J.; Heo, J. H.

    2016-12-01

    Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach in time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their ability to discover relevant features in nonlinear relations among variables. This study aims to compare the predictability of the traditional stochastic model and the machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, an ensemble of decision trees built over multiple predictors, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. In order to evaluate the performances of the models, both one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
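
    A minimal sketch of such a comparison, assuming synthetic monthly inflows in place of the Chungju dam record; SARIMAX and RandomForestRegressor stand in for the authors' exact configurations, and forecasts are scored by RMSE and MAE as in the abstract.

    ```python
    # Hedged sketch: seasonal ARIMA versus Random Forest with lagged inflows,
    # recursive multi-step forecasting on a 24-month holdout.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(0)
    months = 30 * 12
    y = 100 + 50 * np.sin(2 * np.pi * np.arange(months) / 12) + rng.normal(0, 10, months)
    train, test = y[:-24], y[-24:]

    # stochastic approach: SARIMA(1,0,1)(1,0,1,12)
    sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
    pred_sarima = sarima.forecast(steps=24)

    # machine-learning approach: Random Forest on the last 12 lags
    def make_lags(series, n_lags=12):
        Xl = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
        return Xl, series[n_lags:]

    X_tr, y_tr = make_lags(train)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    history = list(train[-12:])              # recursive multi-step forecast
    pred_rf = []
    for _ in range(24):
        yhat = rf.predict(np.array(history[-12:]).reshape(1, -1))[0]
        pred_rf.append(yhat)
        history.append(yhat)

    for name, p in [("SARIMA", pred_sarima), ("RF", np.array(pred_rf))]:
        print(name, "RMSE", np.sqrt(mean_squared_error(test, p)),
              "MAE", mean_absolute_error(test, p))
    ```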

  8. Scenario-based modeling for multiple allocation hub location problem under disruption risk: multiple cuts Benders decomposition approach

    NASA Astrophysics Data System (ADS)

    Yahyaei, Mohsen; Bashiri, Mahdi

    2017-12-01

    The hub location problem arises in a variety of domains such as transportation and telecommunication systems. In many real-world situations, hub facilities are subject to disruption. This paper deals with the multiple allocation hub location problem in the presence of facility failures. To model the problem, a two-stage stochastic formulation is developed. In the proposed model, the number of scenarios grows exponentially with the number of facilities. To alleviate this issue, two approaches are applied simultaneously. The first is sample average approximation (SAA), which approximates the two-stage stochastic problem via sampling. Then, by applying the multiple cuts Benders decomposition approach, computational performance is enhanced. Numerical studies show the effective performance of the SAA in terms of optimality gap for small problem instances with numerous scenarios. Moreover, the performance of multi-cut Benders decomposition is assessed through comparison with the classic version, and the computational results reveal the superiority of the multi-cut approach regarding computational time and number of iterations.
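
    To illustrate the SAA ingredient alone, the sketch below applies sample average approximation to a toy newsvendor problem rather than the hub-location model: the expected profit is replaced by an average over sampled demand scenarios, and the SAA optimum approaches the true quantile solution as the sample grows.

    ```python
    # Hedged sketch: sample average approximation (SAA) on a newsvendor toy.
    import numpy as np
    from scipy.stats import lognorm

    c, p = 1.0, 4.0                          # unit cost and unit selling price
    rng = np.random.default_rng(1)

    def saa_order_quantity(n_scenarios):
        """Maximize the scenario-averaged profit over a grid of order quantities."""
        demand = rng.lognormal(mean=3.0, sigma=0.5, size=n_scenarios)
        qs = np.linspace(0.0, demand.max(), 2000)
        avg_profit = [np.mean(p * np.minimum(q, demand) - c * q) for q in qs]
        return qs[int(np.argmax(avg_profit))]

    # true optimum is the critical-ratio quantile of demand: F^{-1}((p-c)/p)
    true_q = lognorm.ppf((p - c) / p, s=0.5, scale=np.exp(3.0))
    for n in (10, 100, 10000):
        print(f"n={n:>6}: SAA order {saa_order_quantity(n):6.1f}  (true {true_q:.1f})")
    ```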

  9. Identification Approach to Alleviate Effects of Unmeasured Heat Gains for MIMO Building Thermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Jie; Kim, Donghun; Braun, James E.

    It is important to have practical methods for constructing a good mathematical model of a building's thermal system for energy audits, retrofit analysis and advanced building controls, e.g. model predictive control. Identification approaches based on semi-physical model structures are popular in building science for those purposes. However, conventional gray-box identification approaches applied to thermal networks fail when significant unmeasured heat gains are present in the estimation data. Although this situation is common in practice, there has been little research tackling this issue in building science. This paper presents an overall identification approach to alleviate the influence of unmeasured disturbances, and hence to obtain improved gray-box building models. The approach was applied to an existing open-space building and its performance is demonstrated.

  10. Applying the Sport Education Model to Tennis

    ERIC Educational Resources Information Center

    Ayvazo, Shiri

    2009-01-01

    The physical education field abounds with theoretically sound curricular approaches such as fitness education, the skill theme approach, the tactical approach, and sport education. In an era that emphasizes authentic sport experiences, the Sport Education Model includes unique features that set it apart from other curricular models and can be a valuable…

  11. A nonlinear interface model applied to masonry structures

    NASA Astrophysics Data System (ADS)

    Lebon, Frédéric; Raffa, Maria Letizia; Rizzoni, Raffaella

    2015-12-01

    In this paper, a new imperfect interface model is presented. The model includes finite strains, micro-cracks and smooth roughness. The model is consistently derived by coupling a homogenization approach for micro-cracked media and arguments of asymptotic analysis. The model is applied to brick/mortar interfaces. Numerical results are presented.

  12. A comparison of two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund

    NASA Astrophysics Data System (ADS)

    Luks, B.; Osuch, M.; Romanowicz, R. J.

    2012-04-01

    We compare two approaches to modelling snow cover dynamics at the Polish Polar Station at Hornsund. In the first approach we apply the physically-based Utah Energy Balance Snow Accumulation and Melt Model (UEB) (Tarboton et al., 1995; Tarboton and Luce, 1996). The model uses a lumped representation of the snowpack with two primary state variables: snow water equivalence and energy. Its main driving inputs are air temperature, precipitation, wind speed, humidity and radiation (estimated from the diurnal temperature range). These variables are used for physically-based calculations of radiative, sensible, latent and advective heat exchanges with a 3-hour time step. The second method is an application of a statistically efficient lumped-parameter time series approach to modelling the dynamics of snow cover, based on daily meteorological measurements from the same area. A dynamic Stochastic Transfer Function model is developed following the Data Based Mechanistic approach, where a stochastic data-based identification of model structure and an estimation of its parameters are followed by a physical interpretation. We focus on the analysis of uncertainty of both model outputs. In the time series approach, the applied techniques provide estimates of the modelling errors and the uncertainty of the model parameters. In the first, physically-based approach, the applied UEB model is deterministic: it assumes that the observations are without errors and that the model structure perfectly describes the processes within the snowpack. To take the model and observation errors into account, we applied a version of the Generalized Likelihood Uncertainty Estimation (GLUE) technique, which likewise provides estimates of the modelling errors and the uncertainty of the model parameters. The observed snowpack water equivalent values are compared with those simulated with 95% confidence bounds. This work was supported by the National Science Centre of Poland (grant no. 7879/B/P01/2011/40). Tarboton, D. G., T. G. Chowdhury and T. H. Jackson, 1995. A Spatially Distributed Energy Balance Snowmelt Model. In K. A. Tonnessen, M. W. Williams and M. Tranter (Ed.), Proceedings of a Boulder Symposium, July 3-14, IAHS Publ. no. 228, pp. 141-155. Tarboton, D. G. and C. H. Luce, 1996. Utah Energy Balance Snow Accumulation and Melt Model (UEB). Computer model technical description and users guide, Utah Water Research Laboratory and USDA Forest Service Intermountain Research Station (http://www.engineering.usu.edu/dtarb/). 64 pp.
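
    A compact sketch of the GLUE step, using a simple degree-day snow model as a stand-in for UEB and synthetic forcing data; the behavioural threshold and the unweighted 95% quantile bounds are simplifying assumptions.

    ```python
    # Hedged sketch: GLUE-style uncertainty bounds around a toy snow model.
    import numpy as np

    rng = np.random.default_rng(42)
    days = 120
    temp = 5 * np.sin(2 * np.pi * np.arange(days) / 365 - 1.5) + rng.normal(0, 2, days)
    precip = rng.exponential(2.0, days)

    def degree_day_swe(ddf, t_thresh):
        """SWE from a degree-day model: accumulate snow below t_thresh,
        melt at ddf * (T - t_thresh) above it."""
        swe = np.zeros(days)
        for d in range(1, days):
            snowfall = precip[d] if temp[d] < t_thresh else 0.0
            melt = max(0.0, ddf * (temp[d] - t_thresh))
            swe[d] = max(0.0, swe[d - 1] + snowfall - melt)
        return swe

    obs = degree_day_swe(3.0, 0.5) + rng.normal(0, 1.0, days)   # synthetic "observations"

    # GLUE: Monte Carlo sampling of parameters, behavioural runs kept
    behavioural = []
    for _ in range(2000):
        ddf, t_thresh = rng.uniform(1, 6), rng.uniform(-1, 2)
        sim = degree_day_swe(ddf, t_thresh)
        nse = 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
        if nse > 0.5:                        # behavioural threshold
            behavioural.append(sim)

    ens = np.array(behavioural)
    lower, upper = np.quantile(ens, 0.025, axis=0), np.quantile(ens, 0.975, axis=0)
    # (GLUE proper weights the quantiles by likelihood; plain quantiles keep this short)
    print(len(ens), "behavioural runs; coverage:", np.mean((obs >= lower) & (obs <= upper)))
    ```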

  13. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  14. Lagrange multiplier for perishable inventory model considering warehouse capacity planning

    NASA Astrophysics Data System (ADS)

    Amran, Tiena Gustina; Fatima, Zenny

    2017-06-01

    This paper presents a Lagrange multiplier approach for perishable raw material inventory planning that considers warehouse capacity. A food company faced the issue of managing perishable raw materials and marinades with limited shelf life; another constraint to be considered was the capacity of the warehouse. Therefore, an inventory model considering shelf life and raw material warehouse capacity is needed in order to minimize the company's inventory cost. The inventory model implemented in this study was an adapted economic order quantity (EOQ) model, optimized using a Lagrange multiplier. The model and solution approach were applied to a case at a food manufacturer. The result showed that the total inventory cost decreased by 2.42% after applying the proposed approach.
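
    A worked sketch of the Lagrange multiplier mechanics for a multi-item EOQ under a shared warehouse-capacity constraint (shelf life omitted; all numbers invented): stationarity of the Lagrangian gives Q_i = sqrt(2 D_i S_i / (h_i + 2 lambda w_i)), and lambda is found by bisection so that the capacity constraint holds.

    ```python
    # Hedged sketch: capacity-constrained multi-item EOQ via a Lagrange multiplier.
    import numpy as np

    D = np.array([1200.0, 800.0, 400.0])   # annual demand per item
    S = np.array([50.0, 40.0, 60.0])       # ordering cost per order
    h = np.array([2.0, 3.0, 2.5])          # holding cost per unit per year
    w = np.array([1.0, 0.5, 2.0])          # warehouse space per unit
    W = 250.0                              # warehouse capacity

    def q_of_lambda(lam):
        """Stationarity gives Q_i = sqrt(2 D_i S_i / (h_i + 2 lam w_i))."""
        return np.sqrt(2 * D * S / (h + 2 * lam * w))

    Q = q_of_lambda(0.0)                   # unconstrained EOQ
    if np.dot(w, Q) > W:                   # capacity binds: bisect on lambda
        lo, hi = 0.0, 1e6
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if np.dot(w, q_of_lambda(mid)) > W else (lo, mid)
        Q = q_of_lambda(0.5 * (lo + hi))

    cost = np.sum(D * S / Q + h * Q / 2)
    print("order quantities:", np.round(Q, 1),
          "space used:", round(np.dot(w, Q), 1), "cost:", round(cost, 2))
    ```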

  15. Analyzing Dyadic Sequence Data—Research Questions and Implied Statistical Models

    PubMed Central

    Fuchs, Peter; Nussbeck, Fridtjof W.; Meuwly, Nathalie; Bodenmann, Guy

    2017-01-01

    The analysis of observational data is often seen as a key approach to understanding dynamics in romantic relationships, and in dyadic systems in general. Statistical models for the analysis of dyadic observational data are not commonly known or applied. In this contribution, selected approaches to dyadic sequence data are presented, with a focus on models that can be applied when sample sizes are of medium size (N = 100 couples or less). Each of the statistical models is motivated by an underlying potential research question, and the most important model results are presented and linked to the research question. The following research questions and models are compared with respect to their applicability, using a hands-on approach: (I) Is there an association between a particular behavior by one partner and the reaction by the other? (Pearson correlation); (II) Does the behavior of one member trigger an immediate reaction by the other? (aggregated logit models; multi-level approach; basic Markov model); (III) Is there an underlying dyadic process, which might account for the observed behavior? (hidden Markov model); and (IV) Are there latent groups of dyads, which might account for observing different reaction patterns? (mixture Markov; optimal matching). Finally, recommendations for choosing among the different models, notes on data handling, and advice on properly applying the statistical models in empirical research are given (e.g., in a new R package “DySeq”). PMID:28443037
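
    As a flavour of research question II, a minimal sketch estimating a basic Markov transition matrix over joint partner states from coded dyadic sequences; the coding scheme and the two toy sequences are invented, and the DySeq package provides the full-featured versions of these models.

    ```python
    # Hedged sketch: transition-matrix estimation for coded dyadic sequences.
    import numpy as np
    from itertools import product

    states = list(product(["pos", "neg"], repeat=2))        # (partner A, partner B)
    index = {s: i for i, s in enumerate(states)}

    # each sequence lists the joint state of one couple over time (invented codes)
    sequences = [
        [("pos", "pos"), ("pos", "neg"), ("neg", "neg"), ("pos", "pos")],
        [("neg", "neg"), ("neg", "pos"), ("pos", "pos"), ("pos", "pos")],
    ]

    counts = np.zeros((len(states), len(states)))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):                 # consecutive pairs
            counts[index[a], index[b]] += 1

    with np.errstate(invalid="ignore"):                     # unvisited rows -> NaN
        transition = counts / counts.sum(axis=1, keepdims=True)
    print(np.round(transition, 2))
    ```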

  16. Learning and Information Approaches for Inference in Dynamic Data-Driven Geophysical Applications

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2015-12-01

    Many geophysical inference problems are characterized by non-linear processes, high-dimensional models and complex uncertainties. A dynamic coupling between models, estimation, and sampling is typically sought to efficiently characterize and reduce uncertainty. This process is, however, fraught with several difficulties, chief among them the ability to deal with model errors and the efficacy of uncertainty quantification and data assimilation. In this presentation, we present three key ideas from learning and intelligent systems theory and apply them to two geophysical applications. The first idea is the use of Ensemble Learning to compensate for model error, the second is to develop tractable Information Theoretic Learning to deal with non-Gaussianity in inference, and the third is a Manifold Resampling technique for effective uncertainty quantification. We apply these methods first to the development of a cooperative autonomous observing system using sUAS for studying coherent structures, and second to the problem of quantifying risk from hurricanes and storm surges in a changing climate. Results indicate that learning approaches can enable new effectiveness in cases where standard approaches to model reduction, uncertainty quantification and data assimilation fail.

  17. Didactical suggestion for a Dynamic Hybrid Intelligent e-Learning Environment (DHILE) applying the PENTHA ID Model

    NASA Astrophysics Data System (ADS)

    dall'Acqua, Luisa

    2011-08-01

    The goal of our research is to respond to the call for "innovative, creative teaching" by proposing a methodology to educate creative students in a society characterized by multiple reference points and hyper-dynamic knowledge, continuously subject to review and discussion. We apply a multi-perspective Instructional Design model (the PENTHA ID Model), defined and developed by our research group, which adopts a hybrid pedagogical approach consisting of elements of didactical connectivism intertwined with aspects of social constructivism and enactivism. The contribution proposes an e-course structure and approach that applies the theoretical design principles of the above-mentioned ID Model, describing methods, techniques, technologies and assessment criteria for the definition of lesson modes in an e-course.

  18. Causal Models for Mediation Analysis: An Introduction to Structural Mean Models.

    PubMed

    Zheng, Cheng; Atkins, David C; Zhou, Xiao-Hua; Rhew, Isaac C

    2015-01-01

    Mediation analyses are critical to understanding why behavioral interventions work. To yield a causal interpretation, common mediation approaches must make an assumption of "sequential ignorability." The current article describes an alternative approach to causal mediation called structural mean models (SMMs). A specific SMM called a rank-preserving model (RPM) is introduced in the context of an applied example. Particular attention is given to the assumptions of both approaches to mediation. Applying both mediation approaches to the college student drinking data yields notable differences in the magnitude of effects. Simulated examples reveal instances in which the traditional approach can yield strongly biased results, whereas the RPM approach remains unbiased in these cases. At the same time, the RPM approach has its own assumptions that must be met for correct inference, such as the existence of a covariate that strongly moderates the effect of the intervention on the mediator, and no unmeasured confounders that also moderate the effect of the intervention or the mediator on the outcome. The RPM approach to mediation offers an alternative way to perform mediation analysis when there may be unmeasured confounders.

  19. Bayesian Exploratory Factor Analysis

    PubMed Central

    Conti, Gabriella; Frühwirth-Schnatter, Sylvia; Heckman, James J.; Piatek, Rémi

    2014-01-01

    This paper develops and applies a Bayesian approach to Exploratory Factor Analysis that improves on ad hoc classical approaches. Our framework relies on dedicated factor models and simultaneously determines the number of factors, the allocation of each measurement to a unique factor, and the corresponding factor loadings. Classical identification criteria are applied and integrated into our Bayesian procedure to generate models that are stable and clearly interpretable. A Monte Carlo study confirms the validity of the approach. The method is used to produce interpretable low dimensional aggregates from a high dimensional set of psychological measurements. PMID:25431517

  20. Accounting for the dissociating properties of organic chemicals in LCIA: an uncertainty analysis applied to micropollutants in the assessment of freshwater ecotoxicity.

    PubMed

    Morais, Sérgio Alberto; Delerue-Matos, Cristina; Gabarrell, Xavier

    2013-03-15

    In life cycle impact assessment (LCIA) models, the sorption of the ionic fraction of dissociating organic chemicals is not adequately modeled because conventional non-polar partitioning models are applied. Therefore, high uncertainties are expected when modeling the mobility, as well as the bioavailability for uptake by exposed biota and degradation, of dissociating organic chemicals. Alternative regressions that account for the ionized fraction of a molecule when estimating fate parameters were applied to the USEtox model. The most sensitive model parameters in the estimation of ecotoxicological characterization factors (CFs) of micropollutants were evaluated by Monte Carlo analysis in both the default USEtox model and the alternative approach. Negligible differences in CF values and 95% confidence limits between the two approaches were estimated for direct emissions to the freshwater compartment; however, for emissions to the agricultural soil compartment, the default USEtox model overestimates the CFs and the 95% confidence limits of basic compounds by up to three and four orders of magnitude, respectively, relative to the alternative approach. For three emission scenarios, LCIA results show that the default USEtox model overestimates freshwater ecotoxicity impacts for emissions to agricultural soil by one order of magnitude, with larger confidence limits, relative to the alternative approach. Copyright © 2013 Elsevier B.V. All rights reserved.
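
    The key quantity behind such alternative regressions is the neutral fraction of a dissociating chemical at ambient pH. A small sketch of the Henderson-Hasselbalch relation follows; the pKa values are approximate and for illustration only.

    ```python
    # Hedged sketch: neutral (non-ionized) fraction of an acid or base at a given pH.
    def neutral_fraction(pH, pKa, kind="acid"):
        """Henderson-Hasselbalch: f_neutral = 1 / (1 + 10^(pH - pKa)) for acids,
        and 1 / (1 + 10^(pKa - pH)) for bases."""
        exponent = (pH - pKa) if kind == "acid" else (pKa - pH)
        return 1.0 / (1.0 + 10.0 ** exponent)

    # an ibuprofen-like acid (pKa ~ 4.9) is mostly ionized at environmental pH 7:
    print(f"acid, pKa 4.9, pH 7: {neutral_fraction(7.0, 4.9):.4f} neutral")
    # a basic compound (pKa ~ 9.4) is also mostly ionized at pH 7:
    print(f"base, pKa 9.4, pH 7: {neutral_fraction(7.0, 9.4, kind='base'):.4f} neutral")
    ```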

  1. Modeling Latent Interactions at Level 2 in Multilevel Structural Equation Models: An Evaluation of Mean-Centered and Residual-Centered Unconstrained Approaches

    ERIC Educational Resources Information Center

    Leite, Walter L.; Zuo, Youzhen

    2011-01-01

    Among the many methods currently available for estimating latent variable interactions, the unconstrained approach is attractive to applied researchers because of its relatively easy implementation with any structural equation modeling (SEM) software. Using a Monte Carlo simulation study, we extended and evaluated the unconstrained approach to…

  2. Applying Meta-Analysis to Structural Equation Modeling

    ERIC Educational Resources Information Center

    Hedges, Larry V.

    2016-01-01

    Structural equation models play an important role in the social sciences. Consequently, there is an increasing use of meta-analytic methods to combine evidence from studies that estimate the parameters of structural equation models. Two approaches are used to combine evidence from structural equation models: A direct approach that combines…

  3. An effective model for ergonomic optimization applied to a new automotive assembly line

    NASA Astrophysics Data System (ADS)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-01

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic analysis method for the operations, and identifies all ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  4. A Constructive Neural-Network Approach to Modeling Psychological Development

    ERIC Educational Resources Information Center

    Shultz, Thomas R.

    2012-01-01

    This article reviews a particular computational modeling approach to the study of psychological development--that of constructive neural networks. This approach is applied to a variety of developmental domains and issues, including Piagetian tasks, shift learning, language acquisition, number comparison, habituation of visual attention, concept…

  5. Combustion system CFD modeling at GE Aircraft Engines

    NASA Technical Reports Server (NTRS)

    Burrus, D.; Mongia, H.; Tolpadi, Anil K.; Correa, S.; Braaten, M.

    1995-01-01

    This viewgraph presentation discusses key features of current combustion system CFD modeling capabilities at GE Aircraft Engines provided by the CONCERT code; CONCERT development history; modeling applied for designing engine combustion systems; modeling applied to improve fundamental understanding; CONCERT3D results for current production combustors; CONCERT3D model of NASA/GE E3 combustor; HYBRID CONCERT CFD/Monte-Carlo modeling approach; and future modeling directions.

  6. Combustion system CFD modeling at GE Aircraft Engines

    NASA Astrophysics Data System (ADS)

    Burrus, D.; Mongia, H.; Tolpadi, Anil K.; Correa, S.; Braaten, M.

    1995-03-01

    This viewgraph presentation discusses key features of current combustion system CFD modeling capabilities at GE Aircraft Engines provided by the CONCERT code; CONCERT development history; modeling applied for designing engine combustion systems; modeling applied to improve fundamental understanding; CONCERT3D results for current production combustors; CONCERT3D model of NASA/GE E3 combustor; HYBRID CONCERT CFD/Monte-Carlo modeling approach; and future modeling directions.

  7. MASS BALANCE MODELLING OF PCBS IN THE FOX RIVER/GREEN BAY COMPLEX

    EPA Science Inventory

    The USEPA Office of Research and Development developed and applied a multimedia, mass balance modeling approach to the Fox River/Green Bay complex to aid managers with remedial decision-making. The suite of models was applied to PCBs due to the long history of contamination and ...

  8. Trans-dimensional matched-field geoacoustic inversion with hierarchical error models and interacting Markov chains.

    PubMed

    Dettmer, Jan; Dosso, Stan E

    2012-10-01

    This paper develops a trans-dimensional approach to matched-field geoacoustic inversion, including interacting Markov chains to improve efficiency and an autoregressive model to account for correlated errors. The trans-dimensional approach and hierarchical seabed model allow inversion without assuming any particular parametrization, by relaxing the model specification to a range of plausible seabed models (e.g., in this case, the number of sediment layers is an unknown parameter). Data errors are addressed by sampling statistical error-distribution parameters, including correlated errors (covariance), by applying a hierarchical autoregressive error model. The well-known difficulty of low acceptance rates for trans-dimensional jumps is addressed with interacting Markov chains, resulting in a substantial increase in efficiency. The trans-dimensional seabed model and the hierarchical error model relax the degree of prior assumptions required in the inversion, resulting in substantially improved (more realistic) uncertainty estimates and a more automated algorithm. In particular, the approach gives seabed parameter uncertainty estimates that account for uncertainty due to prior model choice (layering and data error statistics). The approach is applied to data measured on a vertical array in the Mediterranean Sea.

  9. Development of a time-dependent hurricane evacuation model for the New Orleans area : research project capsule.

    DOT National Transportation Integrated Search

    2008-08-01

    Current hurricane evacuation transportation modeling uses an approach fashioned after the traditional four-step procedure applied in urban transportation planning. One of the limiting features of this approach is that it models traffic in a stati...

  10. A comparison of operational remote sensing-based models for estimating crop evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    The integration of remotely sensed data into models of actual evapotranspiration has allowed for the estimation of water consumption across agricultural regions. Two modeling approaches have been successfully applied. The first approach computes a surface energy balance using the radiometric surface...

  11. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transformation was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR model coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal output. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from different bacteria species. The approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
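
    A simplified sketch of the feature-extraction idea: fit an AR model to each signal and feed the coefficient vector to a classifier. Synthetic peak-shaped signals replace real chromatograms, the inverse-Fourier step is omitted, and a plain logistic regression stands in for the paper's neural networks.

    ```python
    # Hedged sketch: AR (Yule-Walker) coefficients as compressed signal features.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from statsmodels.regression.linear_model import yule_walker

    rng = np.random.default_rng(0)

    def synthetic_signal(cls, n=400):
        """Two classes whose peaks differ in width (autocorrelation structure)."""
        t = np.arange(n)
        width = 8.0 if cls == 0 else 25.0
        return np.exp(-((t - 200.0) / width) ** 2) + rng.normal(0, 0.05, n)

    X, y = [], []
    for cls in (0, 1):
        for _ in range(60):
            rho, _sigma = yule_walker(synthetic_signal(cls), order=10)
            X.append(rho); y.append(cls)

    X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("holdout accuracy:", clf.score(X_te, y_te))
    ```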

  12. Markowitz portfolio optimization model employing fuzzy measure

    NASA Astrophysics Data System (ADS)

    Ramli, Suhailywati; Jaaman, Saiful Hafizah

    2017-04-01

    Markowitz in 1952 introduced the mean-variance methodology for portfolio selection problems. His pioneering research has shaped the portfolio risk-return model and become one of the most important research fields in modern finance. This paper extends the classical Markowitz mean-variance portfolio selection model by applying fuzzy measures to determine risk and return. We use the original mean-variance model as a benchmark, and compare it with fuzzy mean-variance models in which the returns are modeled by specific types of fuzzy numbers. The models with the fuzzy approach give better performance than the classical mean-variance approach. Numerical examples employing Malaysian share market data are included to illustrate these models.
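
    For reference, the crisp benchmark model the paper extends: the minimum-variance portfolio for a target return, solved from the KKT system of the equality-constrained quadratic program. Returns and covariances are invented, and shorting is allowed for simplicity.

    ```python
    # Hedged sketch: classical Markowitz minimum-variance portfolio.
    import numpy as np

    mu = np.array([0.08, 0.12, 0.15])                  # expected returns
    Sigma = np.array([[0.10, 0.02, 0.04],
                      [0.02, 0.08, 0.06],
                      [0.04, 0.06, 0.20]])             # return covariance matrix

    def min_variance_weights(target):
        """Minimize w' Sigma w s.t. w'mu = target, w'1 = 1, via the KKT system
        [[2 Sigma, A'], [A, 0]] [w; nu] = [0; b]."""
        A = np.vstack([mu, np.ones_like(mu)])
        kkt = np.block([[2 * Sigma, A.T], [A, np.zeros((2, 2))]])
        rhs = np.concatenate([np.zeros(len(mu)), [target, 1.0]])
        return np.linalg.solve(kkt, rhs)[: len(mu)]

    for target in (0.09, 0.11, 0.13):
        w = min_variance_weights(target)
        print(target, np.round(w, 3), "risk:", round(float(np.sqrt(w @ Sigma @ w)), 4))
    ```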

  13. A Tropospheric Emission Spectrometer HDO/H2O Retrieval Simulator for Climate Models

    NASA Technical Reports Server (NTRS)

    Field, R. D.; Risi, C.; Schmidt, G. A.; Worden, J.; Voulgarakis, A.; LeGrande, A. N.; Sobel, A. H.; Healy, R. J.

    2012-01-01

    Retrievals of the isotopic composition of water vapor from the Aura Tropospheric Emission Spectrometer (TES) have unique value in constraining moist processes in climate models. Accurate comparison between simulated and retrieved values requires that model profiles that would be poorly retrieved are excluded, and that an instrument operator be applied to the remaining profiles. Typically, this is done by sampling model output at satellite measurement points and using the quality flags and averaging kernels from individual retrievals at specific places and times. This approach is not reliable when the model meteorological conditions influencing retrieval sensitivity are different from those observed by the instrument at short time scales, which will be the case for free-running climate simulations. In this study, we describe an alternative, categorical approach to applying the instrument operator, implemented within the NASA GISS ModelE general circulation model. Retrieval quality and averaging kernel structure are predicted empirically from model conditions, rather than obtained from collocated satellite observations. This approach can be used for arbitrary model configurations, and requires no agreement between satellite-retrieved and model meteorology at short time scales. To test this approach, nudged simulations were conducted using both the retrieval-based and categorical operators. Cloud cover, surface temperature and free-tropospheric moisture content were the most important predictors of retrieval quality and averaging kernel structure. There was good agreement between the δD fields after applying the retrieval-based and the more detailed categorical operators, with increases of up to 30‰ over the ocean and decreases of up to 40‰ over land relative to the raw model fields. The categorical operator performed better over the ocean than over land, and requires further refinement for use outside of the tropics. After applying the TES operator, ModelE had δD biases of 8‰ over ocean and 34‰ over land compared to TES δD, which were less than the biases using the raw model δD fields.

  14. Applying the Network Simulation Method for testing chaos in a resistively and capacitively shunted Josephson junction model

    NASA Astrophysics Data System (ADS)

    Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel

    In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows the circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
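
    An alternative minimal route to the same dynamics, assuming the standard dimensionless RCSJ equation and scipy's ODE solver instead of the Network Simulation Method and PSpice; the FFT of the voltage is inspected for the broadband spectrum that signals chaos, and the parameter values are illustrative only.

    ```python
    # Hedged sketch: RCSJ junction, beta_c*phi'' + phi' + sin(phi) = i_dc + i_ac*sin(Omega*tau),
    # integrated with scipy, with an FFT-based look at the voltage spectrum.
    import numpy as np
    from scipy.integrate import solve_ivp

    beta_c, i_dc, i_ac, Omega = 25.0, 0.3, 0.7, 0.6   # illustrative parameters

    def rcsj(tau, yv):
        phi, v = yv
        return [v, (i_dc + i_ac * np.sin(Omega * tau) - v - np.sin(phi)) / beta_c]

    tau = np.linspace(0, 2000, 2 ** 15)
    sol = solve_ivp(rcsj, (tau[0], tau[-1]), [0.0, 0.0], t_eval=tau, rtol=1e-8)

    v = sol.y[1][len(tau) // 2:]                      # discard the transient
    spectrum = np.abs(np.fft.rfft(v - v.mean()))
    # a few sharp lines -> periodic response; a broadband floor -> chaos
    flatness = np.exp(np.mean(np.log(spectrum + 1e-12))) / np.mean(spectrum)
    print("spectral flatness:", flatness)
    ```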

  15. Linear regression crash prediction models : issues and proposed solutions.

    DOT National Transportation Integrated Search

    2010-05-01

    The paper develops a linear regression model approach that can be applied to crash data to predict vehicle crashes. The proposed approach involves novel data aggregation to satisfy linear regression assumptions; namely error structure normality ...

  16. An Analytical Approach to Salary Evaluation for Educational Personnel

    ERIC Educational Resources Information Center

    Bruno, James Edward

    1969-01-01

    "In this study a linear programming model for determining an 'optimal' salary schedule was derived then applied to an educational salary structure. The validity of the model and the effectiveness of the approach were established. (Author)

  17. Spatial modeling in ecology: the flexibility of eigenfunction spatial analyses.

    PubMed

    Griffith, Daniel A; Peres-Neto, Pedro R

    2006-10-01

    Recently, analytical approaches based on the eigenfunctions of spatial configuration matrices have been proposed as a way to explicitly incorporate spatial predictors. The present study demonstrates the usefulness of eigenfunctions in spatial modeling applied to ecological problems and shows the equivalencies of and differences between the two current implementations of this methodology. The two approaches in this category are the distance-based (DB) eigenvector maps proposed by P. Legendre and his colleagues, and spatial filtering based upon geographic connectivity matrices (i.e., topology-based; CB) developed by D. A. Griffith and his colleagues. In both cases, the goal is to create spatial predictors that can be easily incorporated into conventional regression models. One important advantage of these two approaches over any other spatial approach is that they provide a flexible tool that allows the full range of general and generalized linear modeling theory to be applied to ecological and geographical problems in the presence of nonzero spatial autocorrelation.
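
    A minimal sketch of the topology-based (CB) variant: eigenvectors of the doubly-centred spatial connectivity matrix serve as synthetic spatial predictors in an ordinary regression. The coordinates, neighbour rule, and response are invented for illustration.

    ```python
    # Hedged sketch: Moran eigenvector spatial filtering as regression predictors.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100
    xy = rng.uniform(0, 10, (n, 2))

    # binary connectivity: sites within distance 2 are neighbours (self excluded)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    C = ((d > 0) & (d < 2.0)).astype(float)

    # Moran eigenvector maps: eigenvectors of (I - 11'/n) C (I - 11'/n)
    M = np.eye(n) - np.ones((n, n)) / n
    vals, vecs = np.linalg.eigh(M @ C @ M)
    E = vecs[:, np.argsort(vals)[::-1][:5]]           # 5 most positively autocorrelated

    y = 2.0 + 3.0 * E[:, 0] + rng.normal(0, 0.5, n)   # spatially structured response
    X = np.column_stack([np.ones(n), E])              # intercept + spatial filters
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(np.round(beta, 2))
    ```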

  18. Development of uncertainty-based work injury model using Bayesian structural equation modelling.

    PubMed

    Chatterjee, Snehamoy

    2014-01-01

    This paper proposes a Bayesian structural equation model (SEM) of miners' work injury for an underground coal mine in India. The environmental and behavioural variables for work injury were identified and causal relationships were developed. For Bayesian modelling, prior distributions of the SEM parameters are necessary. In this paper, two approaches were adopted to obtain prior distributions for the factor loading and structural parameters of the SEM. In the first approach, the prior distributions were taken as fixed distribution functions with specific parameter values, whereas in the second approach, prior distributions of the parameters were generated from experts' opinions. The posterior distributions of these parameters were obtained by applying Bayes' rule. Markov chain Monte Carlo sampling, in the form of Gibbs sampling, was applied to sample from the posterior distribution. The results revealed that all coefficients of the structural and measurement model parameters are statistically significant with the experts' opinion-based priors, whereas two coefficients are not statistically significant when the fixed prior-based distributions are applied. The error statistics reveal that the Bayesian structural model provides a reasonably good fit for work injury, with a high coefficient of determination (0.91) and a lower mean squared error compared to traditional SEM.

  19. Model Uncertainty and Robustness: A Computational Framework for Multimodel Analysis

    ERIC Educational Resources Information Center

    Young, Cristobal; Holsteen, Katherine

    2017-01-01

    Model uncertainty is pervasive in social science. A key question is how robust empirical results are to sensible changes in model specification. We present a new approach and applied statistical software for computational multimodel analysis. Our approach proceeds in two steps: First, we estimate the modeling distribution of estimates across all…

  20. An integrated model of learning.

    PubMed

    Trigg, A M; Cordova, F D

    1987-01-01

    Worldwide, most educational systems are based on three levels of education that utilize the pedagogical approaches to learning. In the 1960s, scholars formulated another approach to education that has become known as andragogy and has been applied to adult education. Several innovative scholars have seen how andragogy can be applied to teaching children. As a result, both andragogy and pedagogy are viewed as the opposite ends of the educational spectrum. Both of these approaches have a place and function within the modern educational framework. If one assumes that the goal of education is for the acquisition and application of knowledge, then both of these approaches can be used effectively for the attainment of that goal. In order to utilize these approaches effectively, an integrated model of learning has been developed that consists of initial teaching and exploratory learning phases. This model has both the directive and flexible qualities found in the theories of pedagogy and andragogy. With careful consideration and analysis this educational model can be utilized effectively within most educational systems.

  1. An effective model for ergonomic optimization applied to a new automotive assembly line

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duraccio, Vincenzo; Elia, Valerio; Forcina, Antonio

    2016-06-08

    An efficient ergonomic optimization can lead to a significant improvement in production performance and a considerable reduction of costs. In the present paper a new model for ergonomic optimization is proposed. The new approach is based on the criteria defined by the National Institute for Occupational Safety and Health, adapted to Italian legislation. The proposed model provides an ergonomic optimization by analyzing the ergonomic relations of manual work under correct conditions. The model includes a schematic and systematic analysis method for the operations, and identifies all ergonomic aspects to be evaluated. The proposed approach has been applied to an automotive assembly line, where the repeatability of operations makes optimization fundamental. The application clearly demonstrates the effectiveness of the new approach.

  2. Bayesian evidence computation for model selection in non-linear geoacoustic inference problems.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Osler, John C

    2010-12-01

    This paper applies a general Bayesian inference approach, based on Bayesian evidence computation, to geoacoustic inversion of interface-wave dispersion data. Quantitative model selection is carried out by computing the evidence (normalizing constants) for several model parameterizations using annealed importance sampling. The resulting posterior probability density estimate is compared to estimates obtained from Metropolis-Hastings sampling to ensure consistent results. The approach is applied to invert interface-wave dispersion data collected on the Scotian Shelf, off the east coast of Canada for the sediment shear-wave velocity profile. Results are consistent with previous work on these data but extend the analysis to a rigorous approach including model selection and uncertainty analysis. The results are also consistent with core samples and seismic reflection measurements carried out in the area.

  3. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    EPA Science Inventory

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predic...

  4. Think Pair Share Using Realistic Mathematics Education Approach in Geometry Learning

    NASA Astrophysics Data System (ADS)

    Afthina, H.; Mardiyana; Pramudya, I.

    2017-09-01

    This research aims to determine the impact of mathematics learning that applies Think Pair Share (TPS) with a Realistic Mathematics Education (RME) approach, viewed from mathematical-logical intelligence, in geometry learning. The method used in this research is quasi-experimental. The results show that (1) mathematics achievement when applying TPS with the RME approach is better than with a direct learning model; (2) students with high mathematical-logical intelligence reach better mathematics achievement than those with average or low intelligence, and students with average mathematical-logical intelligence reach better achievement than those with low intelligence; (3) there is no interaction between the learning model and the level of students' mathematical-logical intelligence in mathematics achievement. The implication is that the TPS model with the RME approach can be applied in mathematics learning so that students learn more actively and understand the material better, and mathematics learning becomes more meaningful. On the other hand, students' internal factors must be considered for the success of their mathematical achievement, particularly in geometry.

  5. A Dyadic Approach: Applying a Developmental-Conceptual Model to Couples Coping with Chronic Illness

    ERIC Educational Resources Information Center

    Checton, Maria G.; Magsamen-Conrad, Kate; Venetis, Maria K.; Greene, Kathryn

    2015-01-01

    The purpose of the present study was to apply Berg and Upchurch's developmental-conceptual model toward a better understanding of how couples cope with chronic illness. Specifically, a model was hypothesized in which proximal factors (relational quality), dyadic appraisal (illness interference), and dyadic coping (partner support) influence…

  6. An Alternative Approach to the Operation of Multinational Reservoir Systems: Application to the Amistad & Falcon System (Lower Rio Grande/Río Bravo)

    NASA Astrophysics Data System (ADS)

    Serrat-Capdevila, A.; Valdes, J. B.

    2005-12-01

    An optimization approach for the operation of international multi-reservoir systems is presented. The approach uses stochastic dynamic programming (SDP) algorithms, both steady-state and real-time, to develop two models. In the first model, the reservoirs and flows of the system are aggregated to yield an equivalent reservoir, and the operating policies obtained are disaggregated using a non-linear optimization procedure for each reservoir and for each nation's water balance. In the second model a multi-reservoir approach is applied, disaggregating the releases for each country's water share in each reservoir. The non-linear disaggregation algorithm uses SDP-derived operating policies as boundary conditions for a local time-step optimization. Finally, the performance of the different approaches and methods is compared. These models are applied to the Amistad-Falcon international reservoir system as part of a binational dynamic modeling effort to develop a decision support tool for better management of the water resources in the Lower Rio Grande Basin, currently enduring a severe drought.
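
    A stripped-down sketch of the steady-state SDP ingredient on a single reservoir (the binational aggregation/disaggregation is omitted): backward value iteration over discretized storage with a discrete inflow distribution. All quantities are illustrative.

    ```python
    # Hedged sketch: steady-state SDP release policy for one toy reservoir.
    import numpy as np

    storages = np.arange(0, 101, 5).astype(float)      # discretized storage states
    releases = np.arange(0, 51, 5).astype(float)       # candidate releases
    inflows, probs = np.array([10.0, 20.0, 40.0]), np.array([0.3, 0.5, 0.2])
    capacity, demand, discount = 100.0, 25.0, 0.95

    def benefit(r):
        return -abs(r - demand)                        # penalize deviation from target

    V = np.zeros(len(storages))
    for _ in range(300):                               # value iteration to steady state
        V_new = np.empty_like(V)
        policy = np.empty_like(V)
        for i, s in enumerate(storages):
            best, best_r = -np.inf, 0.0
            for r in releases[releases <= s + inflows.min()]:  # feasible under worst inflow
                val = benefit(r)
                for q, prob in zip(inflows, probs):
                    s_next = min(capacity, s + q - r)          # mass balance with spill
                    j = int(np.argmin(np.abs(storages - s_next)))
                    val += discount * prob * V[j]
                if val > best:
                    best, best_r = val, r
            V_new[i], policy[i] = best, best_r
        V = V_new

    print({int(s): r for s, r in zip(storages, policy)})
    ```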

  7. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.

    PubMed

    Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M

    2017-01-01

    In this study, the aim is to develop a population model based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. This approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of their physiological maturation. This model was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model based approach to the optimization of harvest date and harvest frequency with regard to economic value of the crop as such is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages meeting the stringent retail demands for homogeneous high quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model based harvest optimisation approach can be easily transferred to other fruit and vegetable crops improving homogeneity of the postharvest product streams.
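
    A minimal sketch of the population-model idea: a logistic maturation curve with fruit-to-fruit variation in time-to-half-maturity, propagated to the spread of maturity at candidate harvest dates. The model form, parameter values, and the "premium window" are assumptions for illustration, not the paper's calibrated kinetic model.

    ```python
    # Hedged sketch: propagating fruit-to-fruit variation to harvest-date choice.
    import numpy as np

    rng = np.random.default_rng(5)
    n_fruit = 1000
    t_half = rng.normal(40.0, 5.0, n_fruit)     # days to half-maturity, varies per fruit
    rate = 0.25                                 # common maturation rate (1/day)

    def maturity(t):
        """Logistic maturation index in [0, 1] for every fruit at day t."""
        return 1.0 / (1.0 + np.exp(-rate * (t - t_half)))

    for harvest_day in (35, 45, 55):
        m = maturity(harvest_day)
        print(harvest_day, "mean maturity", round(float(m.mean()), 2),
              "share in premium window 0.6-0.9:", round(float(np.mean((m > 0.6) & (m < 0.9))), 2))
    ```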

  8. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation.

    PubMed

    Telang, Pankaj R; Kalia, Anup K; Singh, Munindar P

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach on a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel.

  9. Modeling Healthcare Processes Using Commitments: An Empirical Evaluation

    PubMed Central

    Telang, Pankaj R; Kalia, Anup K; Singh, Munindar P

    2015-01-01

    The two primary objectives of this paper are: (a) to demonstrate how Comma, a business modeling methodology based on commitments, can be applied in healthcare process modeling, and (b) to evaluate the effectiveness of such an approach in producing healthcare process models. We apply the Comma approach on a breast cancer diagnosis process adapted from an HHS committee report, and present the results of an empirical study that compares Comma with a traditional approach based on the HL7 Messaging Standard (Traditional-HL7). Our empirical study involved 47 subjects and two phases. In the first phase, we partitioned the subjects into two approximately equal groups. We gave each group the same requirements based on a process scenario for breast cancer diagnosis. Members of one group first applied Traditional-HL7 and then Comma, whereas members of the second group first applied Comma and then Traditional-HL7, each on the above-mentioned requirements. Thus, each subject produced two models, each model being a set of UML Sequence Diagrams. In the second phase, we repartitioned the subjects into two groups with approximately equal distributions from both original groups. We developed exemplar Traditional-HL7 and Comma models; we gave one repartitioned group our Traditional-HL7 model and the other repartitioned group our Comma model. We provided the same changed set of requirements to all subjects and asked them to modify the provided exemplar model to satisfy the new requirements. We assessed solutions produced by subjects in both phases with respect to measures of flexibility, time, difficulty, objective quality, and subjective quality. Our study found that Comma is superior to Traditional-HL7 in flexibility and objective quality, as validated via Student's t-test at the 10% level of significance. Comma is a promising new approach for modeling healthcare processes. Further gains could be made through improved tooling and enhanced training of modeling personnel. PMID:26539985

  10. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    ERIC Educational Resources Information Center

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  11. A dynamic mechanical analysis technique for porous media

    PubMed Central

    Pattison, Adam J; McGarry, Matthew; Weaver, John B; Paulsen, Keith D

    2015-01-01

    Dynamic mechanical analysis (DMA) is a common way to measure the mechanical properties of materials as functions of frequency. Traditionally, a viscoelastic mechanical model is applied, and current DMA techniques fit an analytical approximation to measured dynamic motion data by neglecting inertial forces and adding empirical correction factors to account for transverse boundary displacements. Here, a finite element (FE) approach to processing DMA data was developed to estimate poroelastic material properties. Frequency-dependent inertial forces, which are significant in soft media and often neglected in DMA, were included in the FE model. The technique applies a constitutive relation to the DMA measurements and exploits a non-linear inversion to estimate the material properties in the model that best fit the model response to the DMA data. A viscoelastic version of this approach was developed to validate the method by comparing complex modulus estimates to the direct DMA results. Both analytical and FE poroelastic models were also developed to explore their behavior in the DMA testing environment. All of the models were applied to tofu as a representative soft poroelastic material that is a common phantom in elastography imaging studies. Five samples of three different stiffnesses were tested from 1 to 14 Hz, with rough platens placed on the top and bottom surfaces of the material specimen under test to restrict transverse displacements and promote fluid-solid interaction. The viscoelastic models were identical in the static case, and nearly the same at frequency, with inertial forces accounting for some of the discrepancy. The poroelastic analytical method was not sufficient when the relevant physical boundary constraints were applied, whereas the poroelastic FE approach produced high quality estimates of shear modulus and hydraulic conductivity. These results illustrated appropriate shear modulus contrast between tofu samples and yielded a consistent contrast in hydraulic conductivity as well. PMID:25248170
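
    For context, the quantity conventional DMA extracts: a sketch recovering storage and loss moduli from sinusoidal stress-strain records via the Fourier component at the drive frequency. The signals are synthetic, and the paper's FE inversion goes well beyond this.

    ```python
    # Hedged sketch: complex modulus E* = E' + iE'' from sinusoidal records.
    import numpy as np

    f, n, T = 5.0, 4096, 2.0                            # drive (Hz), samples, duration (s)
    t = np.linspace(0, T, n, endpoint=False)
    strain = 0.01 * np.sin(2 * np.pi * f * t)
    stress = 800.0 * np.sin(2 * np.pi * f * t + 0.15)   # 0.15 rad phase lag

    def fourier_component(x):
        """Complex amplitude of x at the drive frequency."""
        X = np.fft.rfft(x) * 2 / n
        k = int(round(f * T))                           # FFT bin of the drive frequency
        return X[k]

    E_star = fourier_component(stress) / fourier_component(strain)
    print(f"E' = {E_star.real:.0f}  E'' = {E_star.imag:.0f}  "
          f"tan(delta) = {E_star.imag / E_star.real:.3f}")
    ```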

  12. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  13. A Comprehensive Planning Model

    ERIC Educational Resources Information Center

    Temkin, Sanford

    1972-01-01

    Combines elements of the problem solving approach inherent in methods of applied economics and operations research and the structural-functional analysis common in social science modeling to develop an approach for economic planning and resource allocation for schools and other public sector organizations. (Author)

  14. A combined disease management and process modeling approach for assessing and improving care processes: a fall management case-study.

    PubMed

    Askari, Marjan; Westerhof, Richard; Eslami, Saied; Medlock, Stephanie; de Rooij, Sophia E; Abu-Hanna, Ameen

    2013-10-01

    To propose a combined disease management and process modeling approach for evaluating and improving care processes, and demonstrate its usability and usefulness in a real-world fall management case study. We identified essential disease management-related concepts and mapped them into explicit questions meant to expose areas for improvement in the respective care processes. We applied the disease management oriented questions to a process model of a comprehensive real-world fall prevention and treatment program covering primary and secondary care. We relied on interviews and observations to complete the process models, which were captured in UML activity diagrams. We conducted a preliminary evaluation of the usability of our approach by gauging the experience of the modeler and an external validator, and evaluated the usefulness of the method by gathering feedback from stakeholders at an invitational conference of 75 attendees. The process model of the fall management program was organized around the clinical tasks of case finding, risk profiling, decision making, coordination and interventions. Applying the disease management questions to the process models exposed weaknesses in the process, including absence of program ownership, under-detection of falls in primary care, and lack of efficient communication among stakeholders due to missing awareness about other stakeholders' workflow. The modelers experienced the approach as usable and the attendees of the invitational conference found the analysis results to be valid. The proposed disease management view of process modeling was usable and useful for systematically identifying areas of improvement in a fall management program. Although specifically applied to fall management, we believe our case study is characteristic of various disease management settings, suggesting the wider applicability of the approach. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  15. Sustainable intensification: a multifaceted, systemic approach to international development.

    PubMed

    Himmelstein, Jennifer; Ares, Adrian; van Houweling, Emily

    2016-12-01

    Sustainable intensification (SI) is a term increasingly used to describe a type of approach applied to international agricultural projects. Despite its widespread use, there is still little understanding or knowledge of the various facets of this composite paradigm. A review of the literature has led to the formalization of three principles that convey the current characterization of SI, comprising a whole system, participatory, agroecological approach. Specific examples of potential bottlenecks to the SI approach are cited, in addition to various technologies and techniques that can be applied to overcome these obstacles. Models of similar, successful approaches to agricultural development are examined, along with higher level processes. Additionally, this review explores the desired end points of SI and argues for the inclusion of gender and nutrition throughout the process. To properly apply the SI approach, its various aspects need to be understood and adapted to different cultural and geographic situations. New modeling systems and examples of the effective execution of SI strategies can assist with the successful application of the SI paradigm within complex developing communities. © 2016 Society of Chemical Industry.

  16. Comparative lifecycle assessment of alternatives for waste management in Rio de Janeiro - Investigating the influence of an attributional or consequential approach.

    PubMed

    Bernstad Saraiva, A; Souza, R G; Valle, R A B

    2017-10-01

    The environmental impacts from three management alternatives for the organic fraction of municipal solid waste were compared using lifecycle assessment methodology. The alternatives (sanitary landfill, selective collection of organic waste for anaerobic digestion, and anaerobic digestion after post-separation of organic waste) were modelled applying both an attributional and a consequential approach in parallel, with the aim of identifying if and how the choice of approach affects results and conclusions. The marginal processes identified in the consequential modelling were in general associated with higher environmental impacts than the average processes modelled with an attributional approach. As all investigated waste management alternatives result in net-substitution of energy and in some cases also materials, the consequential modelling resulted in lower absolute environmental impacts in five of the seven environmental impact categories assessed in the study. In three of these, the chosen modelling approach can alter the hierarchy between compared waste management alternatives. This indicates a risk of underestimating potential benefits from efficient energy recovery from waste when applying attributional modelling in contexts in which electricity provision historically has been dominated by technologies presenting rather low environmental impacts, but where projections point at increasing impacts from electricity provision in coming years. Thus, in the present case study, the chosen approach affects both absolute and relative results from the comparison. However, results were largely related to the processes identified as affected by the investigated changes, and not merely the chosen modelling approach. The processes actually affected by future choices between different waste management alternatives are intrinsically uncertain. The study demonstrates the benefits of applying different assumptions regarding the processes affected by the investigated choices, both for the provision of energy and for materials substituted by waste management processes, in consequential LCA modelling, in order to present outcomes that are relevant as decision support within the waste management sector. Copyright © 2017 Elsevier Ltd. All rights reserved.
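
    The sensitivity to the electricity model can be illustrated with back-of-the-envelope arithmetic: the same energy-recovery credit is valued against an average (attributional) or a marginal (consequential) emission factor. All numbers below are hypothetical placeholders, not values from the study.

        # Same waste treatment, two electricity models; numbers are illustrative only.
        direct_emissions = 120.0       # kg CO2-eq per tonne of waste treated
        electricity_recovered = 250.0  # kWh recovered per tonne of waste

        ef_average = 0.10   # kg CO2-eq/kWh, e.g. a hydro-dominated average mix
        ef_marginal = 0.60  # kg CO2-eq/kWh, e.g. a fossil marginal technology

        net_attributional = direct_emissions - electricity_recovered * ef_average
        net_consequential = direct_emissions - electricity_recovered * ef_marginal
        print(f"attributional net impact: {net_attributional:+.0f} kg CO2-eq/t")
        print(f"consequential net impact: {net_consequential:+.0f} kg CO2-eq/t")
        # A low average factor undervalues energy recovery; a high marginal factor
        # can flip the sign of the result and the ranking of the alternatives.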

  17. Segmentation of kidney using C-V model and anatomy priors

    NASA Astrophysics Data System (ADS)

    Lu, Jinghua; Chen, Jie; Zhang, Juan; Yang, Wenjia

    2007-12-01

    This paper presents an approach for kidney segmentation on abdominal CT images as the first step of a virtual reality surgery system. Segmentation of medical images is often challenging because of the objects' complicated anatomical structures, various gray levels, and unclear edges. A coarse-to-fine approach has been applied to kidney segmentation using the Chan-Vese model (C-V model) and anatomical prior knowledge. In the pre-processing stage, the candidate kidney regions are located. Then the C-V model, formulated by the level set method, is applied within these smaller ROIs, which reduces the computational complexity to a certain extent. Finally, after some mathematical morphology procedures, the specified kidney structures are extracted interactively with prior knowledge. The satisfying results on abdominal CT series show that the proposed approach keeps all the advantages of the C-V model and overcomes its disadvantages.
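
    A minimal sketch of region-based C-V segmentation is shown below using scikit-image's morphological Chan-Vese variant, a readily available relative of the level-set formulation; the paper's own pipeline additionally locates candidate ROIs and applies anatomical priors, which are not reproduced here. The sample image is a stand-in for an abdominal CT slice.

        # Morphological Chan-Vese on a cropped ROI of a sample grayscale image.
        from skimage import data, img_as_float
        from skimage.segmentation import morphological_chan_vese

        img = img_as_float(data.camera())     # stand-in for an abdominal CT slice
        roi = img[64:448, 64:448]             # restricting to a smaller ROI cuts cost,
                                              # as in the paper's coarse-to-fine scheme
        seg = morphological_chan_vese(roi, 100, init_level_set='checkerboard',
                                      smoothing=3)
        print(seg.shape, seg.dtype, int(seg.sum()))   # binary mask of the region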

  18. Constrained off-line synthesis approach of model predictive control for networked control systems with network-induced delays.

    PubMed

    Tang, Xiaoming; Qu, Hongchun; Wang, Ping; Zhao, Meng

    2015-03-01

    This paper investigates the off-line synthesis approach of model predictive control (MPC) for a class of networked control systems (NCSs) with network-induced delays. A new augmented model, which can readily accommodate a time-varying control law, is proposed to describe the NCS, where bounded deterministic network-induced delays may occur in both the sensor-to-controller (S-C) and controller-to-actuator (C-A) links. Based on this augmented model, a sufficient condition for closed-loop stability is derived by applying the Lyapunov method. The off-line synthesis approach of model predictive control is addressed using the stability results of the system, which explicitly considers the satisfaction of input and state constraints. A numerical example is given to illustrate the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
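
    To make the MPC setting concrete, here is a minimal sketch of one constrained finite-horizon solve for a toy discrete-time system x+ = Ax + Bu with input bounds; the paper's delay-augmented model and off-line synthesis are not reproduced, and all matrices are illustrative.

        # One receding-horizon MPC step for a toy double-integrator-like system.
        import numpy as np
        from scipy.optimize import minimize

        A = np.array([[1.0, 0.1], [0.0, 1.0]])
        B = np.array([[0.005], [0.1]])
        Q, R = np.eye(2), 0.1
        N = 10                                # prediction horizon
        x0 = np.array([1.0, 0.0])

        def cost(u_seq):
            x, J = x0.copy(), 0.0
            for u in u_seq:
                J += x @ Q @ x + R * u * u    # stage cost
                x = A @ x + B[:, 0] * u       # state update
            return J + x @ Q @ x              # terminal penalty

        res = minimize(cost, np.zeros(N), bounds=[(-0.5, 0.5)] * N)  # |u| <= 0.5
        print("first optimal input:", res.x[0])  # only this input is applied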

  19. Contribution to the modelling and analysis of logistics system performance by Petri nets and simulation models: Application in a supply chain

    NASA Astrophysics Data System (ADS)

    Azougagh, Yassine; Benhida, Khalid; Elfezazi, Said

    2016-02-01

    In this paper, the focus is on studying the performance of complex systems in a supply chain context by developing a structured modelling approach based on the ASDI methodology (Analysis, Specification, Design and Implementation), combining modelling by Petri nets and simulation using ARENA. The linear approach typically followed for this kind of problem runs into modelling difficulties due to the complexity and the number of parameters of concern. Therefore, the approach used in this work structures the modelling in a way that covers all aspects of the performance study. The structured modelling approach is first introduced before being applied to the case of an industrial system in the field of phosphate. Results of the performance indicators obtained from the developed models made it possible to test the behaviour and fluctuations of this system and to develop improved models of the current situation. In addition, this paper shows how the Arena software can be adopted to simulate complex systems effectively. The method in this research can be applied to investigate various improvement scenarios and their consequences before implementing them in reality.

  20. Bridging process-based and empirical approaches to modeling tree growth

    Treesearch

    Harry T. Valentine; Annikki Makela; Annikki Makela

    2005-01-01

    The gulf between process-based and empirical approaches to modeling tree growth may be bridged, in part, by the use of a common model. To this end, we have formulated a process-based model of tree growth that can be fitted and applied in an empirical mode. The growth model is grounded in pipe model theory and an optimal control model of crown development. Together, the...

  1. Applying a Conceptual Model in Sport Sector Work- Integrated Learning Contexts

    ERIC Educational Resources Information Center

    Agnew, Deborah; Pill, Shane; Orrell, Janice

    2017-01-01

    This paper applies a conceptual model for work-integrated learning (WIL) in a multidisciplinary sports degree program. Two examples of WIL in sport will be used to illustrate how the conceptual WIL model is being operationalized. The implications for practice are that curriculum design must recognize a highly flexible approach to the nature of…

  2. Exploring compositional variations on the surface of Mars applying mixing modeling to a telescopic spectral image

    NASA Technical Reports Server (NTRS)

    Merenyi, E.; Miller, J. S.; Singer, R. B.

    1992-01-01

    The linear mixing model approach was successfully applied to data sets of various natures. In these sets, the measured radiance could be assumed to be a linear combination of radiance contributions. The present work is an attempt to analyze a spectral image of Mars with linear mixing modeling.
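
    The linear mixing model lends itself to a compact sketch: each observed spectrum is a non-negative combination of endmember spectra, solved per pixel with non-negative least squares. The endmembers and pixel below are synthetic placeholders, not Mars data.

        # Per-pixel linear spectral unmixing via non-negative least squares.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        wl = np.linspace(0.4, 2.5, 50)                   # wavelengths, micrometres
        endmembers = np.column_stack([
            1.0 - 0.3 * wl / 2.5,                        # synthetic "dark basalt"
            0.2 + 0.2 * np.sin(3 * wl),                  # synthetic "bright dust"
            0.5 * np.ones_like(wl),                      # synthetic "soil"
        ])

        true_fracs = np.array([0.6, 0.3, 0.1])
        pixel = endmembers @ true_fracs + 0.005 * rng.standard_normal(wl.size)

        fracs, _ = nnls(endmembers, pixel)
        fracs /= fracs.sum()                             # sum-to-one normalization
        print("estimated abundances:", np.round(fracs, 2))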

  3. APPLICATION OF THE SURFACE COMPLEXATION CONCEPT TO COMPLEX MINERAL ASSEMBLAGES

    EPA Science Inventory

    Two types of modeling approaches are illustrated for describing inorganic contaminant adsorption in aqueous environments: (a) the component additivity approach and (b) the generalized composite approach. Each approach is applied to simulate Zn2+ adsorption by a well-characterize...

  4. Rule-based modeling and simulations of the inner kinetochore structure.

    PubMed

    Tschernyschkow, Sergej; Herda, Sabine; Gruenert, Gerd; Döring, Volker; Görlich, Dennis; Hofmeister, Antje; Hoischen, Christian; Dittrich, Peter; Diekmann, Stephan; Ibrahim, Bashar

    2013-09-01

    Combinatorial complexity is a central problem when modeling biochemical reaction networks, since the association of a few components can give rise to a large variation of protein complexes. Available classical modeling approaches are often insufficient for the analysis of very large and complex networks in detail. Recently, we developed a new rule-based modeling approach that facilitates the analysis of spatial and combinatorially complex problems. Here, we explore for the first time how this approach can be applied to a specific biological system, the human kinetochore, which is a multi-protein complex involving over 100 proteins. Applying our freely available SRSim software to a large data set on kinetochore proteins in human cells, we construct a spatial rule-based simulation model of the human inner kinetochore. The model generates an estimation of the probability distribution of the inner kinetochore 3D architecture and we show how to analyze this distribution using information theory. In our model, the formation of a bridge between CenpA and an H3-containing nucleosome occurs efficiently only at the higher protein concentrations realized during S-phase, but possibly not in G1. Above a certain nucleosome distance the protein bridge barely formed, pointing towards the importance of chromatin structure for kinetochore complex formation. We define a metric for the distance between structures that allows us to identify structural clusters. Using this modeling technique, we explore different hypothetical chromatin layouts. Applying a rule-based network analysis to the spatial kinetochore complex geometry allowed us to integrate experimental data on kinetochore proteins, suggesting a 3D model of the human inner kinetochore architecture that is governed by a combinatorial algebraic reaction network. This reaction network can serve as a bridge between multiple scales of modeling. Our approach can be applied to other systems beyond kinetochores. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. A new approach for developing adjoint models

    NASA Astrophysics Data System (ADS)

    Farrell, P. E.; Funke, S. W.

    2011-12-01

    Many data assimilation algorithms rely on the availability of gradients of misfit functionals, which can be efficiently computed with adjoint models. However, the development of an adjoint model for a complex geophysical code is generally very difficult. Algorithmic differentiation (AD, also called automatic differentiation) offers one strategy for simplifying this task: it takes the abstraction that a model is a sequence of primitive instructions, each of which may be differentiated in turn. While extremely successful, this low-level abstraction runs into time-consuming difficulties when applied to the whole codebase of a model, such as differentiating through linear solves, model I/O, calls to external libraries, language features that are unsupported by the AD tool, and the use of multiple programming languages. While these difficulties can be overcome, it requires a large amount of technical expertise and an intimate familiarity with both the AD tool and the model. An alternative to applying the AD tool to the whole codebase is to assemble the discrete adjoint equations and use these to compute the necessary gradients. With this approach, the AD tool must be applied to the nonlinear assembly operators, which are typically small, self-contained units of the codebase. The disadvantage of this approach is that the assembly of the discrete adjoint equations is still very difficult to perform correctly, especially for complex multiphysics models that perform temporal integration; as it stands, this approach is as difficult and time-consuming as applying AD to the whole model. In this work, we have developed a library which greatly simplifies and automates the alternate approach of assembling the discrete adjoint equations. We propose a complementary, higher-level abstraction to that of AD: that a model is a sequence of linear solves. The developer annotates model source code with library calls that build a 'tape' of the operators involved and their dependencies, and supplies callbacks to compute the action of these operators. The library, called libadjoint, is then capable of symbolically manipulating the forward annotation to automatically assemble the adjoint equations. Libadjoint is open source, and is explicitly designed to be bolted-on to an existing discrete model. It can be applied to any discretisation, steady or time-dependent problems, and both linear and nonlinear systems. Using libadjoint has several advantages. It requires the application of an AD tool only to small pieces of code, making the use of AD far more tractable. As libadjoint derives the adjoint equations, the expertise required to develop an adjoint model is greatly diminished. One major advantage of this approach is that the model developer is freed from implementing complex checkpointing strategies for the adjoint model: libadjoint has sufficient information about the forward model to re-play the entire forward solve when necessary, and thus the checkpointing algorithm can be implemented entirely within the library itself. Examples are shown using the Fluidity/ICOM framework, a complex ocean model under development at Imperial College London.
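
    The 'tape' idea at the heart of this approach can be conveyed with a toy example: record each forward operation together with a callback for its adjoint action, then replay the tape in reverse to accumulate the gradient of a scalar misfit. This is a pure-Python sketch of the concept, not libadjoint's API.

        # Toy reverse-mode tape: forward ops are recorded with vector-Jacobian
        # callbacks; the adjoint sweep replays them in reverse order.
        class Tape:
            def __init__(self):
                self.ops = []                 # (output, vjp_callback) records

            def record(self, y, vjp):
                self.ops.append((y, vjp))
                return y

            def adjoint(self, seed=1.0):
                bar = seed
                for _, vjp in reversed(self.ops):
                    bar = vjp(bar)            # propagate the adjoint backwards
                return bar

        tape = Tape()
        x = 3.0
        y = tape.record(x * x, lambda ybar: 2 * x * ybar)        # y = x^2
        z = tape.record(2 * y + 1, lambda zbar: 2 * zbar)        # z = 2y + 1
        J = tape.record(z ** 2, lambda Jbar: 2 * z * Jbar)       # misfit J = z^2

        # Analytically J = (2x^2 + 1)^2, so dJ/dx = 8x(2x^2 + 1) = 456 at x = 3.
        print("dJ/dx =", tape.adjoint())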

  6. Thin Interface Asymptotics for an Energy/Entropy Approach to Phase-Field Models with Unequal Conductivities

    NASA Technical Reports Server (NTRS)

    McFadden, G. B.; Wheeler, A. A.; Anderson, D. M.

    1999-01-01

    Karma and Rappel recently developed a new sharp interface asymptotic analysis of the phase-field equations that is especially appropriate for modeling dendritic growth at low undercoolings. Their approach relieves a stringent restriction on the interface thickness that applies in the conventional asymptotic analysis, and has the added advantage that interfacial kinetic effects can also be eliminated. However, their analysis focused on the case of equal thermal conductivities in the solid and liquid phases; when applied to a standard phase-field model with unequal conductivities, anomalous terms arise in the limiting forms of the boundary conditions for the interfacial temperature that are not present in conventional sharp-interface solidification models, as discussed further by Almgren. In this paper we apply their asymptotic methodology to a generalized phase-field model which is derived using a thermodynamically consistent approach that is based on independent entropy and internal energy gradient functionals that include double wells in both the entropy and internal energy densities. The additional degrees of freedom associated with the generalized phase-field equations can be chosen to eliminate the anomalous terms that arise for unequal conductivities.

  7. A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education.

    PubMed

    Hervatis, Vasilis; Loe, Alan; Barman, Linda; O'Donoghue, John; Zary, Nabil

    2015-10-06

    Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve quality of education. Analytics, the use of data to generate insights and support decisions, have been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research of Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. The aim of this study is to create a conceptual model for a deeper understanding of the synthesizing process, and transforming data into information to support educators' decision making. A deductive case study approach was applied to develop the conceptual model. The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach.

  8. A Conceptual Analytics Model for an Outcome-Driven Quality Management Framework as Part of Professional Healthcare Education

    PubMed Central

    Hervatis, Vasilis; Loe, Alan; Barman, Linda; O'Donoghue, John; Zary, Nabil

    2015-01-01

    Background Preparing the future health care professional workforce in a changing world is a significant undertaking. Educators and other decision makers look to evidence-based knowledge to improve quality of education. Analytics, the use of data to generate insights and support decisions, have been applied successfully across numerous application domains. Health care professional education is one area where great potential is yet to be realized. Previous research of Academic and Learning analytics has mainly focused on technical issues. The focus of this study relates to its practical implementation in the setting of health care education. Objective The aim of this study is to create a conceptual model for a deeper understanding of the synthesizing process, and transforming data into information to support educators’ decision making. Methods A deductive case study approach was applied to develop the conceptual model. Results The analytics loop works both in theory and in practice. The conceptual model encompasses the underlying data, the quality indicators, and decision support for educators. Conclusions The model illustrates how a theory can be applied to a traditional data-driven analytics approach, and alongside the context- or need-driven analytics approach. PMID:27731840

  9. Fault detection and diagnosis using neural network approaches

    NASA Technical Reports Server (NTRS)

    Kramer, Mark A.

    1992-01-01

    Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used, the first based on training networks using data representing both normal and abnormal modes of process behavior, and the second based on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, to produce real-time estimation of missing or failed sensors based on the correlations codified in the neural network.
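
    The second, normal-mode-only approach can be sketched with a simple statistical stand-in: characterize healthy sensor data by its mean and covariance, then flag observations whose Mahalanobis distance is implausibly large. The data below are synthetic placeholders rather than process measurements.

        # Normal-mode statistical characterization with a Mahalanobis-distance alarm.
        import numpy as np

        rng = np.random.default_rng(0)
        normal_data = rng.multivariate_normal([10.0, 50.0],
                                              [[1.0, 0.5], [0.5, 2.0]], size=500)

        mu = normal_data.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(normal_data, rowvar=False))

        def mahalanobis2(obs):
            d = obs - mu
            return d @ cov_inv @ d

        threshold = 9.21   # chi-squared 99th percentile, 2 degrees of freedom
        for obs in (np.array([10.3, 50.8]), np.array([14.0, 44.0])):
            flag = "FAULT" if mahalanobis2(obs) > threshold else "normal"
            print(obs, f"d2 = {mahalanobis2(obs):.1f} ->", flag)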

  10. Improving Psychological Measurement: Does It Make a Difference? A Comment on Nesselroade and Molenaar (2016).

    PubMed

    Maydeu-Olivares, Alberto

    2016-01-01

    Nesselroade and Molenaar advocate the use of an idiographic filter approach. This is a fixed-effects approach, which may limit the number of individuals that can be simultaneously modeled, and it is not clear how to model the presence of subpopulations. Most important, Nesselroade and Molenaar's proposal appears to be best suited for modeling long time series on a few variables for a few individuals. Long time series are not common in psychological applications. Can it be applied to the usual longitudinal data we face? These are characterized by short time series (four to five points in time), hundreds of individuals, and dozens of variables. If so, what do we gain? Applied settings most often involve between-individual decisions. I conjecture that their approach will not outperform common, simpler, methods. However, when intraindividual decisions are involved, their approach may have an edge.

  11. Complete Hamiltonian analysis of cosmological perturbations at all orders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2016-06-01

    In this work, we present a consistent Hamiltonian analysis of cosmological perturbations at all orders. To make the procedure transparent, we consider a simple model, resolve the 'gauge-fixing' issues, and extend the analysis to scalar field models, showing that our approach can be applied to any order of perturbation for any first order derivative fields. In the case of Galilean scalar fields, our procedure can extract constrained relations at all orders in perturbations, leading to the fact that there are no extra degrees of freedom due to the presence of higher time derivatives of the field in the Lagrangian. We compare and contrast our approach to the Lagrangian approach (Chen et al. [2006]) for extracting higher order correlations and show that our approach is efficient and robust and can be applied to any model of gravity and matter fields without invoking the slow-roll approximation.

  12. Computer simulation of morphological evolution and rafting of {gamma}{prime} particles in Ni-based superalloys under applied stresses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, D.Y.; Chen, L.Q.

    Mechanical properties of Ni-based superalloys are strongly affected by the morphology, distribution, and size of γ′ precipitates in the γ matrix. The main purpose of this paper is to propose a continuum field approach for modeling the morphology and rafting kinetics of coherent precipitates under applied stresses. This approach can be used to simulate the temporal evolution of arbitrary morphologies and microstructures without any a priori assumption. Recently, the authors applied this approach to the selected variant growth in Ni-Ti alloys under applied stresses using an inhomogeneous modulus approximation. For the γ′ precipitates in Ni-based superalloys, the eigenstrain is dilatational, and hence the γ′ morphological evolution can be affected by applied stresses only when the elastic modulus is inhomogeneous. In the present work, the elastic inhomogeneity was taken into account by reformulating a sharp-interface elasticity theory developed recently by Khachaturyan et al. in terms of diffuse interfaces. Although the present work is for a γ′ − γ system, this model is general in the sense that it can be applied to other alloy systems containing coherent ordered intermetallic precipitates with elastic inhomogeneity.

  13. Artificial Neural Networks: A New Approach to Predicting Application Behavior.

    ERIC Educational Resources Information Center

    Gonzalez, Julie M. Byers; DesJardins, Stephen L.

    2002-01-01

    Applied the technique of artificial neural networks to predict which students were likely to apply to one research university. Compared the results to the traditional analysis tool, logistic regression modeling. Found that the addition of artificial intelligence models was a useful new tool for predicting student application behavior. (EV)

  14. Estimation of Carcinogenicity using Hierarchical Clustering and Nearest Neighbor Methodologies

    EPA Science Inventory

    Previously a hierarchical clustering (HC) approach and a nearest neighbor (NN) approach were developed to model acute aquatic toxicity end points. These approaches were developed to correlate the toxicity for large, noncongeneric data sets. In this study these approaches applie...

  15. A Generic Modeling Approach to Biomass Dynamics of Sagittaria latifolia and Spartina alterniflora

    DTIC Science & Technology

    2011-01-01

    ammonium nitrate pulse of the growth and elemental composition of natural stands of Spartina alterniflora and Juncus roemerianus. American Journal of...calibration values become available. This modelling approach was applied to submersed aquatic vegetation (SAV) also (Best and Boyd 2008). The approach is... the models. The DVS is dimensionless and its value increases gradually within a growing season. The development rate (DVR) has the dimension d-1

  16. Modeling river total bed material load discharge using artificial intelligence approaches (based on conceptual inputs)

    NASA Astrophysics Data System (ADS)

    Roushangar, Kiyoumars; Mehrabani, Fatemeh Vojoudi; Shiri, Jalal

    2014-06-01

    This study presents Artificial Intelligence (AI)-based modeling of total bed material load that improves on the accuracy of the predictions of traditional models. Gene expression programming (GEP) and adaptive neuro-fuzzy inference system (ANFIS)-based models were developed and validated for the estimations. Sediment data from the Qotur River (Northwestern Iran) were used for developing and validating the applied techniques. In order to assess the applied techniques in relation to traditional models, stream power-based and shear stress-based physical models were also applied to the studied case. The obtained results reveal that the developed AI-based models, using a minimum number of dominant factors, give more accurate results than the other applied models. Nonetheless, it was revealed that the k-fold test is a practical but computationally costly technique for completely scanning the applied data and avoiding over-fitting.
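
    The k-fold scanning mentioned above can be sketched briefly: every observation is held out exactly once, giving a complete out-of-sample check against over-fitting. A plain linear fit stands in for the GEP/ANFIS models, and the predictors are hypothetical.

        # 5-fold cross-validation with a linear stand-in model; data are synthetic.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, size=(60, 3))              # hypothetical flow predictors
        y = X @ np.array([2.0, -1.0, 0.5]) + 0.05 * rng.standard_normal(60)

        rmse = []
        for train, test in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
            model = LinearRegression().fit(X[train], y[train])
            rmse.append(np.sqrt(np.mean((model.predict(X[test]) - y[test]) ** 2)))

        print("RMSE per fold:", np.round(rmse, 4))   # every sample tested once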

  17. Engine Data Interpretation System (EDIS)

    NASA Technical Reports Server (NTRS)

    Cost, Thomas L.; Hofmann, Martin O.

    1990-01-01

    A prototype of an expert system was developed which applies qualitative or model-based reasoning to the task of post-test analysis and diagnosis of data resulting from a rocket engine firing. A combined component-based and process theory approach is adopted as the basis for system modeling. Such an approach provides a framework for explaining both normal and deviant system behavior in terms of individual component functionality. The diagnosis function is applied to digitized sensor time-histories generated during engine firings. The generic system is applicable to any liquid rocket engine but was adapted specifically in this work to the Space Shuttle Main Engine (SSME). The system is applied to idealized data resulting from turbomachinery malfunction in the SSME.

  18. Multilevel joint competing risk models

    NASA Astrophysics Data System (ADS)

    Karunarathna, G. H. S.; Sooriyarachchi, M. R.

    2017-09-01

    Joint modeling approaches for different outcomes, such as competing-risk time-to-event and count outcomes, are often encountered in biomedical and epidemiological studies in the presence of cluster effects. Hospital length of stay (LOS) is a widely used outcome measure of hospital utilization, as it is a benchmark measurement spanning multiple terminations such as discharge, transfer, death, and patients who have not completed the event of interest by the end of the follow-up period (censored) during hospitalization. Competing risk models provide a method of addressing such multiple destinations, since classical time-to-event models yield biased results when there are multiple events. In this study, the concept of joint modeling has been applied to dengue epidemiology in Sri Lanka, 2006-2008, to assess the relationship between different outcomes of LOS and the platelet count of dengue patients, with a district-level cluster effect. Two key approaches have been applied to build up the joint scenario. In the first approach, each competing risk is modeled separately using a binary logistic model, treating all other events as censored, under a multilevel discrete time-to-event model, while the platelet counts are assumed to follow a lognormal regression model. The second approach is based on the endogeneity effect in the multilevel competing risks and count model. Model parameters were estimated using maximum likelihood based on the Laplace approximation. Moreover, the study reveals that the joint modeling approach yields more precise results than fitting two separate univariate models, in terms of AIC (Akaike Information Criterion).

  19. Learning Probabilistic Logic Models from Probabilistic Examples

    PubMed Central

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2009-01-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348

  20. Learning Probabilistic Logic Models from Probabilistic Examples.

    PubMed

    Chen, Jianzhong; Muggleton, Stephen; Santos, José

    2008-10-01

    We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples.

  1. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel Test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method on published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty leads to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
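
    The resampling bootstrap mentioned above reduces to a short loop: re-estimate the quantity of interest on data resampled with replacement, and read the uncertainty off the resulting distribution. The survival times below are hypothetical placeholders, not trial data.

        # Bootstrap confidence interval for mean life expectancy; data are synthetic.
        import numpy as np

        rng = np.random.default_rng(42)
        survival_years = rng.exponential(scale=2.5, size=200)   # stand-in data

        boot_means = []
        for _ in range(2000):
            sample = rng.choice(survival_years, size=survival_years.size, replace=True)
            boot_means.append(sample.mean())        # re-run the estimate per resample

        lo, hi = np.percentile(boot_means, [2.5, 97.5])
        print(f"mean life expectancy: {survival_years.mean():.2f} years")
        print(f"bootstrap 95% CI: ({lo:.2f}, {hi:.2f})")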

  2. A moni-modelling approach to manage groundwater risk to pesticide leaching at regional scale.

    PubMed

    Di Guardo, Andrea; Finizio, Antonio

    2016-03-01

    Historically, the approach used to manage risk of chemical contamination of water bodies is based on the use of monitoring programmes, which provide a snapshot of the presence/absence of chemicals in water bodies. Monitoring is required in the current EU regulations, such as the Water Framework Directive (WFD), as a tool to record temporal variation in the chemical status of water bodies. More recently, a number of models have been developed and used to forecast chemical contamination of water bodies. These models combine information on chemical properties, their use, and environmental scenarios. Both approaches are useful for risk assessors in decision processes. However, in our opinion, both show flaws and strengths when taken alone. This paper proposes an integrated approach (moni-modelling approach) where monitoring data and modelling simulations work together in order to provide a common decision framework for the risk assessor. This approach would be very useful, particularly for the risk management of pesticides at a territorial level. It fulfils the requirement of the recent Sustainable Use of Pesticides Directive. In fact, the moni-modelling approach could be used to identify sensitive areas in which to implement mitigation measures or limitations on pesticide use, or even to re-design future monitoring networks more effectively or to better calibrate the pedo-climatic input data for the environmental fate models. A case study is presented, where the moni-modelling approach is applied in the Lombardy region (North of Italy) to identify groundwater areas vulnerable to pesticides. The approach has been applied to six active substances with different leaching behaviour, in order to highlight the advantages of using the proposed methodology. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Incorporating time-delays in S-System model for reverse engineering genetic networks.

    PubMed

    Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan

    2013-06-18

    In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches cannot detect important interactions of the other type. The S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all existing S-System based modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. The experiments carried out on two well-known real-life networks, namely IRMA and the SOS DNA repair network in Escherichia coli, show a significant improvement compared with other state-of-the-art approaches for GRN modeling.
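
    To fix ideas, a two-gene S-System with one time-delayed regulation can be integrated by forward Euler with a history buffer, as sketched below. The rate constants, kinetic orders, and delay are illustrative values, not parameters inferred in the paper.

        # Two-gene time-delayed S-System with a delay tau on the x2 -> x1 regulation:
        # dx_i/dt = alpha_i * prod(x_j^g_ij) - beta_i * prod(x_j^h_ij). Illustrative only.
        import numpy as np

        alpha, beta = np.array([2.0, 1.5]), np.array([1.0, 1.0])
        g12, g21 = 0.8, -0.6          # kinetic orders of the production terms
        h1, h2 = 0.5, 0.5             # kinetic orders of the degradation terms
        tau, dt, T = 0.5, 0.01, 10.0  # delay, step size, horizon

        n, lag = int(T / dt), int(tau / dt)
        x = np.full((n + 1, 2), 0.5)                # constant history before t = 0
        for k in range(n):
            x2_delayed = x[max(k - lag, 0), 1]      # x2(t - tau)
            dx1 = alpha[0] * x2_delayed ** g12 - beta[0] * x[k, 0] ** h1
            dx2 = alpha[1] * x[k, 0] ** g21 - beta[1] * x[k, 1] ** h2
            x[k + 1] = x[k] + dt * np.array([dx1, dx2])

        print("state at t = T:", np.round(x[-1], 3))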

  4. A simple method for EEG guided transcranial electrical stimulation without models

    NASA Astrophysics Data System (ADS)

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q.; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    Objective. There is longstanding interest in using EEG measurements to inform transcranial Electrical Stimulation (tES) but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head-models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. Approach. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG with no need for head models or source localization. Cortical dipoles represent idealized brain targets. EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. Main results. Model-free approaches evaluated include (1) voltage-to-voltage, (2) voltage-to-current; (3) Laplacian; and two Ad-Hoc techniques (4) dipole sink-to-sink; and (5) sink to concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably, with only 2-8 electrodes and no need for a model of the head. Significance. Our approach is verified directly only for a theoretically localized source, but may be potentially applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
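
    The two-step recipe can be made concrete with a short sketch: pick the strongest electrodes from an EEG topography, set currents proportional to the measured voltages, remove the mean so the net injected current is zero, and rescale to a safe total. Channel names and voltages are hypothetical.

        # Model-free voltage-to-current tES montage from an EEG topography snapshot.
        import numpy as np

        channels = np.array(["Fz", "Cz", "Pz", "C3", "C4", "Oz"])
        eeg_uv = np.array([4.0, 9.0, -6.0, 2.5, -7.5, -2.0])  # hypothetical topography

        k = 4                                        # number of stimulation electrodes
        idx = np.argsort(-np.abs(eeg_uv))[:k]        # step 1: strongest |V| locations

        currents = eeg_uv[idx].astype(float)         # step 2: voltage-to-current
        currents -= currents.mean()                  # zero net injected current
        currents *= 2.0 / currents[currents > 0].sum()   # 2 mA total anodal current

        for ch, i in zip(channels[idx], currents):
            print(f"{ch}: {i:+.3f} mA")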

  5. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach is introduced to automated model derivation for knowledge-based systems. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. With an implemented example, we illustrate how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.

  6. Comprehensive Approach to Verification and Validation of CFD Simulations Applied to Backward Facing Step-Application of CFD Uncertainty Analysis

    NASA Technical Reports Server (NTRS)

    Groves, Curtis E.; LLie, Marcel; Shallhorn, Paul A.

    2012-01-01

    There are inherent uncertainties and errors associated with using Computational Fluid Dynamics (CFD) to predict the flow field, and there is no standard method for evaluating uncertainty in the CFD community. This paper describes an approach to validate the uncertainty in using CFD. The method will use state-of-the-art uncertainty analysis, applying different turbulence models, and draw conclusions on which models provide the least uncertainty and which models most accurately predict the flow over a backward facing step.

  7. Quantum description of light propagation in generalized media

    NASA Astrophysics Data System (ADS)

    Häyrynen, Teppo; Oksanen, Jani

    2016-02-01

    Linear quantum input-output relation based models are widely applied to describe the light propagation in a lossy medium. The details of the interaction and the associated added noise depend on whether the device is configured to operate as an amplifier or an attenuator. Using the traveling wave (TW) approach, we generalize the linear material model to simultaneously account for both the emission and absorption processes and to have point-wise defined noise field statistics and intensity dependent interaction strengths. Thus, our approach describes the quantum input-output relations of linear media with net attenuation, amplification or transparency without pre-selection of the operation point. The TW approach is then applied to investigate materials at thermal equilibrium, inverted materials, the transparency limit where losses are compensated, and the saturating amplifiers. We also apply the approach to investigate media in nonuniform states which can be e.g. consequences of a temperature gradient over the medium or a position dependent inversion of the amplifier. Furthermore, by using the generalized model we investigate devices with intensity dependent interactions and show how an initial thermal field transforms to a field having coherent statistics due to gain saturation.
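
    For reference, the textbook single-mode input-output relations that such linear models generalize are (standard quantum-optics forms, not the paper's generalized TW equations):

        \[
        \hat{a}_{\mathrm{out}} = \sqrt{\eta}\,\hat{a}_{\mathrm{in}} + \sqrt{1-\eta}\,\hat{b},
        \qquad 0 \le \eta \le 1 \quad \text{(attenuator)},
        \]
        \[
        \hat{a}_{\mathrm{out}} = \sqrt{G}\,\hat{a}_{\mathrm{in}} + \sqrt{G-1}\,\hat{b}^{\dagger},
        \qquad G \ge 1 \quad \text{(amplifier)},
        \]

    where \(\hat{b}\) is a noise mode whose state (thermal, vacuum, or otherwise) sets the statistics of the added noise; the TW generalization lets the gain or loss and the noise-field statistics vary point-wise along the medium.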

  8. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    PubMed

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  9. Comparative analysis of neural network and regression based condition monitoring approaches for wind turbine fault detection

    NASA Astrophysics Data System (ADS)

    Schlechtingen, Meik; Ferreira Santos, Ilmar

    2011-07-01

    This paper presents the research results of a comparison of three different model-based approaches for wind turbine fault detection in online SCADA data, by applying the developed models to five real measured faults and anomalies. The regression-based model, as the simplest approach to build a normal behavior model, is compared to two artificial neural network based approaches: a full signal reconstruction and an autoregressive normal behavior model. Based on a real time series containing two generator bearing damages, the capability of identifying the incipient fault prior to the actual failure is investigated. The period after the first bearing damage is used to develop the three normal behavior models. The developed or trained models are used to investigate how the second damage manifests in the prediction error. Furthermore, the full signal reconstruction and the autoregressive approach are applied to further real time series containing gearbox bearing damages and stator temperature anomalies. The comparison revealed that all three models are capable of detecting incipient faults. However, they differ in the effort required for model development and the remaining operational time after first indication of damage. The general nonlinear neural network approaches outperform the regression model. The remaining seasonality in the regression model prediction error makes it difficult to detect abnormality and leads to increased alarm levels and thus a shorter remaining operational period. For the bearing damages and the stator anomalies under investigation the full signal reconstruction neural network gave the best fault visibility and thus led to the highest confidence level.
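
    The normal-behavior-model idea reduces to a compact recipe: fit a model on healthy-period SCADA data, then monitor the smoothed prediction error afterwards, since a persistent rise signals an incipient fault. The sketch below uses a plain regression stand-in and synthetic signals (e.g. bearing temperature explained by power and ambient temperature).

        # Residual monitoring with a regression normal-behavior model; data synthetic.
        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n = 1000
        power = rng.uniform(0, 2000, n)                  # kW
        ambient = rng.uniform(-5, 25, n)                 # deg C
        bearing = 30 + 0.01 * power + 0.8 * ambient + rng.normal(0, 0.5, n)
        bearing[800:] += np.linspace(0, 4, 200)          # injected incipient fault

        X = np.column_stack([power, ambient])
        model = LinearRegression().fit(X[:600], bearing[:600])  # healthy period only

        residual = bearing - model.predict(X)
        smoothed = np.convolve(residual, np.ones(50) / 50, mode="valid")
        alarm = smoothed > 1.0                           # threshold on smoothed error
        print("first alarm index:", int(np.argmax(alarm)) if alarm.any() else None)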

  10. Predicting performance of polymer-bonded Terfenol-D composites under different magnetic fields

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2009-09-01

    Considering the demagnetization effect, a model to calculate the magnetostriction of a single particle under an applied field is first created. Based on the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach to calculate the average magnetostriction of the composites under any applied field, as well as the saturation magnetostriction, is studied by treating the particle magnetostriction as an eigenstrain. The results calculated by the approach indicate that the saturation magnetostriction of magnetostrictive composites increases with an increase of particle aspect ratio and particle volume fraction, and a decrease of Young's modulus of the matrix. The influence of an applied field on the magnetostriction of the composites becomes more significant with larger particle volume fraction or particle aspect ratio. Experiments were done to verify the effectiveness of the model, the results of which indicate that the model can provide only approximate results.

  11. Associations between the Classroom Learning Environment and Student Engagement in Learning 1: A Rasch Model Approach

    ERIC Educational Resources Information Center

    Cavanagh, Rob

    2012-01-01

    This report is about one of two phases in an investigation into associations between student engagement in classroom learning and the classroom learning environment. Both phases applied the same instrumentation to the same sample. The difference between the phases was in the measurement approach applied. This report is about application of the…

  12. Estimation of adsorption isotherm and mass transfer parameters in protein chromatography using artificial neural networks.

    PubMed

    Wang, Gang; Briskot, Till; Hahn, Tobias; Baumann, Pascal; Hubbuch, Jürgen

    2017-03-03

    Mechanistic modeling has been applied repeatedly and successfully in process development and control of protein chromatography. For each combination of adsorbate and adsorbent, the mechanistic models have to be calibrated. Some of the model parameters, such as system characteristics, can be determined reliably by applying well-established experimental methods, whereas others cannot be measured directly. In common practice of protein chromatography modeling, these parameters are identified by applying time-consuming methods such as frontal analysis combined with gradient experiments, curve-fitting, or the combined Yamamoto approach. For new components in the chromatographic system, these traditional calibration approaches have to be conducted repeatedly. In the presented work, a novel method for the calibration of mechanistic models based on artificial neural network (ANN) modeling was applied. An in silico screening of possible model parameter combinations was performed to generate learning material for the ANN model. Once the ANN model was trained to recognize chromatograms and to respond with the corresponding model parameter set, it was used to calibrate the mechanistic model from measured chromatograms. The ANN model's capability of parameter estimation was tested by predicting gradient elution chromatograms. The time-consuming model parameter estimation process itself could be reduced to milliseconds. The functionality of the method was successfully demonstrated in a study with the calibration of the transport-dispersive model (TDM) and the stoichiometric displacement model (SDM) for a protein mixture. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
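
    The calibration shortcut can be sketched end to end with a toy forward model: simulate chromatograms over sampled parameter sets, train a network to map curve to parameters, then invert a 'measured' curve almost instantly. A single Gaussian elution peak stands in for the transport-dispersive model here; all values are illustrative.

        # ANN-based inverse calibration with a toy chromatogram simulator.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        t = np.linspace(0, 10, 200)

        def simulate(retention, width):
            return np.exp(-0.5 * ((t - retention) / width) ** 2)  # toy elution peak

        rng = np.random.default_rng(3)
        params = np.column_stack([rng.uniform(2, 8, 500),         # retention time
                                  rng.uniform(0.2, 1.0, 500)])    # peak width
        curves = np.array([simulate(*p) for p in params])

        ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(curves, params)

        measured = simulate(5.3, 0.45) + 0.01 * rng.standard_normal(t.size)
        print("estimated parameters:",
              np.round(ann.predict(measured.reshape(1, -1))[0], 2))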

  13. Carrying BioMath education in a Leaky Bucket.

    PubMed

    Powell, James A; Kohler, Brynja R; Haefner, James W; Bodily, Janice

    2012-09-01

    In this paper, we describe a project-based mathematical lab implemented in our Applied Mathematics in Biology course. The Leaky Bucket Lab allows students to parameterize and test Torricelli's law and to develop and compare their own alternative models describing the dynamics of water draining from perforated containers. In the context of this lab students build facility with a variety of applied biomathematical tools and gain confidence in applying these tools in data-driven environments. We survey analytic approaches developed by students to illustrate the creativity this encourages, as well as to prepare other instructors to scaffold the student learning experience. Pedagogical results based on classroom videography support the notion that the Biology-Applied Math Instructional Model, the teaching framework encompassing the lab, is effective in encouraging and maintaining high-level cognition among students. Research-based pedagogical approaches that support the lab are discussed.
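
    A minimal sketch of the parameterization step, assuming Torricelli's law dh/dt = -k*sqrt(h) (whose solution for an initial height h0 is h(t) = (sqrt(h0) - k*t/2)^2) and synthetic drain data in place of the students' measurements:

```python
# Fit Torricelli's law to (synthetic) bucket-drain data.
import numpy as np
from scipy.optimize import curve_fit

def torricelli(t, h0, k):
    s = np.maximum(np.sqrt(h0) - 0.5 * k * t, 0.0)  # height cannot go negative
    return s ** 2

t_obs = np.linspace(0, 60, 13)                      # s
h_obs = torricelli(t_obs, 20.0, 0.12) \
        + np.random.default_rng(1).normal(0, 0.1, t_obs.size)

(h0_fit, k_fit), _ = curve_fit(torricelli, t_obs, h_obs, p0=[15.0, 0.1])
print(f"h0 = {h0_fit:.2f} cm, k = {k_fit:.3f} cm^0.5/s")
```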

  14. Why did the bear cross the road? Comparing the performance of multiple resistance surfaces and connectivity modeling methods

    Treesearch

    Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth

    2014-01-01

    There have been few assessments of the performance of alternative resistance surfaces, and little is known about how connectivity modeling approaches differ in their ability to predict organism movements. In this paper, we evaluate the performance of four connectivity modeling approaches applied to two resistance surfaces in predicting the locations of highway...

  15. An Assessment of the Nonparametric Approach for Evaluating the Fit of Item Response Models

    ERIC Educational Resources Information Center

    Liang, Tie; Wells, Craig S.; Hambleton, Ronald K.

    2014-01-01

    As item response theory has been more widely applied, investigating the fit of a parametric model becomes an important part of the measurement process. There is a lack of promising solutions to the detection of model misfit in IRT. Douglas and Cohen introduced a general nonparametric approach, RISE (Root Integrated Squared Error), for detecting…

  16. Online Synchronous vs. Asynchronous Software Training through the Behavioral Modeling Approach: A Longitudinal Field Experiment

    ERIC Educational Resources Information Center

    Chen, Charlie C.; Shaw, Ruey-shiang

    2006-01-01

    The continued and increasing use of online training raises the question of whether the most effective training methods applied in live instruction will carry over to different online environments in the long run. The Behavior Modeling (BM) approach--teaching through demonstration--has been shown to be the most effective approach in a face-to-face (F2F)…

  17. Using VCL as an Aspect-Oriented Approach to Requirements Modelling

    NASA Astrophysics Data System (ADS)

    Amálio, Nuno; Kelsen, Pierre; Ma, Qin; Glodt, Christian

    Software systems are becoming larger and more complex. By tackling the modularisation of crosscutting concerns, aspect orientation draws attention to modularity as a means to address the problems of scalability, complexity and evolution in software systems development. Aspect-oriented modelling (AOM) applies aspect-orientation to the construction of models. Most existing AOM approaches are designed without a formal semantics, and use multi-view partial descriptions of behaviour. This paper presents an AOM approach based on the Visual Contract Language (VCL): a visual language for abstract and precise modelling, designed with a formal semantics, and comprising a novel approach to visual behavioural modelling based on design by contract where behavioural descriptions are total. By applying VCL to a large case study of a car-crash crisis management system, the paper demonstrates how the modularity of VCL's constructs, at different levels of granularity, helps to tackle complexity. In particular, it shows how VCL's package construct and its associated composition mechanisms are key in supporting separation of concerns, coarse-grained problem decomposition and aspect-orientation. The case study's modelling solution has a clear and well-defined modular structure; the backbone of this structure is a collection of packages encapsulating local solutions to concerns.

  18. Software Reliability 2002

    NASA Technical Reports Server (NTRS)

    Wallace, Dolores R.

    2003-01-01

    In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is: 'What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric methods?' Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.

  19. Spreadsheet analysis of stability and meta-stability of low-dimensional magnetic particles using the Ising approach

    NASA Astrophysics Data System (ADS)

    Ehrmann, Andrea; Blachowicz, Tomasz; Zghidi, Hafed

    2015-05-01

    Modelling hysteresis behaviour, as found in a broad variety of dynamical systems, can be performed in different ways. An elementary approach, applied to a set of elementary cells with only two possible states per cell, is the Ising model. While such Ising models allow many systems to be simulated with sufficient accuracy, they nevertheless exhibit some characteristic features which must be handled with proper care, such as meta-stability or sensitivity to the sweeping speed of the externally applied field. This paper gives a general overview of recent results from Ising models from the perspective of a didactic model, based on a 2D spreadsheet analysis, which can also be used for solving general scientific problems where direct next-neighbour interactions take place.
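
    A compact script equivalent of such a two-state, next-neighbour model, with illustrative coupling, temperature and sweep settings, might look like:

```python
# Didactic 2D Ising sketch: two states per cell, next-neighbour coupling J,
# external field H swept slowly to trace out hysteresis-like behaviour.
import numpy as np

rng = np.random.default_rng(0)
N, J, T = 32, 1.0, 1.5
spins = rng.choice([-1, 1], size=(N, N))

def metropolis_sweep(spins, H):
    for _ in range(N * N):
        i, j = rng.integers(N, size=2)
        nb = (spins[(i+1) % N, j] + spins[(i-1) % N, j]
              + spins[i, (j+1) % N] + spins[i, (j-1) % N])
        dE = 2 * spins[i, j] * (J * nb + H)        # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

# Field sweep; sweeping too fast traps meta-stable states, as noted above.
for H in np.linspace(-2, 2, 21):
    for _ in range(10):                            # sweeps per field step
        metropolis_sweep(spins, H)
    print(f"H = {H:+.1f}  m = {spins.mean():+.3f}")
```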

  20. Facultative Stabilization Pond: Measuring Biological Oxygen Demand using Mathematical Approaches

    NASA Astrophysics Data System (ADS)

    Wira S, Ihsan; Sunarsih, Sunarsih

    2018-02-01

    Pollution is a man-made phenomenon, and pollutants discharged directly into the environment can create serious problems: untreated wastewater causes contamination and even pollution of the receiving water body. Biological oxygen demand (BOD) is the amount of oxygen required by bacteria for oxidation; the higher the BOD concentration, the greater the organic-matter content. The purpose of this study was to predict the BOD value of wastewater. Mathematical modeling was chosen to describe and predict the BOD values in facultative wastewater stabilization ponds, and sampling measurements were carried out to validate the model. The results indicate that a mathematical approach can be applied to predict the BOD in facultative wastewater stabilization ponds. The model was validated using the absolute mean error (AME) with a 10% tolerance limit; the AME was 7.38% (< 10%), so the model is considered valid. Furthermore, a mathematical approach can also be applied to illustrate and predict the contents of the wastewater.
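
    The validation step reduces to a one-line computation; a sketch with illustrative (not the study's) BOD values, assuming the common definition AME = |mean(sim) - mean(obs)| / mean(obs):

```python
# Absolute mean error (AME) check against a 10% tolerance.
import numpy as np

bod_observed  = np.array([42.0, 39.5, 44.1, 40.8, 38.9])   # mg/L, illustrative
bod_simulated = np.array([45.0, 41.2, 46.3, 43.5, 41.0])   # mg/L, illustrative

ame = abs(bod_simulated.mean() - bod_observed.mean()) / bod_observed.mean()
print(f"AME = {ame:.2%} -> model {'valid' if ame < 0.10 else 'not valid'}")
```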

  1. Constrained reduced-order models based on proper orthogonal decomposition

    DOE PAGES

    Reddy, Sohail R.; Freno, Brian Andrew; Cizmas, Paul G. A.; ...

    2017-04-09

    A novel approach is presented to constrain reduced-order models (ROM) based on proper orthogonal decomposition (POD). The Karush–Kuhn–Tucker (KKT) conditions were applied to the traditional reduced-order model to constrain the solution to user-defined bounds. The constrained reduced-order model (C-ROM) was applied and validated against the analytical solution to the first-order wave equation. C-ROM was also applied to the analysis of fluidized beds. Lastly, it was shown that the ROM and C-ROM produced accurate results and that C-ROM was less sensitive to error propagation through time than the ROM.
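
    A toy sketch of the ingredients: a POD basis extracted by SVD from snapshots, with the reconstruction held inside user-defined bounds (here enforced with a generic SLSQP solve standing in for the paper's KKT-based C-ROM formulation):

```python
# POD basis from snapshots, then bound-constrained coefficient selection.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0, 1, 100)
snapshots = np.array([np.sin(2*np.pi*(x - c)) + 0.6*c
                      for c in np.linspace(0, 0.5, 20)]).T

Phi = np.linalg.svd(snapshots, full_matrices=False)[0][:, :5]   # POD basis
u_target = np.sin(2*np.pi*(x - 0.13)) + 0.3                     # field to fit
lb, ub = -1.0, 1.0                                              # user bounds

obj = lambda a: np.sum((Phi @ a - u_target) ** 2)
cons = [{"type": "ineq", "fun": lambda a: ub - Phi @ a},        # Phi a <= ub
        {"type": "ineq", "fun": lambda a: Phi @ a - lb}]        # Phi a >= lb
a0 = Phi.T @ u_target                                           # unconstrained fit
res = minimize(obj, a0, constraints=cons, method="SLSQP")
print("max of unconstrained ROM:", (Phi @ a0).max())            # exceeds ub
print("max of constrained ROM:  ", (Phi @ res.x).max())         # ~<= ub
```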

  2. A Bayesian hierarchical latent trait model for estimating rater bias and reliability in large-scale performance assessment

    PubMed Central

    2018-01-01

    We propose a novel approach to modelling rater effects in scoring-based assessment. The approach is based on a Bayesian hierarchical model and simulations from the posterior distribution. We apply it to large-scale essay assessment data over a period of 5 years. Empirical results suggest that the model provides a good fit for both the total scores and when applied to individual rubrics. We estimate the median impact of rater effects on the final grade to be ± 2 points on a 50 point scale, while 10% of essays would receive a score at least ± 5 different from their actual quality. Most of the impact is due to rater unreliability, not rater bias. PMID:29614129

  3. A fuzzy logic approach to modeling the underground economy in Taiwan

    NASA Astrophysics Data System (ADS)

    Yu, Tiffany Hui-Kuang; Wang, David Han-Min; Chen, Su-Jane

    2006-04-01

    The size of the ‘underground economy’ (UE) is valuable information in the formulation of macroeconomic and fiscal policy. This study applies fuzzy set theory and fuzzy logic to model Taiwan's UE over the period from 1960 to 2003. Two major factors affecting the size of the UE, the effective tax rate and the degree of government regulation, are used. The size of Taiwan's UE is scaled and compared with those of other models. Although our approach yields different estimates, similar patterns and trends are exhibited throughout the period. The advantage of applying fuzzy logic is twofold. First, it avoids the complex calculations of conventional econometric models. Second, fuzzy rules with linguistic terms are easy for humans to understand.
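
    A hedged sketch of such a rule-based model (the membership functions and rule consequents below are invented for illustration, not the calibrated values from the study):

```python
# Sugeno-style fuzzy model: two inputs (tax rate, regulation), four rules.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def ue_size(tax_rate, regulation):          # inputs scaled to [0, 1]
    low_t, high_t = tri(tax_rate, -0.5, 0.0, 0.6), tri(tax_rate, 0.4, 1.0, 1.5)
    low_r, high_r = tri(regulation, -0.5, 0.0, 0.6), tri(regulation, 0.4, 1.0, 1.5)
    # Linguistic rules: heavier taxes/regulation -> larger underground economy.
    rules = [(min(low_t,  low_r),  0.05),    # UE small  (5% of GDP)
             (min(low_t,  high_r), 0.15),
             (min(high_t, low_r),  0.15),
             (min(high_t, high_r), 0.30)]    # UE large (30% of GDP)
    w = sum(m for m, _ in rules)
    return sum(m * out for m, out in rules) / w   # weighted-average defuzzification

print(f"UE share of GDP: {ue_size(0.7, 0.5):.1%}")
```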

  4. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using O(n) data points, where n is the number of parameters, and is updated over a sequence of trust regions. It avoids the slow convergence of linear models, which likewise require O(n) interpolation points, while offering features of quadratic models, which need O(n²) interpolation points. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution provides a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  5. The Effects of Sand Sediment Volume Heterogeneities on Sound Propagation and Scattering

    DTIC Science & Technology

    2011-09-01

    previously developed at APL-UW for the study of high-frequency acoustics. These models include perturbation models applied to scattering from the...shell shapes (Figure 1). The acoustic modeling to this point has utilized Ivakin's unified approach to volume and roughness scattering [3...sediments: A modeling approach and application to a shelly sand-mud environment," in the Proceedings of the European Conference on Underwater Acoustics

  6. Multiphase modeling of geologic carbon sequestration in saline aquifers.

    PubMed

    Bandilla, Karl W; Celia, Michael A; Birkholzer, Jens T; Cihan, Abdullah; Leister, Evan C

    2015-01-01

    Geologic carbon sequestration (GCS) is being considered as a climate change mitigation option in many future energy scenarios. Mathematical modeling is routinely used to predict subsurface CO2 and resident brine migration for the design of injection operations, to demonstrate the permanence of CO2 storage, and to show that other subsurface resources will not be degraded. Many processes impact the migration of CO2 and brine, including multiphase flow dynamics, geochemistry, and geomechanics, along with the spatial distribution of parameters such as porosity and permeability. In this article, we review a set of multiphase modeling approaches with different levels of conceptual complexity that have been used to model GCS. Model complexity ranges from coupled multiprocess models to simplified vertical equilibrium (VE) models and macroscopic invasion percolation models. The goal of this article is to give a framework of conceptual model complexity, and to show the types of modeling approaches that have been used to address specific GCS questions. Application of the modeling approaches is shown using five ongoing or proposed CO2 injection sites. For the selected sites, the majority of GCS models follow a simplified multiphase approach, especially for questions related to injection and local-scale heterogeneity. Coupled multiprocess models are only applied in one case where geomechanics have a strong impact on the flow. Owing to their computational efficiency, VE models tend to be applied at large scales. A macroscopic invasion percolation approach was used to predict the CO2 migration at one site to examine details of CO2 migration under the caprock. © 2015, National Ground Water Association.

  7. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment-scale water management.

    PubMed

    Refsgaard, A; Jacobsen, T; Jacobsen, B; Ørum, J-E

    2007-01-01

    The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and a full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach applied in river basin management. Point sources (e.g. sewage treatment plant discharges) and distributed diffuse sources (nitrate leakage) are included to provide a modelling tool capable of simulating pollution transport from source to recipient, to analyse the effects of specific, localized basin water management plans. The paper also includes a land rent modelling approach which can be used to choose the most cost-effective measures and the locations of these measures. As a forerunner to the use of basin-scale models in WFD basin water management plans, this project demonstrates the potential and limitations of comprehensive, integrated modelling tools.

  8. United States‐Mexican border watershed assessment: Modeling nonpoint source pollution in Ambos Nogales

    USGS Publications Warehouse

    Norman, Laura M.

    2007-01-01

    Ecological considerations need to be interwoven with economic policy and planning along the United States‐Mexican border. Non‐point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict nonpoint source pollution that can be used for border watersheds. The modeling approach links a hillslope‐scale erosion‐prediction model and a spatially derived sediment‐delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation‐planning problem.

  9. Differential equation models for sharp threshold dynamics.

    PubMed

    Schramm, Harrison C; Dimitrov, Nedialko B

    2014-01-01

    We develop an extension to differential equation models of dynamical systems to allow us to analyze probabilistic threshold dynamics that fundamentally and globally change system behavior. We apply our novel modeling approach to two cases of interest: a model of infectious disease modified for malware where a detection event drastically changes dynamics by introducing a new class in competition with the original infection; and the Lanchester model of armed conflict, where the loss of a key capability drastically changes the effectiveness of one of the sides. We derive and demonstrate a step-by-step, repeatable method for applying our novel modeling approach to an arbitrary system, and we compare the resulting differential equations to simulations of the system's random progression. Our work leads to a simple and easily implemented method for analyzing probabilistic threshold dynamics using differential equations. Published by Elsevier Inc.
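
    A minimal sketch of the malware variant described above, with illustrative rates: a detection event at time t_d switches the dynamics by activating a new class in competition with the original infection:

```python
# SIR-type malware model with a probabilistic-threshold detection event.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, delta, t_detect = 0.5, 0.05, 0.4, 25.0

def rhs(t, y):
    s, i, p = y                              # susceptible, infected, patched
    detect = 1.0 if t >= t_detect else 0.0   # threshold globally changes dynamics
    return [-beta * s * i - detect * delta * s,
            beta * s * i - gamma * i - detect * delta * i,
            detect * delta * (s + i)]        # patching competes with infection

sol = solve_ivp(rhs, (0, 100), [0.99, 0.01, 0.0], max_step=0.5)
print("final infected fraction:", sol.y[1, -1])
```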

  10. Evaluation of different approaches to modeling the second-order ionospheric delay on GPS measurements

    NASA Astrophysics Data System (ADS)

    Garcia-Fernandez, M.; Desai, S. D.; Butala, M. D.; Komjathy, A.

    2013-12-01

    This work evaluates various approaches to computing the second-order ionospheric correction (SOIC) to Global Positioning System (GPS) measurements. When estimating the reference frame using GPS, applying this correction is known to primarily affect the realization of the origin of the Earth's reference frame along the spin axis (Z coordinate). Therefore, the Z translation relative to the International Terrestrial Reference Frame 2008 is used as the metric to evaluate various published approaches to determining the slant total electron content (TEC) for the SOIC: obtaining the slant TEC directly from GPS measurements, or using the vertical TEC given by a Global Ionospheric Model (GIM) and transforming it to slant TEC via a mapping function. All of these approaches agree to 1 mm if the ionospheric shell height needed in the GIM-based approaches is set to 600 km; the commonly used shell height of 450 km introduces an offset of 1 to 2 mm. When the SOIC is not applied, the Z axis translation can be reasonably modeled with a ratio of +0.23 mm per TEC unit of the daily median GIM vertical TEC. Also, precise point positioning (PPP) solutions (positions and clocks) determined with and without the SOIC differ by less than 1 mm only if they are based upon GPS orbit and clock solutions that have consistently applied or not applied the correction, respectively. Otherwise, deviations of a few millimeters in the north component of the PPP solutions can arise due to inconsistencies with the satellite orbit and clock products, and those deviations exhibit a dependency on solar cycle conditions.
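
    The reported sensitivity lends itself to a back-of-envelope check (the daily median VTEC value below is illustrative):

```python
# Expected Z-translation bias when the second-order correction is omitted,
# using the reported ratio of +0.23 mm per TEC unit of median GIM vertical TEC.
median_vtec_tecu = 30.0
z_offset_mm = 0.23 * median_vtec_tecu
print(f"expected Z-translation bias: {z_offset_mm:.1f} mm")
```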

  11. Application of iterative robust model-based optimal experimental design for the calibration of biocatalytic models.

    PubMed

    Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar

    2017-09-01

    The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data and then performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase-catalyzed reaction more accurately. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is not only more accurate but also computationally more expensive. As a result, an important deviation between the two approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
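
    The FIM-versus-likelihood comparison can be reproduced in miniature; a sketch with a toy Michaelis-Menten initial-rate model and synthetic data (not the paper's ω-transaminase system):

```python
# Linearized (FIM-based) vs profile-likelihood confidence interval for Km.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
S = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0])         # substrate concentrations

def mm(S, Vmax, Km):
    return Vmax * S / (Km + S)

v = mm(S, 2.0, 1.5) + rng.normal(0, 0.05, S.size)     # synthetic initial rates

popt, pcov = curve_fit(mm, S, v, p0=[1.0, 1.0])
km, km_se = popt[1], np.sqrt(pcov[1, 1])
print(f"FIM-based 95% CI for Km: [{km - 1.96*km_se:.2f}, {km + 1.96*km_se:.2f}]")

# Profile likelihood: refit Vmax over a grid of fixed Km values and keep those
# whose residual sum of squares stays within the chi-square(1) cutoff.
rss_min = np.sum((v - mm(S, *popt)) ** 2)
sigma2 = rss_min / (S.size - 2)
kept = []
for km_fixed in np.linspace(0.3, 5.0, 200):
    vmax_fit = curve_fit(lambda S, Vm: mm(S, Vm, km_fixed), S, v, p0=[1.0])[0][0]
    if np.sum((v - mm(S, vmax_fit, km_fixed)) ** 2) <= rss_min + 3.84 * sigma2:
        kept.append(km_fixed)
print(f"profile-likelihood 95% CI for Km: [{min(kept):.2f}, {max(kept):.2f}]")
```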

  12. A simple method for EEG guided transcranial electrical stimulation without models.

    PubMed

    Cancelli, Andrea; Cottone, Carlo; Tecchio, Franca; Truong, Dennis Q; Dmochowski, Jacek; Bikson, Marom

    2016-06-01

    There is longstanding interest in using EEG measurements to inform transcranial electrical stimulation (tES), but adoption is lacking because users need a simple and adaptable recipe. The conventional approach is to use anatomical head models for both source localization (the EEG inverse problem) and current flow modeling (the tES forward model), but this approach is computationally demanding, requires an anatomical MRI, and makes strict assumptions about the target brain regions. We evaluate techniques whereby tES dose is derived from the EEG without the need for an anatomical head model, target assumptions, difficult case-by-case conjecture, or many stimulation electrodes. We developed a simple two-step approach to EEG-guided tES that, based on the topography of the EEG, (1) selects the locations to be used for stimulation and (2) determines the current applied to each electrode. Each step is performed based solely on the EEG, with no need for head models or source localization. Cortical dipoles represent idealized brain targets. The EEG-guided tES strategies are verified using a finite element method simulation of the EEG generated by a dipole, oriented either tangential or radial to the scalp surface, and then simulating the tES-generated electric field produced by each model-free technique. These model-free approaches are compared to a 'gold standard' numerically optimized dose of tES that assumes perfect understanding of the dipole location and head anatomy. We vary the number of electrodes from a few to over three hundred, with focality or intensity as the optimization criterion. The model-free approaches evaluated are (1) voltage-to-voltage, (2) voltage-to-current, (3) Laplacian, and two ad hoc techniques, (4) dipole sink-to-sink and (5) sink-to-concentric. Our results demonstrate that simple ad hoc approaches can achieve reasonable targeting for the case of a cortical dipole, remarkably with only 2-8 electrodes and no need for a model of the head. Our approach is verified directly only for a theoretically localized source, but may potentially be applied to an arbitrary EEG topography. For its simplicity and linearity, our recipe for model-free EEG-guided tES lends itself to broad adoption and can be applied to static (tDCS), time-variant (e.g., tACS, tRNS, tPCS), or closed-loop tES.
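
    A sketch of the simplest of the named recipes, voltage-to-current (the electrode values and current budget are invented; the mapping is just proportionality with a zero-net-current constraint):

```python
# Voltage-to-current: stimulation currents proportional to the EEG topography,
# rescaled to a total injected-current budget with zero net current.
import numpy as np

eeg_uV = np.array([4.2, 2.5, -1.1, -3.0, -2.6])   # topography at 5 electrodes
budget_mA = 2.0                                   # total injected-current limit

i = eeg_uV - eeg_uV.mean()                        # enforce zero net current
i *= budget_mA / (0.5 * np.abs(i).sum())          # half of sum(|I|) = injected
print("electrode currents (mA):", np.round(i, 3))
print("net current:", round(i.sum(), 9))          # ~0 by construction
```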

  13. Metabolic Network Modeling of Microbial Interactions in Natural and Engineered Environmental Systems

    PubMed Central

    Perez-Garcia, Octavio; Lear, Gavin; Singhal, Naresh

    2016-01-01

    We review approaches to characterize metabolic interactions within microbial communities using Stoichiometric Metabolic Network (SMN) models for applications in environmental and industrial biotechnology. SMN models are computational tools used to evaluate the metabolic engineering potential of various organisms. They have successfully been applied to design and optimize the microbial production of antibiotics, alcohols and amino acids by single strains. To date, however, such models have rarely been applied to analyze and control the metabolism of more complex microbial communities. This is largely attributed to the diversity of microbial community functions, metabolisms, and interactions. Here, we first review different types of microbial interaction and describe their relevance for natural and engineered environmental processes. Next, we provide a general description of the essential methods of the SMN modeling workflow, including the steps of network reconstruction, simulation through Flux Balance Analysis (FBA), experimental data gathering, and model calibration. Then we broadly describe and compare four approaches to model microbial interactions using metabolic networks, i.e., (i) lumped networks, (ii) compartment-per-guild networks, (iii) bi-level optimization simulations, and (iv) dynamic-SMN methods. These approaches can be used to integrate and analyze diverse microbial physiology, ecology and molecular community data. All of them (except the lumped approach) are suitable for incorporating species abundance data, but so far they have been used only to model simple communities of two to eight different species. Interactions based on substrate exchange and competition can be directly modeled using the above approaches. However, interactions based on metabolic feedbacks, such as product inhibition and syntrophy, require extensions to current models, incorporating gene regulation and compound accumulation mechanisms. SMN models of microbial interactions can be used to analyze complex “omics” data and to infer and optimize metabolic processes. Thereby, SMN models are well suited to capitalize on advances in high-throughput molecular and metabolic data generation. SMN models are starting to be applied to describe microbial interactions during wastewater treatment, in situ bioremediation, microalgae blooms, methanogenic fermentation, and bioplastic production. Despite their current challenges, we envisage that SMN models have future potential for the design and development of novel growth media, biochemical pathways and synthetic microbial associations. PMID:27242701
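
    The FBA step at the core of the workflow is a linear program; a self-contained toy example (the stoichiometry is invented for illustration):

```python
# Minimal flux balance analysis: maximize biomass flux v3 subject to steady
# state S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Metabolites x1, x2; reactions v1: uptake -> x1, v2: x1 -> x2, v3: x2 -> biomass
S = np.array([[1, -1,  0],
              [0,  1, -1]])
c = [0, 0, -1]                              # linprog minimizes, so maximize -v3
bounds = [(0, 10), (0, 8), (0, None)]       # uptake capped at 10, v2 at 8

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x, "biomass flux:", -res.fun)
```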

  14. Dynamic modeling and parameter estimation of a radial and loop type distribution system network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Qui; Heng Chen; Girgis, A.A.

    1993-05-01

    This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are also presented in this paper.
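
    A sketch of discrete-time input-output (ARX-type) estimation for a 2-input, 2-output process, with a synthetic system standing in for measured network data:

```python
# Least-squares identification of a first-order MIMO input-output model.
import numpy as np

rng = np.random.default_rng(0)
T = 500
u = rng.normal(size=(T, 2))                      # two inputs
A = np.array([[0.6, 0.1], [0.0, 0.5]])           # true output recursion
B = np.array([[1.0, 0.0], [0.3, 0.8]])           # true input gains
y = np.zeros((T, 2))
for k in range(1, T):
    y[k] = A @ y[k-1] + B @ u[k-1] + 0.01 * rng.normal(size=2)

# Regress y[k] on [y[k-1], u[k-1]] to recover the model matrices.
Z = np.hstack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
A_hat, B_hat = theta[:2].T, theta[2:].T
print("A estimate:\n", np.round(A_hat, 3))
print("B estimate:\n", np.round(B_hat, 3))
```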

  15. Flipped Classroom Adapted to the ARCS Model of Motivation and Applied to a Physics Course

    ERIC Educational Resources Information Center

    Asiksoy, Gülsüm; Özdamli, Fezile

    2016-01-01

    This study aims to determine the effect on the achievement, motivation and self-sufficiency of students of the flipped classroom approach adapted to Keller's ARCS (Attention, Relevance, Confidence and Satisfaction) motivation model and applied to a physics course. The study involved 66 students divided into two classes of a physics course. The…

  16. Fractional Derivative Models for Ultrasonic Characterization of Polymer and Breast Tissue Viscoelasticity

    PubMed Central

    Coussot, Cecile; Kalyanam, Sureshkumar; Yapp, Rebecca; Insana, Michael F.

    2009-01-01

    The viscoelastic response of hydropolymers, which include glandular breast tissues, may be accurately characterized for some applications with as few as 3 rheological parameters by applying the Kelvin-Voigt fractional derivative (KVFD) modeling approach. We describe a technique for ultrasonic imaging of KVFD parameters in media undergoing unconfined, quasi-static, uniaxial compression. We analyze the KVFD parameter values in simulated and experimental echo data acquired from phantoms and show that the KVFD parameters may concisely characterize the viscoelastic properties of hydropolymers. We then interpret the KVFD parameter values for normal and cancerous breast tissues and hypothesize that this modeling approach may ultimately be applied to tumor differentiation. PMID:19406700
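
    A sketch of a 3-parameter KVFD fit, assuming the step-strain relaxation form σ(t) = ε0·E·(1 + (t/τ)^(−α)/Γ(1−α)) and synthetic data in arbitrary units:

```python
# Fit the three KVFD parameters (E, tau, alpha) to a force-relaxation record.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import gamma as Gamma

e0 = 0.05                                            # applied step strain

def kvfd_stress(t, E, tau, alpha):
    return e0 * E * (1.0 + (t / tau) ** (-alpha) / Gamma(1.0 - alpha))

t = np.linspace(0.1, 100, 300)                       # avoid the t = 0 singularity
sigma = kvfd_stress(t, 4.0, 2.0, 0.25) \
        + np.random.default_rng(2).normal(0, 0.002, t.size)

popt, _ = curve_fit(kvfd_stress, t, sigma, p0=[1.0, 1.0, 0.3],
                    bounds=([0, 1e-3, 0.01], [100, 100, 0.99]))
print("E = %.3g, tau = %.3g, alpha = %.3g" % tuple(popt))
```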

  17. Quality Concerns in Technical Education in India: A Quantifiable Quality Enabled Model

    ERIC Educational Resources Information Center

    Gambhir, Victor; Wadhwa, N. C.; Grover, Sandeep

    2016-01-01

    Purpose: The paper aims to discuss current Technical Education scenarios in India. It proposes modelling the factors affecting quality in a technical institute and then applying a suitable technique for assessment, comparison and ranking. Design/methodology/approach: The paper chose graph theoretic approach for quantification of quality-enabled…

  18. Gender and Infertility: A Relational Approach To Counseling Women.

    ERIC Educational Resources Information Center

    Gibson, Donna M.; Myers, Jane E.

    2000-01-01

    The Relational Model (J. V. Jordan, 1995) of women's development is a theory that explains women's development in a context of relationships, specifically relationships that promote growth for self and others. This model is applied to counseling women who are experiencing infertility, and a case presentation is provided to illustrate the approach.…

  19. Breast tumor malignancy modelling using evolutionary neural logic networks.

    PubMed

    Tsakonas, Athanasios; Dounias, Georgios; Panagi, Georgia; Panourgias, Evangelia

    2006-01-01

    The present work proposes a computer-assisted methodology for effective modelling of the diagnostic decision on breast tumor malignancy. The suggested approach is based on innovative hybrid computational intelligence algorithms applied to related cytological data contained in past medical records. The experimental data used in this study were gathered in the early 1990s at the University of Wisconsin, based on post-diagnostic cytological observations performed by expert medical staff. Data were encoded in a computer database, and various alternative modelling techniques were applied to them in an attempt to form diagnostic models. Previous methods included standard optimisation techniques as well as artificial intelligence approaches, and a variety of related publications exists in the modern literature on the subject. In this report, a hybrid computational intelligence approach is suggested, which effectively combines modern mathematical logic principles, neural computation and genetic programming. The approach proves promising both in terms of diagnostic accuracy and generalization capabilities, and in terms of comprehensibility and practical importance for the related medical staff.

  20. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, J.; Polly, B.; Collis, J.

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define 'explicit' input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.
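
    The simplest of the four methods, output ratio calibration, reduces to a single scaling; a sketch with illustrative values:

```python
# Output ratio calibration: scale simulated energy use by the measured/simulated
# ratio, then apply the same ratio to retrofit predictions.
import numpy as np

sim_monthly_kwh  = np.array([820, 760, 690, 610, 540, 700])
meas_monthly_kwh = np.array([900, 830, 740, 650, 600, 760])

ratio = meas_monthly_kwh.sum() / sim_monthly_kwh.sum()
sim_retrofit_kwh = np.array([700, 650, 590, 520, 470, 600])   # post-measure run
print(f"calibration ratio: {ratio:.3f}")
print("calibrated retrofit prediction:", np.round(ratio * sim_retrofit_kwh, 0))
```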

  1. Evaluation of Automated Model Calibration Techniques for Residential Building Energy Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Joseph; Polly, Ben; Collis, Jon

    2013-09-01

    This simulation study adapts and applies the general framework described in BESTEST-EX (Judkoff et al 2010) for self-testing residential building energy model calibration methods. BEopt/DOE-2.2 is used to evaluate four mathematical calibration methods in the context of monthly, daily, and hourly synthetic utility data for a 1960's-era existing home in a cooling-dominated climate. The home's model inputs are assigned probability distributions representing uncertainty ranges, random selections are made from the uncertainty ranges to define "explicit" input values, and synthetic utility billing data are generated using the explicit input values. The four calibration methods evaluated in this study are: an ASHRAE 1051-RP-based approach (Reddy and Maor 2006), a simplified simulated annealing optimization approach, a regression metamodeling optimization approach, and a simple output ratio calibration approach. The calibration methods are evaluated for monthly, daily, and hourly cases; various retrofit measures are applied to the calibrated models and the methods are evaluated based on the accuracy of predicted savings, computational cost, repeatability, automation, and ease of implementation.

  2. Bond Graph Modeling of Chemiosmotic Biomolecular Energy Transduction.

    PubMed

    Gawthrop, Peter J

    2017-04-01

    Engineering systems modeling and analysis based on the bond graph approach has been applied to biomolecular systems. In this context, the notion of a Faraday-equivalent chemical potential is introduced, which allows chemical potential to be expressed in a manner analogous to electrical volts, thus allowing engineering intuition to be applied to biomolecular systems. Redox reactions, and their representation by half-reactions, are key components of biological systems which involve both electrical and chemical domains. A bond graph interpretation of redox reactions is given which combines bond graphs with the Faraday-equivalent chemical potential. This approach is particularly relevant when the biomolecular system implements chemoelectrical transduction - for example chemiosmosis within the key metabolic pathway of mitochondria: oxidative phosphorylation. An alternative way of implementing computational modularity using bond graphs is introduced and used to give a physically based model of the mitochondrial electron transport chain. To illustrate the overall approach, this model is analyzed using the Faraday-equivalent chemical potential approach, and engineering intuition is used to guide affinity equalisation: an energy-based analysis of the mitochondrial electron transport chain.

  3. Hurricane Sandy Economic Impacts Assessment: A Computable General Equilibrium Approach and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boero, Riccardo; Edwards, Brian Keith

    Economists use computable general equilibrium (CGE) models to assess how economies react and self-organize after changes in policies, technology, and other exogenous shocks. CGE models are equation-based, empirically calibrated, and inspired by Neoclassical economic theory. The focus of this work was to validate the National Infrastructure Simulation and Analysis Center (NISAC) CGE model and apply it to the problem of assessing the economic impacts of severe events. We used the 2012 Hurricane Sandy event as our validation case. In particular, this work first introduces the model and then describes the validation approach and the empirical data available for studying the event of focus. Shocks to the model are then formalized and applied. Finally, model results and limitations are presented and discussed, pointing out both the model's degree of accuracy and the assessed total damage caused by Hurricane Sandy.

  4. A Novel Approach to Adaptive Flow Separation Control

    DTIC Science & Technology

    2016-09-03

    particular, it considers control of flow separation over a NACA-0025 airfoil using microjet actuators and develops Adaptive Sampling Based Model Predictive Control (Adaptive SBMPC), a novel approach to Nonlinear Model Predictive Control that applies the Minimal Resource Allocation Network...

  5. A Log Logistic Survival Model Applied to Hypobaric Decompression Sickness

    NASA Technical Reports Server (NTRS)

    Conkin, Johnny

    2001-01-01

    Decompression sickness (DCS) is a complex, multivariable problem. A mathematical description or model of the likelihood of DCS requires a large amount of quality research data, ideas on how to define a decompression dose using physical and physiological variables, and an appropriate analytical approach. It also requires a high-performance computer with specialized software. I have used published DCS data to develop my decompression doses, which are variants of equilibrium expressions for evolved gas plus other explanatory variables. My analytical approach is survival analysis, where the time of DCS occurrence is modeled. My conclusions can be applied to simple hypobaric decompressions - ascents lasting from 5 to 30 minutes - and, after minutes to hours, to denitrogenation (prebreathing). They are also applicable to long or short exposures, and can be used whether the sufferer of DCS is at rest or exercising at altitude. Ultimately I would like my models to be applied to astronauts to reduce the risk of DCS during spacewalks, as well as to future spaceflight crews on the Moon and Mars.
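
    A sketch of the log-logistic ingredient, with a hypothetical log-linear link from a decompression dose to the survival scale (the coefficients are invented for illustration):

```python
# Log-logistic survival model for DCS onset time: S(t) = 1 / (1 + (t/a)^b).
import numpy as np

def survival(t, a, b):
    """Probability of remaining DCS-free through time t."""
    return 1.0 / (1.0 + (t / a) ** b)

def scale_from_dose(dose, beta0=2.0, beta1=-0.8):
    return np.exp(beta0 + beta1 * dose)      # higher dose -> earlier onset

for dose in (0.5, 1.0, 1.5):
    a = scale_from_dose(dose)
    p_dcs_4h = 1.0 - survival(4.0, a, b=2.5)
    print(f"dose {dose:.1f}: P(DCS within 4 h) = {p_dcs_4h:.2f}")
```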

  6. Hydrological modelling in forested systems

    EPA Science Inventory

    This chapter provides a brief overview of forest hydrology modelling approaches for answering important global research and management questions. Many hundreds of hydrological models have been applied globally across multiple decades to represent and predict forest hydrological p...

  7. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
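
    The two classical point estimates compared above can be written down directly; a sketch with illustrative measurement results:

```python
# Weighted mean with Birge-ratio inflation vs DerSimonian-Laird random effects.
import numpy as np

x = np.array([6.6740, 6.6745, 6.6754, 6.6737])   # measured values, e.g. G
u = np.array([0.0005, 0.0003, 0.0006, 0.0004])   # stated standard uncertainties

w = 1.0 / u**2
xbar = np.sum(w * x) / np.sum(w)                 # weighted least squares
chi2 = np.sum(w * (x - xbar) ** 2)
birge = np.sqrt(chi2 / (len(x) - 1))             # >1 signals excess scatter
u_wls = np.sqrt(1.0 / np.sum(w)) * max(birge, 1.0)   # inflate, never deflate

# DerSimonian-Laird moment estimate of the between-measurement variance tau^2.
tau2 = max(0.0, (chi2 - (len(x) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1.0 / (u**2 + tau2)
xbar_re = np.sum(w_re * x) / np.sum(w_re)
print(f"WLS: {xbar:.5f} +/- {u_wls:.5f} (Birge {birge:.2f})")
print(f"random effects: {xbar_re:.5f} +/- {np.sqrt(1/np.sum(w_re)):.5f}")
```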

  8. A new decision sciences for complex systems.

    PubMed

    Lempert, Robert J

    2002-05-14

    Models of complex systems can capture much useful information but can be difficult to apply to real-world decision-making because the type of information they contain is often inconsistent with that required for traditional decision analysis. New approaches, which use inductive reasoning over large ensembles of computational experiments, now make possible systematic comparison of alternative policy options using models of complex systems. This article describes Computer-Assisted Reasoning, an approach to decision-making under conditions of deep uncertainty that is ideally suited to applying complex systems to policy analysis. The article demonstrates the approach on the policy problem of global climate change, with a particular focus on the role of technology policies in a robust, adaptive strategy for greenhouse gas abatement.

  9. A Novel Ontology Approach to Support Design for Reliability considering Environmental Effects

    PubMed Central

    Sun, Bo; Li, Yu; Ye, Tianyuan

    2015-01-01

    Environmental effects are not considered sufficiently in product design. Reliability problems caused by environmental effects are very prominent. This paper proposes a method to apply ontology approach in product design. During product reliability design and analysis, environmental effects knowledge reusing is achieved. First, the relationship of environmental effects and product reliability is analyzed. Then environmental effects ontology to describe environmental effects domain knowledge is designed. Related concepts of environmental effects are formally defined by using the ontology approach. This model can be applied to arrange environmental effects knowledge in different environments. Finally, rubber seals used in the subhumid acid rain environment are taken as an example to illustrate ontological model application on reliability design and analysis. PMID:25821857

  10. A novel ontology approach to support design for reliability considering environmental effects.

    PubMed

    Sun, Bo; Li, Yu; Ye, Tianyuan; Ren, Yi

    2015-01-01

    Environmental effects are not considered sufficiently in product design. Reliability problems caused by environmental effects are very prominent. This paper proposes a method to apply ontology approach in product design. During product reliability design and analysis, environmental effects knowledge reusing is achieved. First, the relationship of environmental effects and product reliability is analyzed. Then environmental effects ontology to describe environmental effects domain knowledge is designed. Related concepts of environmental effects are formally defined by using the ontology approach. This model can be applied to arrange environmental effects knowledge in different environments. Finally, rubber seals used in the subhumid acid rain environment are taken as an example to illustrate ontological model application on reliability design and analysis.

  11. Efficient fuzzy Bayesian inference algorithms for incorporating expert knowledge in parameter estimation

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad

    2016-05-01

    Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision essentially embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows uncertainty and imprecision to be modeled distinguishably, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes extremely large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer; it is shown that for this test case the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
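
    A toy sketch of the surrogate-screening idea (an analytic stand-in plays the expensive simulator, and a simple polynomial response surface plays the surrogate, rather than whatever emulator the study used):

```python
# Screen the parameter grid with a cheap surrogate posterior; run the expensive
# model only where appreciable posterior mass survives the screen.
import numpy as np

def expensive_model(k):                        # e.g., a groundwater simulator
    return np.exp(-k * np.linspace(0, 5, 20))  # analytic stand-in here

obs = expensive_model(0.8) + np.random.default_rng(4).normal(0, 0.02, 20)
grid = np.linspace(0.1, 2.0, 400)

# Surrogate: polynomial response surface fitted to 10 pilot runs.
pilots = np.linspace(0.1, 2.0, 10)
runs = np.array([expensive_model(k) for k in pilots])
coefs = [np.polyfit(pilots, runs[:, j], deg=3) for j in range(20)]
surrogate = lambda k: np.array([np.polyval(c, k) for c in coefs])

def log_post(pred):                            # flat prior, Gaussian likelihood
    return -0.5 * np.sum((obs - pred) ** 2) / 0.02**2

approx = np.array([log_post(surrogate(k)) for k in grid])
screen = grid[approx > approx.max() - 10.0]    # keep the plausible region only
exact = np.array([log_post(expensive_model(k)) for k in screen])
print(f"full-model runs: {screen.size} of {grid.size} grid points")
print("posterior mode (screened):", screen[np.argmax(exact)])
```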

  12. Nitrous oxide emissions from agricultural landscapes: quantification tools, policy development, and opportunities for improved management

    NASA Astrophysics Data System (ADS)

    Tonitto, C.; Gurwick, N. P.

    2012-12-01

    Policy initiatives to reduce greenhouse gas emissions (GHG) have promoted the development of agricultural management protocols to increase SOC storage and reduce GHG emissions. We review approaches for quantifying N2O flux from agricultural landscapes. We summarize the temporal and spatial extent of observations across representative soil classes, climate zones, cropping systems, and management scenarios. We review applications of simulation and empirical modeling approaches and compare validation outcomes across modeling tools. Subsequently, we review current model application in agricultural management protocols. In particular, we compare approaches adapted for compliance with the California Global Warming Solutions Act, the Alberta Climate Change and Emissions Management Act, and by the American Carbon Registry. In the absence of regional data to drive model development, policies that require GHG quantification often use simple empirical models based on highly aggregated data of N2O flux as a function of applied N - Tier 1 models according to IPCC categorization. As participants in development of protocols that could be used in carbon offset markets, we observed that stakeholders outside of the biogeochemistry community favored outcomes from simulation modeling (Tier 3) rather than empirical modeling (Tier 2). In contrast, scientific advisors were more accepting of outcomes based on statistical approaches that rely on local observations, and their views sometimes swayed policy practitioners over the course of policy development. Both Tier 2 and Tier 3 approaches have been implemented in current policy development, and it is important that the strengths and limitations of both approaches, in the face of available data, be well-understood by those drafting and adopting policies and protocols. The reliability of all models is contingent on sufficient observations for model development and validation. Simulation models applied without site-calibration generally result in poor validation results, and this point particularly needs to be emphasized during policy development. For cases where sufficient calibration data are available, simulation models have demonstrated the ability to capture seasonal patterns of N2O flux. The reliability of statistical models likewise depends on data availability. Because soil moisture is a significant driver of N2O flux, the best outcomes occur when empirical models are applied to systems with relevant soil classification and climate. The structure of current carbon offset protocols is not well-aligned with a budgetary approach to GHG accounting. Current protocols credit field-scale reduction in N2O flux as a result of reduced fertilizer use. Protocols do not award farmers credit for reductions in CO2 emissions resulting from reduced production of synthetic N fertilizer. To achieve the greatest GHG emission reductions through reduced synthetic N production and reduced landscape N saturation requires a re-envisioning of the agricultural landscape to include cropping systems with legume and manure N sources. The current focus on on-farm GHG sources focuses credits on simple reductions of N applied in conventional systems rather than on developing cropping systems which promote higher recycling and retention of N.
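
    For reference, the Tier 1-style quantification mentioned above amounts to a one-line calculation (IPCC 2006 default direct emission factor EF1 = 0.01 kg N2O-N per kg N applied; the application rate below is illustrative):

```python
# IPCC Tier 1-style direct N2O emission from applied N; 44/28 converts
# N2O-N to N2O mass.
n_applied_kg = 150.0                # synthetic N per hectare and year
ef1 = 0.01                          # IPCC Tier 1 default emission factor
n2o_kg = n_applied_kg * ef1 * 44.0 / 28.0
print(f"direct N2O emission: {n2o_kg:.2f} kg N2O/ha/yr")
```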

  13. Seeking for the rational basis of the median model: the optimal combination of multi-model ensemble results

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Giunta, G.; Galmarini, S.

    2007-04-01

    In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.

  14. Seeking for the rational basis of the Median Model: the optimal combination of multi-model ensemble results

    NASA Astrophysics Data System (ADS)

    Riccio, A.; Giunta, G.; Galmarini, S.

    2007-12-01

    In this paper we present an approach for the statistical analysis of multi-model ensemble results. The models considered here are operational long-range transport and dispersion models, also used for the real-time simulation of pollutant dispersion or the accidental release of radioactive nuclides. We first introduce the theoretical basis (with its roots sinking into the Bayes theorem) and then apply this approach to the analysis of model results obtained during the ETEX-1 exercise. We recover some interesting results, supporting the heuristic approach called "median model", originally introduced in Galmarini et al. (2004a, b). This approach also provides a way to systematically reduce (and quantify) model uncertainties, thus supporting the decision-making process and/or regulatory-purpose activities in a very effective manner.
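
    Operationally, the median model is a member-wise median over the ensemble; a minimal sketch on a synthetic 7-member ensemble:

```python
# Member-wise ensemble median, with an interquartile band as a simple
# quantification of model uncertainty.
import numpy as np

rng = np.random.default_rng(0)
ensemble = rng.lognormal(mean=0.0, sigma=1.0, size=(7, 50, 50))  # 7 model fields
median_model = np.median(ensemble, axis=0)           # the "median model" field
spread = np.percentile(ensemble, [25, 75], axis=0)   # ensemble spread
print("median at grid centre:", median_model[25, 25])
```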

  15. Switching Kalman filter for failure prognostic

    NASA Astrophysics Data System (ADS)

    Lim, Chi Keong Reuben; Mba, David

    2015-02-01

    The use of condition monitoring (CM) data to predict remaining useful life has been growing with the increasing use of health and usage monitoring systems on aircraft. Many data-driven methodologies are available for this prediction; popular ones include artificial intelligence and statistical approaches. Their drawback is that they require a large amount of failure data for training, which can be scarce in practice. In lieu of this, methods using state-space and regression-based models that extract information from the data history itself have been explored. However, such methods have their own limitations, as they utilize a single time-invariant model which does not represent a changing degradation path well. This causes most degradation modeling studies to focus only on segments of their CM data that behave close to the assumed model. In this paper, a state-space based method, the Switching Kalman Filter (SKF), is adopted for model estimation and life prediction. The SKF approach, however, uses multiple models, from which the most probable model is inferred from the CM data using Bayesian estimation before it is applied for prediction. At the same time, the inference of the degradation model itself can provide maintainers with more information for their planning. This SKF approach is demonstrated with a case study on gearbox bearings that were found defective on Republic of Singapore Air Force AH64D helicopters. The use of in-service CM data allows the approach to be applied in a practical scenario, and results showed that the developed SKF approach is a promising tool to support maintenance decision-making.
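
    A simplified sketch of the idea: a bank of two Kalman filters (slow versus fast degradation rate, both invented) whose posterior model probabilities are updated from the innovation likelihoods. A full SKF additionally mixes the model-conditioned states; that step is omitted here for brevity:

```python
# Multiple-model Kalman filtering with Bayesian model-probability updates.
import numpy as np

rng = np.random.default_rng(3)
rates = [0.02, 0.10]                         # candidate degradation rates
q, r = 1e-4, 0.05                            # process / measurement noise
T = 60
truth = 1.0 + 0.10 * np.arange(T)            # the fast mode is the true one
z = truth + rng.normal(0, np.sqrt(r), T)     # CM measurements

x = np.array([z[0], z[0]]); P = np.array([1.0, 1.0])
prob = np.array([0.5, 0.5])
for k in range(1, T):
    like = np.empty(2)
    for m, a in enumerate(rates):
        x[m] += a                            # predict: linear degradation
        P[m] += q
        S = P[m] + r                         # innovation variance
        v = z[k] - x[m]
        like[m] = np.exp(-0.5 * v**2 / S) / np.sqrt(2 * np.pi * S)
        K = P[m] / S                         # Kalman gain, measurement update
        x[m] += K * v
        P[m] *= (1 - K)
    prob = prob * like
    prob /= prob.sum()                       # Bayesian model inference
print("P(fast degradation mode):", round(prob[1], 4))
print("fused state estimate:", round(prob @ x, 3))
```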

  16. Descriptive vs. mechanistic network models in plant development in the post-genomic era.

    PubMed

    Davila-Velderrain, J; Martinez-Garcia, J C; Alvarez-Buylla, E R

    2015-01-01

    Network modeling is now a widespread practice in systems biology, as well as in integrative genomics, and it constitutes a rich and diverse scientific research field. A conceptually clear understanding of the reasoning behind the main existing modeling approaches, and their associated technical terminologies, is required to avoid confusion and to accelerate the transition towards an undeniably necessary, more quantitative, multidisciplinary approach to biology. Herein, we focus on two main network-based modeling approaches that are commonly used depending on the information available and the intended goals: inference-based methods and system dynamics approaches. Data-based network inference methods enable the discovery of potential functional influences among molecular components. Experimentally grounded network dynamical models, on the other hand, have been shown to be perfectly suited for the mechanistic study of developmental processes. How do these two perspectives relate to each other? In this chapter, we describe and compare both approaches and then apply them to a given specific developmental module. Along with the step-by-step practical implementation of each approach, we also discuss their respective goals, utility, assumptions, and associated limitations. We use the gene regulatory network (GRN) involved in Arabidopsis thaliana Root Stem Cell Niche patterning as our illustrative example. We show that descriptive models based on functional genomics data can provide important background information consistent with experimentally supported functional relationships integrated in mechanistic GRN models. The rationale of analysis and modeling can be applied to any other well-characterized functional developmental module in multicellular organisms, like plants and animals.
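
    On the mechanistic side, the workhorse is often a Boolean network whose attractors are read as stable expression states; a toy 3-node example with invented rules (not the Arabidopsis root stem cell niche GRN):

```python
# Exhaustive attractor search for a small Boolean gene regulatory network.
from itertools import product

def update(state):
    a, b, c = state
    return (int(not c),        # A is repressed by C
            int(a and not c),  # B needs A and is repressed by C
            int(b))            # C is activated by B

attractors = set()
for s in product([0, 1], repeat=3):
    seen, state = [], s
    while state not in seen:           # iterate until a state repeats
        seen.append(state)
        state = update(state)
    cycle = tuple(seen[seen.index(state):])          # the reached attractor
    # Normalize the cycle by rotation so equivalent cycles compare equal.
    attractors.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))
print("attractors:", attractors)
```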

  17. Predicting future protection of respirator users: Statistical approaches and practical implications.

    PubMed

    Hu, Chengcheng; Harber, Philip; Su, Jing

    2016-01-01

    The purpose of this article is to describe a statistical approach for predicting a respirator user's fit factor in the future based upon results from initial tests. A statistical prediction model was developed based upon joint distribution of multiple fit factor measurements over time obtained from linear mixed effect models. The model accounts for within-subject correlation as well as short-term (within one day) and longer-term variability. As an example of applying this approach, model parameters were estimated from a research study in which volunteers were trained by three different modalities to use one of two types of respirators. They underwent two quantitative fit tests at the initial session and two on the same day approximately six months later. The fitted models demonstrated correlation and gave the estimated distribution of future fit test results conditional on past results for an individual worker. This approach can be applied to establishing a criterion value for passing an initial fit test to provide reasonable likelihood that a worker will be adequately protected in the future; and to optimizing the repeat fit factor test intervals individually for each user for cost-effective testing.
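
    A sketch of the prediction step under a joint-normal model for (initial, future) log fit factors; the mean, variance and correlation values below are invented for illustration:

```python
# Conditional distribution of a future fit factor given an initial test result.
import numpy as np
from scipy.stats import norm

mu, sigma, rho = np.log(200), 0.5, 0.6       # log-scale mean, SD, correlation
x0 = np.log(150)                             # worker's initial fit factor

cond_mean = mu + rho * (x0 - mu)             # conditional-normal formulas
cond_sd = sigma * np.sqrt(1 - rho**2)
p_pass = 1 - norm.cdf(np.log(100), cond_mean, cond_sd)
print(f"P(future fit factor >= 100) = {p_pass:.2f}")
```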

  18. Three novel approaches to structural identifiability analysis in mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2016-05-06

    Structural identifiability is a concept that considers whether the structure of a model, together with a set of input-output relations, uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. As method development of structural identifiability techniques for mixed-effects models has been given very little attention despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was not previously possible.

  19. A Model for Applying Lexical Approach in Teaching Russian Grammar.

    ERIC Educational Resources Information Center

    Gettys, Serafima

    The lexical approach to teaching Russian grammar is explained, an instructional sequence is outlined, and a classroom study testing the effectiveness of the approach is reported. The lexical approach draws on research on cognitive psychology, second language acquisition theory, and research on learner language. Its bases in research and its…

  20. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
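
    As a minimal illustration of the data-generating process behind such comparisons, the sketch below simulates correlated binary responses from a random-intercept logistic GLMM (all parameter values hypothetical); packages differ mainly in how they integrate the random effect out of the likelihood (e.g., quadrature versus linearization), which is the source of the discrepancies discussed.

        import numpy as np

        rng = np.random.default_rng(1)
        n_subj, n_rep = 200, 5
        beta0, beta1, sigma_b = -0.5, 1.0, 1.2       # hypothetical fixed effects and SD

        b = rng.normal(0.0, sigma_b, n_subj)         # subject-level random intercepts
        x = rng.normal(size=(n_subj, n_rep))         # a within-subject covariate
        eta = beta0 + beta1 * x + b[:, None]         # linear predictor
        p = 1.0 / (1.0 + np.exp(-eta))               # logistic link
        y = rng.binomial(1, p)                       # correlated binary responses

        # The shared intercept b induces within-subject correlation; a GLMM
        # fit must integrate b out of the likelihood to recover beta0, beta1.
        print("observed mean response:", y.mean())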

  1. A comparison of scope for growth (SFG) and dynamic energy budget (DEB) models applied to the blue mussel ( Mytilus edulis)

    NASA Astrophysics Data System (ADS)

    Filgueira, Ramón; Rosland, Rune; Grant, Jon

    2011-11-01

    Growth of Mytilus edulis was simulated using individual-based models following both Scope For Growth (SFG) and Dynamic Energy Budget (DEB) approaches. These models were parameterized using independent studies and calibrated for each dataset by adjusting the half-saturation coefficient of the food ingestion function term, XK, a common parameter in both approaches related to feeding behavior. Auto-calibration was carried out using an optimization tool, which provides an objective way of tuning the model. Both approaches yielded similar performance, suggesting that although the basis for constructing the models is different, both can successfully reproduce M. edulis growth. The good performance of both models in different environments, achieved by adjusting a single parameter, XK, highlights the potential of these models for (1) producing prospective analyses of mussel growth and (2) investigating mussel feeding response in different ecosystems. Finally, we emphasize that the convergence of two different modeling approaches via calibration of XK indicates the importance of feeding behavior and local trophic conditions for bivalve growth performance. Consequently, further investigations should explore the relationship of XK to environmental variables and/or the sophistication of the functional response to food availability, with the final objective of creating a general model that can be applied to different ecosystems without the need for calibration.
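
    The calibration step described, tuning the single half-saturation coefficient XK against observed growth, can be sketched with a toy growth model and an optimization routine; the growth equation, parameter values, and "observations" below are invented placeholders, not the SFG or DEB formulations.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Hypothetical food concentrations and observed mussel weights (g); the
        # toy model uses a saturating ingestion term f = X / (X + XK).
        days = np.arange(120)
        food = 1.5 + 0.5 * np.sin(2 * np.pi * days / 60.0)   # chlorophyll proxy
        obs = 0.5 * np.exp(0.006 * days)                     # stand-in for field data

        def simulate(XK, w0=0.5, a=0.02, m=0.008):
            w = np.empty(len(days))
            w_cur = w0
            for i, X in enumerate(food):
                w[i] = w_cur
                w_cur += (a * X / (X + XK) - m) * w_cur      # net daily growth
            return w

        sse = lambda XK: np.sum((simulate(XK) - obs) ** 2)   # calibration objective
        res = minimize_scalar(sse, bounds=(0.01, 10.0), method="bounded")
        print("calibrated XK:", round(res.x, 3))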

  2. A graphical vector autoregressive modelling approach to the analysis of electronic diary data

    PubMed Central

    2010-01-01

    Background In recent years, electronic diaries have increasingly been used in medical research and practice to investigate patients' processes and fluctuations in symptoms over time. To model dynamic dependence structures and feedback mechanisms between symptom-relevant variables, a multivariate time series method has to be applied. Methods We propose to analyse the temporal interrelationships among the variables by a structural modelling approach based on graphical vector autoregressive (VAR) models. We give a comprehensive description of the underlying concepts and explain how the dependence structure can be recovered from electronic diary data by a search over suitable constrained (graphical) VAR models. Results The graphical VAR approach is applied to the electronic diary data of 35 obese patients with and without binge eating disorder (BED). The dynamic relationships between eating behaviour, depression, anxiety and eating control for the two subgroups are visualized in two path diagrams. Results show that the two subgroups of obese patients with and without BED are distinguishable by the temporal patterns which influence their respective eating behaviours. Conclusion The use of the graphical VAR approach for the analysis of electronic diary data leads to a deeper insight into patients' dynamics and dependence structures. An increasing use of this modelling approach could lead to a better understanding of complex psychological and physiological mechanisms in different areas of medical care and research. PMID:20359333
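
    A minimal sketch of the underlying VAR machinery, on invented data: a three-variable VAR(1) process is simulated and its coefficient matrix recovered by least squares; a graphical VAR analysis would then search over zero-constraints on this matrix to recover the path diagram.

        import numpy as np

        rng = np.random.default_rng(2)
        T, k = 200, 3                       # diary length, number of symptom variables
        A_true = np.array([[0.5, 0.2, 0.0],
                           [0.0, 0.4, 0.3],
                           [0.1, 0.0, 0.6]])
        y = np.zeros((T, k))
        for t in range(1, T):               # simulate a VAR(1) process
            y[t] = A_true @ y[t - 1] + rng.normal(0, 0.5, k)

        # OLS estimate: regress y_t on y_{t-1}, one equation per variable.
        X, Y = y[:-1], y[1:]
        A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
        print(np.round(A_hat, 2))

        # A graphical VAR would now test which entries of A_hat can be
        # constrained to zero, giving the directed edges of the path diagram.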

  3. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

    Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computational complexity. As an alternative approach, perceptual video coding (PVC) attempts to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. Previous JND models were built by adding white Gaussian noise or specific signal patterns into the original images, which is not appropriate for finding JND thresholds of distortions that reduce signal energy. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. The proposed ERJND model is then extended to two learning-based just-noticeable-quantization-distortion (JNQD) models that can be applied as preprocessing for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. The first, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD from extracted handcrafted features. The second, called CNN-JNQD, is based on a convolutional neural network (CNN). To the best of our knowledge, this paper presents the first approach to automatically adjust JND levels according to quantization step sizes when preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation compared with the input without preprocessing.

  4. Learning predictive models that use pattern discovery--a bootstrap evaluative approach applied in organ functioning sequences.

    PubMed

    Toma, Tudor; Bosman, Robert-Jan; Siebes, Arno; Peek, Niels; Abu-Hanna, Ameen

    2010-08-01

    An important problem in Intensive Care is how to predict, on a given day of stay, the eventual hospital mortality for a specific patient. A recent approach to this problem suggested the use of frequent temporal sequences (FTSs) as predictors. Methods following this approach were evaluated in the past by inducing a model from a training set and validating its prognostic performance on an independent test set. Although this evaluative approach addresses the validity of the specific models induced in an experiment, it falls short of evaluating the inductive method itself. To achieve this, one must account for the inherent sources of variation in the experimental design. The main aim of this work is to demonstrate a procedure based on bootstrapping, specifically the .632 bootstrap procedure, for evaluating inductive methods that discover patterns such as FTSs. A second aim is to apply this approach to find out whether a recently suggested inductive method that discovers FTSs of organ functioning status is superior to a traditional method that does not use temporal sequences, when compared on each successive day of stay at the Intensive Care Unit. The use of bootstrapping with logistic regression using pre-specified covariates is known in the statistical literature. Using inductive methods of prognostic modelling based on temporal sequence discovery within the bootstrap procedure is, however, novel, at least in predictive models in Intensive Care. Our results of applying the bootstrap-based evaluative procedure demonstrate the superiority of the FTS-based inductive method over the traditional method in terms of discrimination as well as accuracy. In addition, we illustrate the insights gained by the analyst into the FTSs discovered from the bootstrap samples.
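
    The .632 bootstrap estimator itself is simple to state: the error estimate is 0.368 times the apparent (training) error plus 0.632 times the average out-of-bag error over bootstrap resamples. A sketch with a generic classifier on synthetic data (not the FTS-based method of the paper) follows.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def bootstrap_632(X, y, n_boot=100, seed=0):
            """The .632 bootstrap error estimate for a classifier."""
            rng = np.random.default_rng(seed)
            n = len(y)
            app_err = 1 - LogisticRegression(max_iter=1000).fit(X, y).score(X, y)
            oob_errs = []
            for _ in range(n_boot):
                idx = rng.integers(0, n, n)              # sample with replacement
                oob = np.setdiff1d(np.arange(n), idx)    # out-of-bag cases
                if len(oob) == 0 or len(np.unique(y[idx])) < 2:
                    continue
                m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
                oob_errs.append(1 - m.score(X[oob], y[oob]))
            return 0.368 * app_err + 0.632 * np.mean(oob_errs)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(150, 4))
        y = (X[:, 0] + rng.normal(size=150) > 0).astype(int)
        print("err_.632 =", round(bootstrap_632(X, y), 3))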

  5. Final Technical Report: Distributed Controls for High Penetrations of Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Byrne, Raymond H.; Neely, Jason C.; Rashkin, Lee J.

    2015-12-01

    The goal of this effort was to apply four potential control analysis/design approaches to the design of distributed grid control systems to address the impact of latency and communications uncertainty with high penetrations of photovoltaic (PV) generation. The four techniques considered were: optimal fixed structure control; the Nyquist stability criterion; vector Lyapunov analysis; and Hamiltonian design methods. A reduced order model of the Western Electricity Coordinating Council (WECC) developed for the Matlab Power Systems Toolbox (PST) was employed for the study, as well as representative smaller systems (e.g., two-area, three-area, and four-area power systems). Excellent results were obtained with the optimal fixed structure approach, and the methodology we developed was published in a journal article. This approach is promising because it offers a method for designing optimal control systems with the feedback signals available from Phasor Measurement Unit (PMU) data, as opposed to full state feedback or the design of an observer. The Nyquist approach inherently handles time delay and incorporates performance guarantees (e.g., gain and phase margin). We developed a technique that works for moderately sized systems, but the approach does not scale well to extremely large systems because of computational complexity. The vector Lyapunov approach was applied to a two-area model to demonstrate its utility for modeling communications uncertainty. Application to large power systems requires a method to automatically expand/contract the state space and partition the system so that communications uncertainty can be considered. The Hamiltonian Surface Shaping and Power Flow Control (HSSPFC) design methodology was selected to investigate grid systems for energy storage requirements to support high penetration of variable or stochastic generation (such as wind and PV) and loads. This method was applied to several small system models.

  6. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    PubMed

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied when building the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflect the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approach. No significant differences were observed between any pair of approaches. Only small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power.
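
    A sketch of the modelling pipeline described, undersampling the majority class, training a gradient-boosting classifier, and scoring by AUC, is given below on synthetic stand-in data; it mirrors the procedure generically rather than reproducing the study's features or results.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 20000
        X = rng.normal(size=(n, 6))                           # stand-ins for check-up items
        p = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 3)))  # rare positive class
        y = rng.binomial(1, p)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # Undersample the majority class in the training set to a 1:1 ratio.
        pos = np.where(y_tr == 1)[0]
        neg = rng.choice(np.where(y_tr == 0)[0], size=len(pos), replace=False)
        idx = np.concatenate([pos, neg])

        clf = GradientBoostingClassifier().fit(X_tr[idx], y_tr[idx])
        auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
        print("test AUC:", round(auc, 3))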

  7. Comparing simple and complex approaches to simulate the impacts of soil water repellency on runoff and erosion in burnt Mediterranean forest slopes

    NASA Astrophysics Data System (ADS)

    Nunes, João Pedro; Catarina Simões Vieira, Diana; Keizer, Jan Jacob

    2017-04-01

    Fires impact soil hydrological properties, enhancing soil water repellency and therefore increasing the potential for surface runoff generation and soil erosion. In consequence, the successful application of hydrological models to post-fire conditions requires the appropriate simulation of the effects of soil water repellency on soil hydrology. This work compared three approaches to modelling soil water repellency impacts on soil hydrology in burnt eucalypt and pine forest slopes in central Portugal. 1) Daily approach: simulating repellency as a function of soil moisture, influencing the maximum soil available water holding capacity. It is based on the Thornthwaite-Mather soil water modelling approach and is parameterized with the soil's wilting point and field capacity, plus a parameter relating soil water repellency to water holding capacity. It was tested with soil moisture data from burnt and unburnt hillslopes. This approach was able to simulate post-fire soil moisture patterns, which the model without repellency was unable to do. However, model parameters differed between the burnt and unburnt slopes, indicating that more research is needed to derive standardized parameters from commonly measured soil and vegetation properties. 2) Seasonal approach: pre-determining repellency at the seasonal scale (3 months) in four classes (from none to extreme). It is based on the Morgan-Morgan-Finney (MMF) runoff and erosion model applied at the seasonal scale, and is parameterized with a parameter relating repellency class to field capacity. It was tested with runoff and erosion data from several experimental plots, and led to important improvements in runoff prediction over an approach with constant field capacity for all seasons (calibrated for repellency effects), but only slight improvements in erosion predictions. In contrast with the daily approach, the parameters could be reproduced between different sites. 3) Constant approach: specifying values for soil water repellency for the three years after the fire, and keeping them constant throughout the year. It is based on a daily Curve Number (CN) approach, was incorporated directly in the Soil and Water Assessment Tool (SWAT) model, and was tested with erosion data from a burnt hillslope. This approach was able to successfully reproduce soil erosion. The results indicate that simplified approaches can be used to adapt existing models for post-fire simulation, taking repellency into account. Taking the seasonality of repellency into account seems more important for simulating surface runoff than erosion, possibly because simulating the larger runoff rates correctly is sufficient for erosion simulation. The constant approach can be applied directly in the parameterization of existing runoff and erosion models for soil loss and sediment yield prediction, while the seasonal approach can readily be developed as a next step; further work is needed to assess whether the approaches and associated parameters can be applied in multiple post-fire environments.
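
    For the constant approach, the daily Curve Number runoff relation is standard and easy to state: Q = (P - Ia)^2 / (P - Ia + S) for rainfall P above the initial abstraction Ia = 0.2 S, with the potential retention S derived from CN. The sketch below applies it with assumed unburnt and burnt-repellent CN values (illustrative only).

        def cn_runoff(P_mm, CN):
            """Daily SCS Curve Number runoff (mm); standard initial abstraction 0.2*S."""
            S = 25400.0 / CN - 254.0          # potential retention (mm)
            Ia = 0.2 * S                      # initial abstraction
            if P_mm <= Ia:
                return 0.0
            return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

        # Hypothetical post-fire parameterization: water repellency represented by
        # raising CN, held constant through the year as in the constant approach.
        for CN in (70, 85):                   # unburnt vs burnt-repellent (assumed values)
            print(CN, round(cn_runoff(40.0, CN), 1), "mm runoff from a 40 mm storm")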

  8. Predicting relationship between magnetostriction and applied field of magnetostrictive composites

    NASA Astrophysics Data System (ADS)

    Guan, Xinchun; Dong, Xufeng; Ou, Jinping

    2008-03-01

    In consideration of the demagnetization effect, a model to calculate the magnetostriction of a single particle under an applied field is first built up. Then, treating the particle magnetostriction as an eigenstrain and building on the Eshelby equivalent inclusion and Mori-Tanaka methods, an approach to calculate the average magnetostriction of the composites under any applied field, as well as at saturation, is studied. Results calculated by the approach indicate that the saturation magnetostriction of magnetostrictive composites increases with increasing particle aspect ratio and particle volume fraction and with decreasing Young's modulus of the matrix, and that the influence of the applied field on the magnetostriction of the composites becomes more significant with larger particle volume fraction or particle aspect ratio.

  9. An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach

    NASA Astrophysics Data System (ADS)

    Chu, A.; Ogata, Y.; Katsura, K.

    2010-12-01

    For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) to global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and showed tremendous improvements in model fit compared with a single overall global model. While the ordinary ETAS model assumes constant parameter values across the entire region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a newly introduced approach that allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the maximum likelihood results with zone divisions to those obtained from an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.

  10. Positive feedback : exploring current approaches in iterative travel demand model implementation.

    DOT National Transportation Integrated Search

    2012-01-01

    Currently, the models that TxDOT's Transportation Planning and Programming Division (TPP) has developed are traditional three-step models (i.e., trip generation, trip distribution, and traffic assignment) that are sequentially applied. A limitation...

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Potash, Peter J.; Bell, Eric B.; Harrison, Joshua J.

    Predictive models for tweet deletion have been a relatively unexplored area of Twitter-related computational research. We first approach the deletion of tweets as a spam detection problem, applying a small set of handcrafted features to improve upon the current state-of-the-art in predicting deleted tweets. Next, we apply our approach to a dataset of deleted tweets that better reflects the current deletion rate. Since tweets are deleted for reasons beyond just the presence of spam, we apply topic modeling and text embeddings in order to capture the semantic content of tweets that can lead to tweet deletion. Our goal is to create an effective model that has a low-dimensional feature space and is also language-independent. A lean model would be computationally advantageous for processing high volumes of Twitter data, which can reach 9,885 tweets per second. Our results show that a small set of spam-related features combined with word topics and character-level text embeddings provides the best F1 score when trained with a random forest model. The highest precision for the deleted-tweet class is achieved by a modification of paragraph2vec to capture author identity.

  12. Continuous-time discrete-space models for animal movement

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.

    2015-01-01

    The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.

  13. Guidelines for applying the Composite Specification Model (CSM)

    NASA Technical Reports Server (NTRS)

    Agresti, William

    1987-01-01

    The Composite Specification Model (CSM) is an approach to representing software requirements. Guidelines are provided for applying CSM and developing each of the three descriptive views of the software: the contextual view, using entities and relationships; the dynamic view, using states and transitions; and the function view, using data flows and processes. Using CSM results in a software specification document, which is outlined.

  14. Continuum-Kinetic Models and Numerical Methods for Multiphase Applications

    NASA Astrophysics Data System (ADS)

    Nault, Isaac Michael

    This thesis presents a continuum-kinetic approach for modeling general problems in multiphase solid mechanics. In this context, a continuum model refers to any model, typically on the macro-scale, in which continuous state variables are used to capture the most important physics: conservation of mass, momentum, and energy. A kinetic model refers to any model, typically on the meso-scale, which captures the statistical motion and evolution of microscopic entities. Multiphase phenomena usually involve non-negligible micro- or meso-scopic effects at the interfaces between phases. The approach developed in the thesis attempts to combine the computational performance benefits of a continuum model with the physical accuracy of a kinetic model when applied to a multiphase problem. The approach is applied to modeling a single particle impact in Cold Spray, an engineering process that intimately involves the interaction of crystal grains with high-magnitude elastic waves. Such a situation could be classified as a multiphase application due to the discrete nature of grains on the spatial scale of the problem. For this application, a hyper-elastoplastic model is solved by a finite volume method with an approximate Riemann solver. The results of this model are compared for two types of plastic closure: a phenomenological macro-scale constitutive law, and a physics-based meso-scale crystal plasticity model.

  15. Can metric-based approaches really improve multi-model climate projections? A perfect model framework applied to summer temperature change in France.

    NASA Astrophysics Data System (ADS)

    Boé, Julien; Terray, Laurent

    2014-05-01

    Ensemble approaches for climate change projections have become ubiquitous. Because of large model-to-model variations and a general lack of rationale for choosing one particular climate model over others, it is widely accepted that future climate change and its impacts should not be estimated from a single climate model. Generally, as a default approach, the multi-model ensemble mean (MMEM) is considered to provide the best estimate of climate change signals. The MMEM approach is based on the implicit hypothesis that all the models provide equally credible projections of future climate change. This hypothesis is unlikely to be true, and ideally one would want to give more weight to more realistic models. A major issue with this alternative approach lies in assessing the relative credibility of future climate projections from different climate models, as they can only be evaluated against present-day observations: which present-day metric(s) should be used to decide which models are "good" and which models are "bad" in the future climate? Once a supposedly informative metric has been found, other issues arise. What is the best statistical method to combine multiple model results, taking into account their relative credibility as measured by a given metric? How can one be sure in the end that the metric-based estimate of future climate change is not in fact less realistic than the MMEM? It is impossible to provide strict answers to those questions in the climate change context. Yet, in this presentation, we propose a methodological approach based on a perfect model framework that could bring some useful elements of answer to the questions previously mentioned. The basic idea is to take a random climate model in the ensemble and treat it as if it were the truth (results of this model, in both past and future climate, are called "synthetic observations"). Then, all the other members of the multi-model ensemble are used to derive, through a metric-based approach, a posterior estimate of climate change based on the synthetic observation of the metric. Finally, it is possible to compare the posterior estimate to the synthetic observation of future climate change to evaluate the skill of the method. The main objective of this presentation is to describe and apply this perfect model framework to test different methodological issues associated with non-uniform model weighting and similar metric-based approaches. The methodology presented is general, but will be applied to the specific case of summer temperature change in France, for which previous works have suggested potentially useful metrics associated with soil-atmosphere and cloud-temperature interactions. The relative performances of different simple statistical approaches to combining multiple model results based on metrics will be tested. The impact of ensemble size, observational errors, internal variability, and model similarity will be characterized. The potential improvement of metric-based approaches over the MMEM in terms of errors and uncertainties will be quantified.
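
    The perfect-model loop is straightforward to prototype: hold one model out as the synthetic truth, weight the remaining models by the distance of their present-day metric from the synthetic observation, and compare the weighted projection against the MMEM. The sketch below does this on a synthetic ensemble with an assumed Gaussian weighting kernel; it illustrates the framework, not the study's configuration.

        import numpy as np

        rng = np.random.default_rng(3)
        n_models = 30
        # Synthetic ensemble: each model has a present-day metric value and a
        # future warming; the two are correlated, which is what makes the metric useful.
        ens = rng.multivariate_normal([0.0, 3.0], [[1.0, 0.6], [0.6, 1.0]], n_models)
        metric, warming = ens[:, 0], ens[:, 1]

        errors_w, errors_mmem = [], []
        for i in range(n_models):                     # treat model i as "truth"
            others = np.delete(np.arange(n_models), i)
            d = metric[others] - metric[i]            # metric distance to synthetic obs
            w = np.exp(-0.5 * (d / 0.5) ** 2)         # assumed Gaussian weighting kernel
            w /= w.sum()
            errors_w.append(abs(np.sum(w * warming[others]) - warming[i]))
            errors_mmem.append(abs(warming[others].mean() - warming[i]))

        print("metric-weighted MAE:", round(np.mean(errors_w), 2))
        print("MMEM MAE:           ", round(np.mean(errors_mmem), 2))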

  16. Game Theoretic Modeling of Water Resources Allocation Under Hydro-Climatic Uncertainty

    NASA Astrophysics Data System (ADS)

    Brown, C.; Lall, U.; Siegfried, T.

    2005-12-01

    Typical hydrologic and economic modeling approaches rely on assumptions of climate stationarity and economic conditions of ideal markets and rational decision-makers. In this study, we incorporate hydroclimatic variability with a game theoretic approach to simulate and evaluate common water allocation paradigms. Game theory may be particularly appropriate for modeling water allocation decisions. First, a game theoretic approach allows economic analysis in situations where price theory does not apply, which is typically the case in water resources, where markets are thin, players are few, and rules of exchange are highly constrained by legal or cultural traditions. Previous studies confirm that game theory is applicable to water resources decision problems, yet applications and modeling based on these principles are only rarely observed in the literature. Second, there are numerous existing theoretical and empirical studies of specific games and human behavior that may be applied in the development of predictive water allocation models. With this framework, one can evaluate alternative orderings and rules regarding the fraction of available water that one is allowed to appropriate. Specific attributes of the players involved in water resources management complicate the determination of solutions to game theory models. While an analytical approach will be useful for providing general insights, the variety of preference structures of individual players in a realistic water scenario will likely require a simulation approach. We propose a simulation approach incorporating the rationality, self-interest and equilibrium concepts of game theory with an agent-based modeling framework that allows the distinct properties of each player to be expressed and allows the performance of the system to manifest the integrative effect of these factors. Underlying this framework, we apply a realistic representation of spatio-temporal hydrologic variability and incorporate the impact of decisions made a priori to hydrologic realizations and those made a posteriori on alternative allocation mechanisms. Outcomes are evaluated in terms of water productivity, net social benefit and equity. The performance of hydro-climate prediction modeling in each allocation mechanism will be assessed. Finally, year-to-year system performance and feedback pathways are explored. In this way, the system can be adaptively managed toward equitable and efficient water use.

  17. Boolean network inference from time series data incorporating prior biological knowledge.

    PubMed

    Haider, Saad; Pal, Ranadip

    2012-01-01

    Numerous approaches exist for modeling genetic regulatory networks (GRNs), but the low sampling rates often employed in biological studies prevent the inference of detailed models from experimental data. In this paper, we analyze the issues involved in estimating a model of a GRN from single cell line time series data with limited time points. We present an inference approach for a Boolean Network (BN) model of a GRN from limited transcriptomic or proteomic time series data based on prior biological knowledge of connectivity, constraints on attractor structure, and robust design. We applied our inference approach to 6-time-point transcriptomic data on a Human Mammary Epithelial Cell line (HMEC) after application of Epidermal Growth Factor (EGF) and generated a BN with a plausible biological structure satisfying the data. We further defined and applied a similarity measure to compare synthetic BNs with BNs generated through the proposed approach constructed from transitions of various paths of the synthetic BNs. We have also compared the performance of our algorithm with two existing BN inference algorithms. Through theoretical analysis and simulations, we showed how rarely a BN with a plausible biological structure is arrived at from limited time series data when connectivity is random and the data lack structure. The framework, when applied to experimental data and to data generated from synthetic BNs, was able to estimate BNs with high similarity scores. Comparison with existing BN inference algorithms showed the better performance of our proposed algorithm for limited time series data. The proposed framework can also be applied to optimize the connectivity of a GRN from experimental data when the prior biological knowledge on regulators is limited or not unique.
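
    The attractor structure central to this approach can be made concrete with a toy example: the sketch below exhaustively iterates a hypothetical three-gene Boolean network (rules invented, not the HMEC network) and enumerates its attractors as the repeating cycles of the state-transition map.

        from itertools import product

        # A toy 3-gene Boolean network (hypothetical update rules):
        # x0' = x2;  x1' = x0 and not x2;  x2' = not x1
        def step(state):
            x0, x1, x2 = state
            return (x2, x0 and not x2, not x1)

        # Exhaustive attractor search over all 2^3 states.
        attractors = set()
        for s in product([False, True], repeat=3):
            seen = []
            while s not in seen:                 # iterate until a state repeats
                seen.append(s)
                s = step(s)
            cycle = tuple(seen[seen.index(s):])  # the repeating segment is an attractor
            # Store a rotation-invariant canonical form so each cycle counts once.
            attractors.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))

        print("number of attractors:", len(attractors))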

  18. Application of a number-conserving boson expansion theory to Ginocchio's SO(8) model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.h.; Pedrocchi, V.G.; Tamura, T.

    1986-05-01

    A boson expansion theory based on a number-conserving quasiparticle approach is applied to Ginocchio's SO(8) fermion model. Energy spectra and E2 transition rates calculated by using this new boson mapping are presented and compared against the exact fermion values. A comparison with other boson approaches is also given.

  19. Closed-form solution of the Ogden-Hill's compressible hyperelastic model for ramp loading

    NASA Astrophysics Data System (ADS)

    Berezvai, Szabolcs; Kossa, Attila

    2017-05-01

    This article deals with the visco-hyperelastic modelling approach for compressible polymer foam materials. Polymer foams can exhibit large elastic strains and displacements under volumetric compression. In addition, they often show significant rate-dependent properties. This material behaviour can be accurately modelled using the visco-hyperelastic approach, in which a large-strain viscoelastic description is combined with a rate-independent hyperelastic material model. For polymer foams, the most widely used compressible hyperelastic material model, the so-called Ogden-Hill model, was applied, as implemented in the commercial finite element (FE) software Abaqus. The visco-hyperelastic model is defined in hereditary integral form; therefore, obtaining a closed-form solution for the stress is not a trivial task. However, the parameter-fitting procedure can be much faster and more accurate if a closed-form solution exists. In this contribution, exact stress solutions are derived for the uniaxial, biaxial and volumetric compression loading cases under a ramp loading history. The analytical stress solutions are compared with the stress results obtained in Abaqus using FE analysis. In order to highlight the benefits of the analytical closed-form solution during the parameter-fitting process, experimental work has been carried out on a particular open-cell memory foam material. The results of the material identification process show a significant accuracy improvement in the fitting procedure when applying the derived analytical solutions compared to the so-called separated approach used in engineering practice.

  20. Taking the mystery out of mathematical model applications to karst aquifers—A primer

    USGS Publications Warehouse

    Kuniansky, Eve L.

    2014-01-01

    Advances in mathematical model applications toward the understanding of the complex flow, characterization, and water-supply management issues for karst aquifers have occurred in recent years. Different types of mathematical models can be applied successfully if appropriate information is available and the problems are adequately identified. The mathematical approaches discussed in this paper are divided into three major categories: 1) distributed parameter models, 2) lumped parameter models, and 3) fitting models. The modeling approaches are described conceptually with examples (but without equations) to help non-mathematicians understand the applications.

  1. Deformation-Aware Log-Linear Models

    NASA Astrophysics Data System (ADS)

    Gass, Tobias; Deselaers, Thomas; Ney, Hermann

    In this paper, we present a novel deformation-aware discriminative model for handwritten digit recognition. Unlike previous approaches our model directly considers image deformations and allows discriminative training of all parameters, including those accounting for non-linear transformations of the image. This is achieved by extending a log-linear framework to incorporate a latent deformation variable. The resulting model has an order of magnitude less parameters than competing approaches to handling image deformations. We tune and evaluate our approach on the USPS task and show its generalization capabilities by applying the tuned model to the MNIST task. We gain interesting insights and achieve highly competitive results on both tasks.

  2. Predicting Fish Densities in Lotic Systems: a Simple Modeling Approach

    EPA Science Inventory

    Fish density models are essential tools for fish ecologists and fisheries managers. However, applying these models can be difficult because of high levels of model complexity and the large number of parameters that must be estimated. We designed a simple fish density model and te...

  3. Latent spatial models and sampling design for landscape genetics

    USGS Publications Warehouse

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  4. Two controller design approaches for decentralized systems

    NASA Technical Reports Server (NTRS)

    Ozguner, U.; Khorrami, F.; Iftar, A.

    1988-01-01

    Two different philosophies for designing the controllers of decentralized systems are considered within a quadratic regulator framework which is generalized to admit decentralized frequency weighting. In the first approach, the total system model is examined, and the feedback strategy for each channel or subsystem is determined. In the second approach, separate, possibly overlapping, and uncoupled models are analyzed for each channel, and the results can be combined to study the original system. The two methods are applied to the example of a model of the NASA COFS Mast Flight System.

  5. A soil moisture accounting-procedure with a Richards' equation-based soil texture-dependent parameterization

    USDA-ARS?s Scientific Manuscript database

    Given a time series of potential evapotranspiration and rainfall data, there are at least two approaches for estimating vertical percolation rates. One approach involves solving Richards' equation (RE) with a plant uptake model. An alternative approach involves applying a simple soil moisture accoun...

  6. Health benefit modelling and optimization of vehicular pollution control strategies

    NASA Astrophysics Data System (ADS)

    Sonawane, Nayan V.; Patil, Rashmi S.; Sethi, Virendra

    2012-12-01

    This study asserts that the evaluation of pollution reduction strategies should be approached on the basis of health benefits. The framework presented could be used for decision making on the basis of cost effectiveness when the strategies are applied concurrently. Several vehicular pollution control strategies have been proposed in the literature for effective management of urban air pollution. The effectiveness of these strategies has mostly been studied one at a time, on the basis of change in pollution concentration. The adequacy and practicality of such an approach is examined in the present work, and the respective benefits of these strategies are assessed when they are implemented simultaneously. An integrated model has been developed which can be used as a tool for optimal prioritization of various pollution management strategies. The model estimates health benefits associated with specific control strategies. ISC-AERMOD View has been used to provide the cause-effect relation between control options and change in ambient air quality. BenMAP, developed by the U.S. EPA, has been applied for estimation of the health and economic benefits associated with various management strategies. Valuation of health benefits has been done for the impact indicators of premature mortality, hospital admissions and respiratory syndrome. An optimization model has been developed to maximize overall social benefits by determining optimized percentage implementations for multiple strategies. The model has been applied to the vehicular sector in a suburban region of Mumbai. Several control scenarios have been considered, such as revised emission standards and electric, CNG, LPG and hybrid vehicles. Reductions in concentration and the resultant health benefits for the pollutants CO, NOx and particulate matter are estimated for the different control scenarios. Finally, the optimization model has been applied to determine the optimized percentage implementation of specific control strategies that maximizes social benefits when these strategies are applied simultaneously.

  7. Hydrological modelling in forested systems

    EPA Pesticide Factsheets

    This chapter provides a brief overview of forest hydrology modelling approaches for answering important global research and management questions. Many hundreds of hydrological models have been applied globally across multiple decades to represent and predict forest hydrological processes. The focus of this chapter is on process-based models and approaches, specifically 'forest hydrology models'; that is, physically based simulation tools that quantify compartments of the forest hydrological cycle. Physically based models can be considered those that describe the conservation of mass, momentum and/or energy.

  8. Poisson, Poisson-gamma and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory.

    PubMed

    Lord, Dominique; Washington, Simon P; Ivan, John N

    2005-01-01

    There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions with each modeling approach, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states: perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of "excess" zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data. A simulation experiment is then conducted to demonstrate how crash data give rise to the "excess" zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed, and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not from an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
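
    The paper's central simulation point, that low exposure plus heterogeneity rather than a dual-state process produces apparent "excess" zeros, can be reproduced in a few lines: sites with small, gamma-distributed Poisson means show more zeros than a single Poisson fitted to the overall mean would imply. The values below are arbitrary.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(4)
        # Heterogeneous sites with low exposure: small, gamma-distributed means.
        mu = rng.gamma(shape=0.8, scale=0.5, size=10000)   # mean crashes per period
        counts = rng.poisson(mu)

        obs_zero = np.mean(counts == 0)
        pois_zero = poisson.pmf(0, counts.mean())          # zeros a single Poisson expects
        print(f"observed zeros: {obs_zero:.2f}, Poisson-implied: {pois_zero:.2f}")
        # The surplus arises from low means and heterogeneity, not a dual-state process.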

  9. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
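
    The sketching step can be illustrated on a toy linear inverse problem: a random Gaussian matrix compresses a large observation vector to a few hundred rows before solving the least-squares problem, with little loss of accuracy. This is a generic randomized-sketching sketch in Python, not the RGA/PCGA code (which, per the abstract, is written in Julia within MADS).

        import numpy as np

        rng = np.random.default_rng(5)
        n_obs, n_par, k = 20000, 50, 200         # many observations, few parameters
        H = rng.normal(size=(n_obs, n_par))      # linearized forward operator
        x_true = rng.normal(size=n_par)
        y = H @ x_true + 0.01 * rng.normal(size=n_obs)

        # Gaussian sketching matrix compresses the observations to dimension k.
        S = rng.normal(size=(k, n_obs)) / np.sqrt(k)
        x_hat = np.linalg.lstsq(S @ H, S @ y, rcond=None)[0]

        print("relative error:",
              np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))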

  10. On The Modeling of Educational Systems: II

    ERIC Educational Resources Information Center

    Grauer, Robert T.

    1975-01-01

    A unified approach to model building is developed from the separate techniques of regression, simulation, and factorial design. The methodology is applied in the context of a suburban school district. (Author/LS)

  11. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces a smaller mean absolute difference as well as a higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square errors of streamflow analysis and prediction. Large differences were found, in both the lumped and distributed modeling cases, between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.

  12. Constraining Unsaturated Hydraulic Parameters Using the Latin Hypercube Sampling Method and Coupled Hydrogeophysical Approach

    NASA Astrophysics Data System (ADS)

    Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.

    2017-12-01

    The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from the van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of the electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters, and (3) correct for the influence of subsurface temperature fluctuations on the electrical resistivity data during the infiltration experiment. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
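
    The sampling step can be sketched directly with a standard Latin hypercube routine: draw points in the unit hypercube and rescale them to plausible van Genuchten-Mualem parameter ranges (the ranges below are assumed for illustration); each sampled row would then drive one forward hydrological/geoelectrical simulation.

        import numpy as np
        from scipy.stats import qmc

        # Assumed plausible ranges for a coarse-textured soil (illustrative only).
        names = ["Ks (m/d)", "n (-)", "theta_r (-)", "alpha (1/m)"]
        lower = np.array([0.1, 1.2, 0.01, 0.5])
        upper = np.array([2.0, 2.8, 0.10, 5.0])

        sampler = qmc.LatinHypercube(d=4, seed=7)
        unit = sampler.random(n=500)                   # 500 samples in [0, 1)^4
        samples = qmc.scale(unit, lower, upper)        # rescale to parameter ranges

        # Each row parameterizes one forward run whose simulated resistivities
        # are then compared against the time-lapse sounding data.
        print(dict(zip(names, np.round(samples[0], 3))))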

  13. A Structural Modeling Approach to a Multilevel Random Coefficients Model.

    ERIC Educational Resources Information Center

    Rovine, Michael J.; Molenaar, Peter C. M.

    2000-01-01

    Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)

  14. A Novel Approach to Model the Air-Side Heat Transfer in Microchannel Condensers

    NASA Astrophysics Data System (ADS)

    Martínez-Ballester, S.; Corberán, José-M.; Gonzálvez-Maciá, J.

    2012-11-01

    This work presents a model (Fin1D×3) for microchannel condensers and gas coolers. The paper focuses on the description of the novel approach employed to model the air-side heat transfer. The model applies a segment-by-segment discretization to the heat exchanger, adding in each segment a specific two-dimensional grid for the air flow and the fin wall. Given this discretization, fin theory is applied by using a continuous piecewise function for the fin wall temperature. This makes it possible to implicitly take into account the heat conduction between tubes along the fin and the influence of unmixed air on the heat capacity. The model has been validated against experimental data, resulting in predicted capacity errors within ±5%. Differences in prediction results and computational cost were studied in comparison with the authors' previous model (Fin2D) and with another simplified model. The simulation time of the proposed model was reduced by one order of magnitude with respect to that of Fin2D while retaining the same accuracy.

  15. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice.

  16. Up-scaling of multi-variable flood loss models from objects to land use units at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Schröter, Kai; Merz, Bruno

    2016-05-01

    Flood risk management increasingly relies on risk analyses, including loss modelling. Most flood loss models applied in standard practice describe complex damaging processes with simple approaches such as stage-damage functions. Novel multi-variable models significantly improve loss estimation on the micro-scale and may also be advantageous for large-scale applications. However, more input parameters also introduce additional uncertainty, all the more so in up-scaling procedures for meso-scale applications, where the parameters must be estimated on a regional, area-wide basis. To gain more knowledge about the challenges associated with up-scaling multi-variable flood loss models, the following approach is applied: single- and multi-variable micro-scale flood loss models are up-scaled and applied on the meso-scale, namely on the basis of ATKIS land-use units. Application and validation are undertaken in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany, by comparison with official loss data provided by the Saxon Relief Bank (SAB). In this meso-scale, case-study-based model validation, most multi-variable models show smaller errors than the uni-variable stage-damage functions. The results demonstrate the suitability of the up-scaling approach and, in accordance with micro-scale validation studies, show that multi-variable models improve flood loss modelling on the meso-scale as well. However, uncertainties remain high, stressing the importance of uncertainty quantification. Thus, probabilistic loss models that inherently provide uncertainty information, such as the BT-FLEMO model used in this study, are the way forward.
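
    As a rough sketch of the up-scaling idea (the damage function and all figures below are illustrative inventions, not BT-FLEMO), a stage-damage function aggregated over a land-use unit might look like:

        import numpy as np

        def stage_damage(depth_m):
            """Illustrative square-root stage-damage function: relative loss in [0, 1]."""
            return np.clip(0.27 * np.sqrt(np.maximum(depth_m, 0.0)), 0.0, 1.0)

        # Hypothetical residential land-use unit: inundation depths (m) for the
        # cells within the unit, and a total asset value for the unit (EUR).
        depths = np.array([0.0, 0.4, 0.8, 1.5, 2.2])
        asset_value = 4.2e6

        # Meso-scale loss = asset value x area-averaged relative damage.
        unit_loss = asset_value * stage_damage(depths).mean()
        print(f"estimated direct loss: {unit_loss:,.0f} EUR")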

  17. Comparison of regression coefficient and GIS-based methodologies for regional estimates of forest soil carbon stocks.

    PubMed

    Campbell, J Elliott; Moen, Jeremie C; Ney, Richard A; Schnoor, Jerald L

    2008-03-01

    Estimates of forest soil organic carbon (SOC) have applications in carbon science, soil quality studies, carbon sequestration technologies, and carbon trading. Forest SOC has been modeled using a regression coefficient methodology that applies mean SOC densities (mass/area) to broad forest regions. A higher-resolution model is based on an approach that employs a geographic information system (GIS) with soil databases and satellite-derived landcover images. Despite this advancement, the regression approach remains the basis of current state- and federal-level greenhouse gas inventories. Both approaches are analyzed in detail for Wisconsin forest soils from 1983 to 2001, applying rigorous error-fixing algorithms to the soil databases. The resulting SOC stock estimates are 20% larger when determined using the GIS method rather than the regression approach. Average annual rates of increase in SOC stocks are 3.6 and 1.0 million metric tons of carbon per year for the GIS and regression approaches, respectively.
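
    The regression-coefficient methodology reduces to multiplying a mean SOC density by the forest area of each broad region; a hedged arithmetic sketch with invented numbers (not the Wisconsin values from the study):

        # Regression-coefficient approach: apply a mean SOC density (Mg C / ha)
        # to the total forest area of each broad region. Numbers are illustrative.
        regions = {
            # region: (mean SOC density in Mg C/ha, forest area in ha)
            "north": (95.0, 2.4e6),
            "central": (80.0, 1.1e6),
            "south": (70.0, 0.9e6),
        }

        total_stock = sum(density * area for density, area in regions.values())
        print(f"regression-approach SOC stock: {total_stock / 1e6:.1f} million Mg C")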

  18. Models for estimating runway landing capacity with Microwave Landing System (MLS)

    NASA Technical Reports Server (NTRS)

    Tosic, V.; Horonjeff, R.

    1975-01-01

    A model is developed which is capable of computing the ultimate runway landing capacity, under ILS and MLS conditions, when aircraft population characteristics and air traffic control separation rules are given. The model can be applied in situations where only horizontal separation between aircraft approaching a runway is allowed, as well as when both vertical and horizontal separations are possible. It is assumed that the system is free of errors, that is, that aircraft arrive at specified points along the prescribed flight path precisely when the controllers intend for them to arrive at those points. Although in the real world there is no such thing as an error-free system, the assumption is adequate for a qualitative comparison of MLS with ILS. Results suggest that an increase in runway landing capacity from introducing the MLS multiple approach paths is to be expected only when the aircraft population consists of aircraft with significantly differing approach speeds, and particularly in situations where vertical separation can be applied. Vertical separation can only be applied if one of the aircraft types in the mix has a very steep descent angle.

  19. Estimation and impact assessment of input and parameter uncertainty in predicting groundwater flow with a fully distributed model

    NASA Astrophysics Data System (ADS)

    Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke

    2017-04-01

    Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of their predictions is strongly influenced by different sources of uncertainty. Hence, researchers in the hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and for uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input is represented by multipliers. The posterior distributions of these multipliers and of the regular model parameters were estimated using DREAM. The methodology was applied to an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty has a considerable effect on the model predictions and parameter distributions. Additionally, the approach provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. We conclude that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
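
    As an illustrative stand-in for the described workflow (DREAM is a multi-chain differential-evolution MCMC; below, a single-chain random-walk Metropolis sampler and a toy response function replace DREAM and MODFLOW), the multiplier idea can be sketched as:

        import numpy as np

        rng = np.random.default_rng(1)

        def forward_model(theta):
            """Stand-in for a MODFLOW run: heads as a function of a recharge
            multiplier, a pumping multiplier, and a log-conductivity parameter."""
            r_mult, p_mult, logK = theta
            x = np.linspace(0.0, 1.0, 20)
            return 10.0 + 3.0 * r_mult * x - 2.0 * p_mult * x**2 - logK * x

        truth = np.array([1.2, 0.8, 0.5])
        obs = forward_model(truth) + rng.normal(0.0, 0.05, 20)

        def log_post(theta):
            if np.any(theta[:2] <= 0):          # multipliers must stay positive
                return -np.inf
            resid = obs - forward_model(theta)
            return -0.5 * np.sum(resid**2) / 0.05**2

        theta = np.array([1.0, 1.0, 0.0])
        lp = log_post(theta)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.02, 3)   # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance
                theta, lp = prop, lp_prop
            chain.append(theta.copy())

        print("posterior means:", np.mean(chain[10000:], axis=0))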

  20. Drug scheduling of cancer chemotherapy based on natural actor-critic approach.

    PubMed

    Ahn, Inkyung; Park, Jooyoung

    2011-11-01

    Recently, reinforcement learning methods have drawn significant interest in the area of artificial intelligence and have been successfully applied to various decision-making problems. In this paper, we study the applicability of the NAC (natural actor-critic) approach, a state-of-the-art reinforcement learning method, to the drug scheduling of cancer chemotherapy for an ODE (ordinary differential equation)-based tumor growth model. ODE-based cancer dynamics modeling is an active research area, and many different mathematical models have been proposed. Among these, we use the model proposed by de Pillis and Radunskaya (2003), which considers the growth of tumor cells and their interaction with normal cells and immune cells. The NAC approach is applied to this ODE model with the goal of minimizing the tumor cell population and the drug amount while maintaining adequate population levels of normal cells and immune cells. In the framework of the NAC approach, the drug dose is regarded as the control input, and the reward signal is defined as a function of the control input and the cell populations of tumor cells, normal cells, and immune cells. According to the control policy found by the NAC approach, effective drug scheduling in cancer chemotherapy for the considered scenarios turned out to be close to the strategy of continuing drug injection from the beginning until an appropriate time. Simulation results also showed that the NAC approach can yield better performance than conventional pulsed chemotherapy. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
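
    As a hedged sketch of the control setting only (a toy logistic tumor model with a log-kill drug term, not the de Pillis-Radunskaya system, and no actor-critic learning), evaluating "inject from the start until a stop time" policies might look like:

        import numpy as np
        from scipy.integrate import solve_ivp

        def dynamics(t, y, u_of_t):
            """Toy controlled tumor model: logistic tumor growth with a log-kill
            drug term and first-order drug clearance. Parameters are invented."""
            T, D = y                     # tumor burden, drug concentration
            u = u_of_t(t)                # drug infusion rate (control input)
            dT = 0.3 * T * (1 - T) - 0.8 * D * T
            dD = u - 0.5 * D
            return [dT, dD]

        def reward(stop_time, horizon=60.0):
            """Negative cost: penalize final tumor burden and total drug used."""
            u = lambda t: 0.4 if t < stop_time else 0.0
            sol = solve_ivp(dynamics, (0.0, horizon), [0.5, 0.0], args=(u,))
            T_final = sol.y[0, -1]
            drug_used = 0.4 * stop_time
            return -(10.0 * T_final + 0.1 * drug_used)

        for stop in (10.0, 25.0, 40.0):
            print(f"stop at t={stop:>4}: reward = {reward(stop):.2f}")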

  1. The Applied Behavior Analytic Heritage of PBS: A Dynamic Model of Action-Oriented Research

    ERIC Educational Resources Information Center

    Dunlap, Glen; Horner, Robert H., Ed.

    2006-01-01

    In the past two decades, positive behavior support (PBS) has emerged from applied behavior analysis (ABA) as a newly fashioned approach to problems of behavioral adaptation. ABA was established in the 1960s as a science in which learning principles are systematically applied to produce socially important changes in behavior, whereas PBS was…

  2. A Multimodal Approach to Counselor Supervision.

    ERIC Educational Resources Information Center

    Ponterotto, Joseph G.; Zander, Toni A.

    1984-01-01

    Represents an initial effort to apply Lazarus's multimodal approach to a model of counselor supervision. Includes continuously monitoring the trainee's behavior, affect, sensations, images, cognitions, interpersonal functioning, and when appropriate, biological functioning (diet and drugs) in the supervisory process. (LLL)

  3. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    NASA Astrophysics Data System (ADS)

    Shapiro, B.; Jin, Q.

    2015-12-01

    Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and the thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and the COBRA Toolbox. We applied the method to acetotrophic methanogenesis by Methanosarcina barkeri and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and it predicted the observations of previous experiments well. In comparison, traditional dynamic-FBA methods constrain acetate uptake on the basis of enzyme kinetics and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
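
    As a minimal sketch of the uptake-constraint idea (the functional form follows thermodynamically revised Monod rate laws of this kind; all parameter values below are illustrative assumptions, not the paper's):

        import numpy as np

        R = 8.314e-3   # kJ / (mol K)

        def revised_monod_rate(S, v_max=2.0, K_S=0.5, dG_r=-60.0, dG_p=45.0,
                               m=1.0, chi=2.0, T=298.15):
            """Respiration rate = kinetic (Monod) term x thermodynamic factor.
            dG_r: free energy of catabolism (kJ/mol); m*dG_p: energy conserved
            as ATP; chi: average stoichiometric number. Values illustrative."""
            kinetic = S / (K_S + S)
            thermodynamic = 1.0 - np.exp((dG_r + m * dG_p) / (chi * R * T))
            return v_max * kinetic * max(thermodynamic, 0.0)

        # The resulting uptake rate would be handed to FBA as an exchange-flux bound.
        for S in (0.05, 0.5, 5.0):
            print(f"acetate = {S} mM -> uptake bound = {revised_monod_rate(S):.3f}")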

  4. Improving Baseline Model Assumptions: Evaluating the Impacts of Typical Methodological Approaches in Watershed Models

    NASA Astrophysics Data System (ADS)

    Muenich, R. L.; Kalcic, M. M.; Teshager, A. D.; Long, C. M.; Wang, Y. C.; Scavia, D.

    2017-12-01

    Thanks to the availability of open-source software, online tutorials, and advanced software capabilities, watershed modeling has expanded its user base and applications significantly in the past thirty years. Even complicated models like the Soil and Water Assessment Tool (SWAT) are being used and documented in hundreds of peer-reviewed publications each year, and are likely applied even more widely in practice. These models can help improve our understanding of present, past, and future conditions, or analyze important "what-if" management scenarios. However, baseline data and methods are often adopted and applied without rigorous testing. In multiple collaborative projects, we have evaluated the influence of some of these common approaches on model results. Specifically, we examined the impacts of baseline data and assumptions involved in manure application, combined sewer overflows, and climate data incorporation across multiple watersheds in the Western Lake Erie Basin. In these efforts, we seek to understand the impact of using typical modeling data and assumptions versus improved data and enhanced assumptions on model outcomes, and thus ultimately on study conclusions. We provide guidance for modelers as they adopt and apply data and models for their specific study regions. While it is difficult to quantitatively assess the full uncertainty surrounding model input data and assumptions, recognizing the impacts of model input choices is important when considering actions at both the field and watershed scales.

  5. SELECTION AND CALIBRATION OF SUBSURFACE REACTIVE TRANSPORT MODELS USING A SURROGATE-MODEL APPROACH

    EPA Science Inventory

    While standard techniques for uncertainty analysis have been successfully applied to groundwater flow models, extension to reactive transport is frustrated by numerous difficulties, including excessive computational burden and parameter non-uniqueness. This research introduces a...

  6. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    NASA Technical Reports Server (NTRS)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

    This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free-boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic cross flow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
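
    As an illustration of the POD half of the methodology (synthetic snapshot data standing in for a CFD solver's output), an orthogonal basis can be extracted with an SVD:

        import numpy as np

        rng = np.random.default_rng(0)

        # Snapshot matrix: each column is the fluid state at one time instant.
        n_dof, n_snapshots = 2000, 120
        modes = rng.normal(size=(n_dof, 3))
        amplitudes = rng.normal(size=(3, n_snapshots))
        snapshots = modes @ amplitudes + 0.01 * rng.normal(size=(n_dof, n_snapshots))

        # POD basis = left singular vectors; singular values rank energy content.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
        basis = U[:, :r]

        # Project the full state to reduced modal amplitudes and reconstruct.
        a = basis.T @ snapshots
        err = np.linalg.norm(snapshots - basis @ a) / np.linalg.norm(snapshots)
        print(f"retained modes: {r}, relative reconstruction error: {err:.2e}")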

  7. How Polish Children Switch from One Case to Another when Using Novel Nouns: Challenges for Models of Inflectional Morphology

    ERIC Educational Resources Information Center

    Krajewski, Grzegorz; Theakston, Anna L.; Lieven, Elena V. M.; Tomasello, Michael

    2011-01-01

    The two main models of children's acquisition of inflectional morphology--the Dual-Mechanism approach and the usage-based (schema-based) approach--have both been applied mainly to languages with fairly simple morphological systems. Here we report two studies of 2-3-year-old Polish children's ability to generalise across case-inflectional endings…

  8. Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models

    ERIC Educational Resources Information Center

    Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George

    2012-01-01

    Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…

  9. An Activation-Based Model of Routine Sequence Errors

    DTIC Science & Technology

    2015-04-01

    part of the ACT-R framework (e.g., Anderson, 1983), we adopt a newer, richer notion of priming as part of our approach (Harrison & Trafton, 2010...2014). Other models of routine sequence errors, such as the interactive activation network (IAN) model (Cooper & Shallice, 2006) and the simple... error patterns that result from an interface layout shift. The ideas behind our expanded priming approach, however, could apply to IAN, which uses

  10. Approaches for modeling within subject variability in pharmacometric count data analysis: dynamic inter-occasion variability and stochastic differential equations.

    PubMed

    Deng, Chenhui; Plan, Elodie L; Karlsson, Mats O

    2016-06-01

    Parameter variation in pharmacometric analysis studies can be characterized as within subject parameter variability (WSV) in pharmacometric models. WSV has previously been successfully modeled using inter-occasion variability (IOV), but also stochastic differential equations (SDEs). In this study, two approaches, dynamic inter-occasion variability (dIOV) and adapted stochastic differential equations, were proposed to investigate WSV in pharmacometric count data analysis. These approaches were applied to published count models for seizure counts and Likert pain scores. Both approaches improved the model fits significantly. In addition, stochastic simulation and estimation were used to explore further the capability of the two approaches to diagnose and improve models where existing WSV is not recognized. The results of simulations confirmed the gain in introducing WSV as dIOV and SDEs when parameters vary randomly over time. Further, the approaches were also informative as diagnostics of model misspecification, when parameters changed systematically over time but this was not recognized in the structural model. The proposed approaches in this study offer strategies to characterize WSV and are not restricted to count data.

  11. An individual-based approach to SIR epidemics in contact networks.

    PubMed

    Youssef, Mina; Scoglio, Caterina

    2011-08-21

    Many approaches have recently been proposed to model the spread of epidemics on networks. For instance, the Susceptible/Infected/Recovered (SIR) compartmental model has successfully been applied to different types of diseases that spread among humans and animals. When this model is applied on a contact network, the centrality characteristics of the network play an important role in the spreading process. However, current approaches only consider an aggregate representation of the network structure, which can result in inaccurate analysis. In this paper, we propose a new individual-based SIR approach, which considers the whole description of the network structure. The individual-based approach is built on a continuous-time Markov chain, and it is capable of evaluating the state probability for every individual in the network. Through mathematical analysis, we rigorously confirm the existence of an epidemic threshold below which an epidemic does not propagate in the network. We also show that the epidemic threshold is inversely proportional to the maximum eigenvalue of the network. Additionally, we study the role of the whole spectrum of the network, and determine the relationship between the maximum number of infected individuals and the set of eigenvalues and eigenvectors. To validate our approach, we analytically study the deviation with respect to the continuous-time Markov chain model, and we show that the new approach is accurate for a large range of infection strengths. Furthermore, we compare the new approach with the well-known heterogeneous mean field approach in the literature. Ultimately, we support our theoretical results through extensive numerical evaluations and Monte Carlo simulations. Published by Elsevier Ltd.
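
    A minimal sketch of the spectral result (a random graph stands in for a real contact network): the threshold is computed from the largest adjacency eigenvalue:

        import networkx as nx
        import numpy as np

        # Contact network and its adjacency spectrum.
        G = nx.erdos_renyi_graph(n=500, p=0.02, seed=7)
        A = nx.to_numpy_array(G)
        lambda_max = np.linalg.eigvalsh(A)[-1]   # largest eigenvalue (A symmetric)

        # Per the paper's result, the epidemic threshold for the effective
        # infection strength is inversely proportional to lambda_max.
        tau_c = 1.0 / lambda_max
        print(f"lambda_max = {lambda_max:.2f}, epidemic threshold tau_c = {tau_c:.4f}")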

  12. First-Principles Approach to Model Electrochemical Reactions: Understanding the Fundamental Mechanisms behind Mg Corrosion

    NASA Astrophysics Data System (ADS)

    Surendralal, Sudarsan; Todorova, Mira; Finnis, Michael W.; Neugebauer, Jörg

    2018-06-01

    Combining concepts of semiconductor physics and corrosion science, we develop a novel approach that allows us to perform ab initio calculations under controlled potentiostat conditions for electrochemical systems. The proposed approach can be straightforwardly applied in standard density functional theory codes. To demonstrate the performance and the opportunities opened by this approach, we study the chemical reactions that take place during initial corrosion at the water-Mg interface under anodic polarization. Based on this insight, we derive an atomistic model that explains the origin of the anodic hydrogen evolution.

  13. A General Interface Method for Aeroelastic Analysis of Aircraft

    NASA Technical Reports Server (NTRS)

    Tzong, T.; Chen, H. H.; Chang, K. C.; Wu, T.; Cebeci, T.

    1996-01-01

    The aeroelastic analysis of an aircraft requires an accurate and efficient procedure to couple aerodynamics and structures. The procedure needs an interface method to bridge the gap between the aerodynamic and structural models in order to transform loads and displacements. Such an interface method is described in this report. The interface method transforms loads computed by any aerodynamic code to a structural finite element (FE) model and converts the displacements from the FE model to the aerodynamic model. The approach is based on FE technology in which virtual work is employed to transform the aerodynamic pressures into FE nodal forces. The displacements at the FE nodes are then converted back to aerodynamic grid points on the aircraft surface through the reciprocal theorem of structural engineering. The method accommodates both high and crude fidelities of both models and does not require intermediate modeling. In addition, the method performs the conversion of loads and displacements directly between each individual aerodynamic grid point and its corresponding structural finite element and, hence, is very efficient for large aircraft models. This report also describes the application of the aero-structure interface method to a simple wing and an MD-90 wing. The results show that the aeroelastic effect is very important. For the simple wing, both linear and nonlinear approaches are used. In the linear approach, the deformation of the structural model is considered small, and the loads from the deformed aerodynamic model are applied to the original geometry of the structure. In the nonlinear approach, the geometry of the structure and its stiffness matrix are updated in every iteration, and the increments of loads from the previous iteration are applied to the new structural geometry in order to compute the displacement increments. Additional studies applying the aero-structure interaction procedure to more complicated geometry will be conducted in the second phase of the present contract.

  14. Enhanced semantic interoperability by profiling health informatics standards.

    PubMed

    López, Diego M; Blobel, Bernd

    2009-01-01

    Several standards applied to the healthcare domain support semantic interoperability. These standards are far from being completely adopted in health information system development, however. The objective of this paper is to provide a method, and suggest the necessary tooling, for reusing standard health information models, thereby supporting the development of semantically interoperable systems and components. The approach is based on the definition of UML Profiles. UML profiling is a formal modeling mechanism to specialize reference meta-models in such a way that it is possible to adapt those meta-models to specific platforms or domains. A health information model can be considered such a meta-model. The first step of the introduced method identifies the standard health information models and the tasks in the software development process in which healthcare information models can be reused. Then, the selected information model is formalized as a UML Profile. That Profile is finally applied to system models, annotating them with the semantics of the information model. The approach is supported by Eclipse-based UML modeling tools. The method is integrated into a comprehensive framework for health information systems development, and the feasibility of the approach is demonstrated in the analysis, design, and implementation of a public health surveillance system, reusing HL7 RIM and DIM specifications. The paper describes a method and the necessary tooling for reusing standard healthcare information models. UML offers several advantages such as tooling support, graphical notation, exchangeability, extensibility, and semi-automatic code generation. The approach presented is also applicable for harmonizing different standard specifications.

  15. Importance of Personalized Health-Care Models: A Case Study in Activity Recognition.

    PubMed

    Zdravevski, Eftim; Lameski, Petre; Trajkovik, Vladimir; Pombo, Nuno; Garcia, Nuno

    2018-01-01

    Novel information and communication technologies create possibilities to change the future of health care. Ambient Assisted Living (AAL) is seen as a promising supplement to current care models. The main goal of AAL solutions is to apply ambient intelligence technologies to enable elderly people to continue living in their preferred environments. Applying trained models from health data is challenging because personalized environments can differ significantly from the ones that provided the training data. This paper investigates the effects on activity recognition accuracy, using a single accelerometer, of personalized models compared to models built on the general population. In addition, we propose a collaborative-filtering-based approach which provides a balance between fully personalized models and generic models. The results show that accuracy can be improved to 95% with fully personalized models, and up to 91.6% with collaborative-filtering-based models, which is significantly better than common models that exhibit an accuracy of 85.1%. The collaborative filtering approach seems to provide highly personalized models with substantial accuracy, while overcoming the cold-start problem that is common for fully personalized models.

  16. Likelihood-based gene annotations for gap filling and quality assessment in genome-scale metabolic models

    DOE PAGES

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; ...

    2014-10-16

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface.

  17. Likelihood-Based Gene Annotations for Gap Filling and Quality Assessment in Genome-Scale Metabolic Models

    PubMed Central

    Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.

    2014-01-01

    Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157

  18. Mixture Modeling: Applications in Educational Psychology

    ERIC Educational Resources Information Center

    Harring, Jeffrey R.; Hodis, Flaviu A.

    2016-01-01

    Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…

  19. Maximizing Exposure Therapy: An Inhibitory Learning Approach

    PubMed Central

    Craske, Michelle G.; Treanor, Michael; Conway, Chris; Zbozinek, Tomislav; Vervliet, Bram

    2014-01-01

    Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a ‘fear habituation’ approach and ‘belief disconfirmation’ approach within standard cognitive-behavior therapy. Exposure optimization strategies include 1) expectancy violation, 2) deepened extinction, 3) occasional reinforced extinction, 4) removal of safety signals, 5) variability, 6) retrieval cues, 7) multiple contexts, and 8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder. PMID:24864005

  20. An optimized data fusion method and its application to improve lateral boundary conditions in winter for Pearl River Delta regional PM2.5 modeling, China

    NASA Astrophysics Data System (ADS)

    Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran

    2018-05-01

    Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves on it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. As a case study, the optimized approach was applied to correct the LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance: correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
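
    As a hedged 1-D stand-in for the bias-kriging step (simple kriging under an assumed zero-mean bias field with a Gaussian covariance; all data synthetic, not the paper's method in detail), the correction might be sketched as:

        import numpy as np

        def gaussian_cov(h, sill=1.0, length=30.0):
            """Illustrative Gaussian covariance model for the bias field."""
            return sill * np.exp(-(h / length) ** 2)

        rng = np.random.default_rng(3)
        stations = np.sort(rng.uniform(0.0, 200.0, 12))  # monitor locations (km)
        bias_obs = np.sin(stations / 40.0) + 0.1 * rng.normal(size=12)  # model - obs

        grid = np.linspace(0.0, 200.0, 101)

        # Simple kriging: weights solve C w = c0 at every grid node.
        C = gaussian_cov(np.abs(stations[:, None] - stations[None, :]))
        C += 1e-6 * np.eye(len(stations))                # numerical stabilizer
        c0 = gaussian_cov(np.abs(stations[:, None] - grid[None, :]))
        weights = np.linalg.solve(C, c0)                 # (n_stations, n_grid)
        bias_field = weights.T @ bias_obs

        model_pm25 = 35.0 + 0.05 * grid                  # hypothetical gridded model
        fused = model_pm25 - bias_field                  # bias-corrected field
        print(fused[:5].round(2))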

  1. NAPR: a Cloud-Based Framework for Neuroanatomical Age Prediction.

    PubMed

    Pardoe, Heath R; Kuzniecky, Ruben

    2018-01-01

    The availability of cloud computing services has enabled the widespread adoption of the "software as a service" (SaaS) approach for software distribution, which utilizes network-based access to applications running on centralized servers. In this paper we apply the SaaS approach to neuroimaging-based age prediction. Our system, named "NAPR" (Neuroanatomical Age Prediction using R), provides access to predictive modeling software running on a persistent cloud-based Amazon Web Services (AWS) compute instance. The NAPR framework allows external users to estimate the age of individual subjects using cortical thickness maps derived from their own locally processed T1-weighted whole brain MRI scans. As a demonstration of the NAPR approach, we have developed two age prediction models that were trained using healthy control data from the ABIDE, CoRR, DLBS and NKI Rockland neuroimaging datasets (total N = 2367, age range 6-89 years). The provided age prediction models were trained using (i) relevance vector machines and (ii) Gaussian processes machine learning methods applied to cortical thickness surfaces obtained using Freesurfer v5.3. We believe that this transparent approach to out-of-sample evaluation and comparison of neuroimaging age prediction models will facilitate the development of improved age prediction models and allow for robust evaluation of the clinical utility of these methods.
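
    As an illustration of the second modeling route the record names (Gaussian process regression; synthetic features standing in for Freesurfer cortical-thickness maps), a sketch with scikit-learn:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)

        # Synthetic stand-in for cortical-thickness summary features and ages.
        X = rng.normal(size=(300, 10))
        age = 45 + 12 * X[:, 0] - 8 * X[:, 1] + rng.normal(0, 3, 300)

        gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
        gp.fit(X[:250], age[:250])

        pred, sd = gp.predict(X[250:], return_std=True)  # predictive mean and sd
        mae = np.mean(np.abs(pred - age[250:]))
        print(f"held-out MAE: {mae:.1f} years")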

  2. Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport

    NASA Astrophysics Data System (ADS)

    Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.

    2016-12-01

    Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications of, or benchmarking among, the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wykes, M., E-mail: mikewykes@gmail.com; Parambil, R.; Gierschner, J.

    Here, we present a general approach to treating vibronic coupling in molecular crystals based on atomistic simulations of large clusters. Such clusters comprise model aggregates treated at the quantum chemical level embedded within a realistic environment treated at the molecular mechanics level. As we calculate ground and excited state equilibrium geometries and vibrational modes of model aggregates, our approach is able to capture effects arising from coupling to intermolecular degrees of freedom, absent from existing models relying on geometries and normal modes of single molecules. Using the geometries and vibrational modes of clusters, we are able to simulate the fluorescence spectra of aggregates for which the lowest excited state bears negligible oscillator strength (as is the case for, e.g., ideal H-aggregates) by including both Franck-Condon (FC) and Herzberg-Teller (HT) vibronic transitions. The latter terms allow the adiabatic excited state of the cluster to couple with vibrations in a perturbative fashion via derivatives of the transition dipole moment along nuclear coordinates. While vibronic coupling simulations employing FC and HT terms are well established for single molecules, to our knowledge this is the first time they are applied to molecular aggregates. Here, we apply this approach to the simulation of the low-temperature fluorescence spectrum of para-distyrylbenzene single-crystal H-aggregates and draw comparisons with coarse-grained Frenkel-Holstein approaches previously extensively applied to such systems.

  4. Congruence analysis of geodetic networks - hypothesis tests versus model selection by information criteria

    NASA Astrophysics Data System (ADS)

    Lehmann, Rüdiger; Lösler, Michael

    2017-12-01

    Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution to the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that both work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as the AIC.
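
    A minimal sketch of the information-criterion route (polynomial trends standing in for deformation patterns; data synthetic): fit each candidate model by least squares and select the one with the minimum AIC:

        import numpy as np

        rng = np.random.default_rng(2)
        x = np.linspace(0.0, 1.0, 40)
        obs = 2.0 + 1.5 * x + rng.normal(0.0, 0.1, x.size)   # synthetic observations

        def aic(degree):
            """AIC for a polynomial 'deformation pattern' fitted by least squares
            (Gaussian errors): n*ln(RSS/n) + 2k, with k model coefficients."""
            coeffs = np.polyfit(x, obs, degree)
            rss = np.sum((obs - np.polyval(coeffs, x)) ** 2)
            k = degree + 1
            return x.size * np.log(rss / x.size) + 2 * k

        scores = {d: aic(d) for d in range(4)}
        best = min(scores, key=scores.get)
        print(scores, "-> selected degree:", best)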

  5. Team Approach to Staffing the Reference Center: A Speculation.

    ERIC Educational Resources Information Center

    Lawson, Mollie D.; And Others

    This document applies theories of participatory management to a proposal for a model that uses a team approach to staffing university library reference centers. In particular, the Ward Edwards Library at Central Missouri State University is examined in terms of the advantages and disadvantages of its current approach. Special attention is given to…

  6. Hedonic approaches based on spatial econometrics and spatial statistics: application to evaluation of project benefits

    NASA Astrophysics Data System (ADS)

    Tsutsumi, Morito; Seya, Hajime

    2009-12-01

    This study discusses the theoretical foundation of the application of spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach, since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are fitted under both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.

  7. NASA Satellite Data for Seagrass Health Modeling and Monitoring

    NASA Technical Reports Server (NTRS)

    Spiering, Bruce A.; Underwood, Lauren; Ross, Kenton

    2011-01-01

    Time-series-derived information for coastal waters will be used to provide input data for the Fong and Harwell model. The current MODIS land mask limits where the model can be applied; this project will: a) apply MODIS data with resolution higher than the standard products (250-m vs. 1-km); b) seek to refine the land mask; c) explore nearby areas to use as proxies for time series directly over the beds. Novel processing approaches will be leveraged from other NASA projects and customized as inputs for seagrass productivity modeling.

  8. Human factors systems approach to healthcare quality and patient safety

    PubMed Central

    Carayon, Pascale; Wetterneck, Tosha B.; Rivera-Rodriguez, A. Joy; Hundt, Ann Schoofs; Hoonakker, Peter; Holden, Richard; Gurses, Ayse P.

    2013-01-01

    Human factors systems approaches are critical for improving healthcare quality and patient safety. The SEIPS (Systems Engineering Initiative for Patient Safety) model of work system and patient safety is a human factors systems approach that has been successfully applied in healthcare research and practice. Several research and practical applications of the SEIPS model are described. Important implications of the SEIPS model for healthcare system and process redesign are highlighted. Principles for redesigning healthcare systems using the SEIPS model are described. Balancing the work system and encouraging the active and adaptive role of workers are key principles for improving healthcare quality and patient safety. PMID:23845724

  9. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1992-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems that are substantially easier to develop, exploit sophisticated forms of cooperative computation, and achieve high reliability. Six years of research on ISIS are reviewed, describing the model, its implementation challenges, and the types of applications to which ISIS has been applied.

  10. Writing and reading: connections between language by hand and language by eye.

    PubMed

    Berninger, Virginia W; Abbott, Robert D; Abbott, Sylvia P; Graham, Steve; Richards, Todd

    2002-01-01

    Four approaches to the investigation of connections between language by hand and language by eye are described and illustrated with studies from a decade-long research program. In the first approach, multigroup structural equation modeling is applied to reading and writing measures given to typically developing writers to examine unidirectional and bidirectional relationships between specific components of the reading and writing systems. In the second approach, structural equation modeling is applied to a multivariate set of language measures given to children and adults with reading and writing disabilities to examine how the same set of language processes is orchestrated differently to accomplish specific reading or writing goals, and correlations between factors are evaluated to examine the level at which the language-by-hand system and the language-by-eye system communicate most easily. In the third approach, mode of instruction and mode of response are systematically varied in evaluating effectiveness of treating reading disability with and without a writing component. In the fourth approach, functional brain imaging is used to investigate residual spelling problems in students whose problems with word decoding have been remediated. The four approaches support a model in which language by hand and language by eye are separate systems that interact in predictable ways.

  11. Dynamic, stochastic models for congestion pricing and congestion securities.

    DOT National Transportation Integrated Search

    2010-12-01

    This research considers congestion pricing under demand uncertainty. In particular, a robust optimization (RO) approach is applied to optimal congestion pricing problems under user equilibrium. A mathematical model is developed and an analysis perfor...

  12. A Feasibility Study of Synthesizing Substructures Modeled with Computational Neural Networks

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Housner, Jerrold M.; Szewczyk, Z. Peter

    1998-01-01

    This paper investigates the feasibility of synthesizing substructures modeled with computational neural networks. Substructures are modeled individually with computational neural networks and the response of the assembled structure is predicted by synthesizing the neural networks. A superposition approach is applied to synthesize models for statically determinate substructures while an interface displacement collocation approach is used to synthesize statically indeterminate substructure models. Beam and plate substructures along with components of a complicated Next Generation Space Telescope (NGST) model are used in this feasibility study. In this paper, the limitations and difficulties of synthesizing substructures modeled with neural networks are also discussed.

  13. Automation of Endmember Pixel Selection in SEBAL/METRIC Model

    NASA Astrophysics Data System (ADS)

    Bhattarai, N.; Quackenbush, L. J.; Im, J.; Shaw, S. B.

    2015-12-01

    The commonly applied surface energy balance for land (SEBAL) and its variant, mapping evapotranspiration (ET) at high resolution with internalized calibration (METRIC) models require manual selection of endmember (i.e. hot and cold) pixels to calibrate sensible heat flux. Current approaches for automating this process are based on statistical methods and do not appear to be robust under varying climate conditions and seasons. In this paper, we introduce a new approach based on simple machine learning tools and search algorithms that provides an automatic and time efficient way of identifying endmember pixels for use in these models. The fully automated models were applied on over 100 cloud-free Landsat images with each image covering several eddy covariance flux sites in Florida and Oklahoma. Observed land surface temperatures at automatically identified hot and cold pixels were within 0.5% of those from pixels manually identified by an experienced operator (coefficient of determination, R2, ≥ 0.92, Nash-Sutcliffe efficiency, NSE, ≥ 0.92, and root mean squared error, RMSE, ≤ 1.67 K). Daily ET estimates derived from the automated SEBAL and METRIC models were in good agreement with their manual counterparts (e.g., NSE ≥ 0.91 and RMSE ≤ 0.35 mm day-1). Automated and manual pixel selection resulted in similar estimates of observed ET across all sites. The proposed approach should reduce time demands for applying SEBAL/METRIC models and allow for their more widespread and frequent use. This automation can also reduce potential bias that could be introduced by an inexperienced operator and extend the domain of the models to new users.
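
    As a simplified stand-in for the described automation (a percentile rule, not the paper's machine-learning search; rasters synthetic), candidate endmember pixels might be screened as:

        import numpy as np

        rng = np.random.default_rng(4)
        ndvi = rng.uniform(0.05, 0.85, size=(400, 400))          # stand-in rasters
        lst = 320.0 - 25.0 * ndvi + rng.normal(0.0, 1.0, ndvi.shape)

        # Candidate cold pixels: dense vegetation, lowest surface temperatures.
        veg = ndvi > np.percentile(ndvi, 95)
        cold_lst = np.percentile(lst[veg], 5)

        # Candidate hot pixels: sparse vegetation, highest surface temperatures.
        bare = ndvi < np.percentile(ndvi, 5)
        hot_lst = np.percentile(lst[bare], 95)

        print(f"cold-pixel LST ~ {cold_lst:.1f} K, hot-pixel LST ~ {hot_lst:.1f} K")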

  14. Local regression type methods applied to the study of geophysics and high frequency financial data

    NASA Astrophysics Data System (ADS)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is preferable to the Loess method. Previous works focused on time series analysis; in this paper our local regression models perform a spatial analysis of the geophysics data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data are dependent on time.
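
    A minimal sketch of the smoothing technique itself (synthetic data; statsmodels' lowess implementation, not the authors' code):

        import numpy as np
        from statsmodels.nonparametric.smoothers_lowess import lowess

        rng = np.random.default_rng(6)
        x = np.linspace(0.0, 10.0, 300)                  # e.g., spatial coordinate
        y = np.sin(x) + rng.normal(0.0, 0.3, x.size)     # noisy observations

        # frac controls the local window: smaller -> more local, less smoothing.
        smoothed = lowess(y, x, frac=0.15)               # returns sorted (x, yhat) pairs
        print(smoothed[:3])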

  15. Dynamics and control of quadcopter using linear model predictive control approach

    NASA Astrophysics Data System (ADS)

    Islam, M.; Okasha, M.; Idres, M. M.

    2017-12-01

    This paper investigates the dynamics and control of a quadcopter using the Model Predictive Control (MPC) approach. The dynamic model is of high fidelity and nonlinear, with six degrees of freedom, and includes disturbances and model uncertainties. The control approach is developed based on MPC to track different reference trajectories, ranging from simple circular ones to complex helical trajectories. In this control technique, a linearized model is derived and the receding-horizon method is applied to generate the optimal control sequence. Although MPC is computationally expensive, it is highly effective at dealing with different types of nonlinearities and constraints, such as actuator saturation and model uncertainties. The MPC parameters (control and prediction horizons) are selected by a trial-and-error approach. Several simulation scenarios are performed to examine and evaluate the performance of the proposed control approach in the MATLAB and Simulink environment. Simulation results show that this control approach is highly effective at tracking a given reference trajectory.
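
    As a hedged sketch of linear receding-horizon MPC (a double integrator standing in for one translational axis of the linearized quadcopter; weights and limits invented), one horizon's optimization with cvxpy:

        import cvxpy as cp
        import numpy as np

        # Discrete double integrator: state = [position, velocity], input = accel.
        dt = 0.1
        A = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([[0.5 * dt**2], [dt]])

        N = 20                                   # prediction horizon
        x = cp.Variable((2, N + 1))
        u = cp.Variable((1, N))
        x0 = np.array([2.0, 0.0])                # start 2 m from the reference

        cost = 0
        constraints = [x[:, 0] == x0]
        for k in range(N):
            cost += cp.sum_squares(x[:, k + 1]) + 0.1 * cp.sum_squares(u[:, k])
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.abs(u[:, k]) <= 3.0]      # actuator saturation

        cp.Problem(cp.Minimize(cost), constraints).solve()
        # Receding horizon: apply only the first input, then re-solve next step.
        print("first optimal input:", float(u.value[0, 0]))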

  16. Hourly runoff forecasting for flood risk management: Application of various computational intelligence models

    NASA Astrophysics Data System (ADS)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2015-10-01

    Reliable river flow forecasts play a key role in flood risk mitigation. Among different approaches to river flow forecasting, data-driven approaches have become increasingly popular in recent years due to their minimal information requirements and their ability to simulate the nonlinear and non-stationary characteristics of hydrological processes. In this study, attempts are made to apply four different types of data-driven approaches, namely traditional artificial neural networks (ANN), adaptive neuro-fuzzy inference systems (ANFIS), wavelet neural networks (WNN), and hybrid ANFIS with multi-resolution analysis using wavelets (WNF). The developed models were applied to real-time flood forecasting at the Casino station on the Richmond River, Australia, which is highly prone to flooding. Hourly rainfall and runoff data were used to drive the models, which were used for forecasting with 1, 6, 12, 24, 36 and 48 h lead times. Model performance was further improved by adding upstream river flow data (Wiangaree station) as another effective input. All models perform satisfactorily up to a 12 h lead time. However, the hybrid wavelet-based models significantly outperform the ANFIS and ANN models at longer lead times. The results confirm the robustness of the proposed structure of the hybrid models for real-time runoff forecasting in the study area.

  17. Design and analysis of simple choice surveys for natural resource management

    USGS Publications Warehouse

    Fieberg, John; Cornicelli, Louis; Fulton, David C.; Grund, Marrett D.

    2010-01-01

    We used a simple yet powerful method for judging public support for management actions from randomized surveys. We asked respondents to rank choices (representing management regulations under consideration) according to their preference, and we then used discrete choice models to estimate probability of choosing among options (conditional on the set of options presented to respondents). Because choices may share similar unmodeled characteristics, the multinomial logit model, commonly applied to discrete choice data, may not be appropriate. We introduced the nested logit model, which offers a simple approach for incorporating correlation among choices. This forced choice survey approach provides a useful method of gathering public input; it is relatively easy to apply in practice, and the data are likely to be more informative than asking constituents to rate attractiveness of each option separately.
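
    As an illustration of the nested logit structure the record describes (utilities and nest assignments below are invented), choice probabilities can be computed as:

        import numpy as np

        def nested_logit_probs(V, nests, lambdas):
            """Choice probabilities for a two-level nested logit.
            V: utilities per alternative; nests: list of index arrays;
            lambdas: dissimilarity parameter per nest (1.0 -> plain MNL)."""
            probs = np.zeros_like(V, dtype=float)
            # Inclusive value of each nest: IV_m = ln sum_j exp(V_j / lambda_m)
            iv = np.array([np.log(np.sum(np.exp(V[idx] / lam)))
                           for idx, lam in zip(nests, lambdas)])
            nest_p = np.exp(lambdas * iv) / np.sum(np.exp(lambdas * iv))
            for idx, lam, pm in zip(nests, lambdas, nest_p):
                within = np.exp(V[idx] / lam) / np.sum(np.exp(V[idx] / lam))
                probs[idx] = pm * within
            return probs

        # Hypothetical survey: four regulation options, with options 0-1 and
        # 2-3 sharing unmodeled similarities (two nests).
        V = np.array([0.4, 0.2, 0.0, -0.3])
        p = nested_logit_probs(V, nests=[np.array([0, 1]), np.array([2, 3])],
                               lambdas=np.array([0.6, 0.6]))
        print(p.round(3), p.sum())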

  18. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  19. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    PubMed Central

    Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  20. Orion Flight Test 1 Architecture: Observed Benefits of a Model Based Engineering Approach

    NASA Technical Reports Server (NTRS)

    Simpson, Kimberly A.; Sindiy, Oleg V.; McVittie, Thomas I.

    2012-01-01

    This paper details how a NASA-led team is using a model-based systems engineering approach to capture, analyze and communicate the end-to-end information system architecture supporting the first unmanned orbital flight of the Orion Multi-Purpose Crew Exploration Vehicle. Along with a brief overview of the approach and its products, the paper focuses on the observed program-level benefits, challenges, and lessons learned, all of which may be applied to improve systems engineering tasks facing characteristically similar challenges.

  1. A method to efficiently apply a biogeochemical model to a landscape.

    Treesearch

    Robert E. Kennedy; David P. Turner; Warren B. Cohen; Michael Guzy

    2006-01-01

    Biogeochemical models offer an important means of understanding carbon dynamics, but the computational complexity of many models means that modeling all grid cells on a large landscape is computationally burdensome. Because most biogeochemical models ignore adjacency effects between cells, however, a more efficient approach is possible. Recognizing that spatial...

  2. Application of a coupled smoothed particle hydrodynamics (SPH) and coarse-grained (CG) numerical modelling approach to study three-dimensional (3-D) deformations of single cells of different food-plant materials during drying.

    PubMed

    Rathnayaka, C M; Karunasena, H C P; Senadeera, W; Gu, Y T

    2018-03-14

    Numerical modelling has gained popularity in many science and engineering streams due to its economic feasibility and advanced analytical features compared to conventional experimental and theoretical models. Food drying is one of the areas where numerical modelling is increasingly applied to improve drying process performance and product quality. This investigation applies a three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) numerical approach to predict the morphological changes of different categories of food-plant cells, such as apple, grape, potato and carrot, during drying. To validate the model predictions, experimental findings from in-house experimental procedures (for apple) and sources in the literature (for grape, potato and carrot) have been utilised. The subsequent comparison indicates that the model predictions show reasonable agreement with the experimental findings, both qualitatively and quantitatively. In this numerical model, a higher computational accuracy has been maintained by limiting the consistency error to below 1% for all four cell types. The proposed meshfree-based approach is well-equipped to predict the morphological changes of plant cellular structures over a wide range of moisture contents (10% to 100% dry basis). Compared to the previous 2-D meshfree-based models developed for plant cell drying, the proposed model can yield more useful insights into morphological behaviour due to its 3-D nature. In addition, the proposed computational modelling approach has high potential to be used as a comprehensive tool in many other tissue-morphology-related investigations.

  3. Is Dysfunctional Use of the Mobile Phone a Behavioural Addiction? Confronting Symptom-Based Versus Process-Based Approaches.

    PubMed

    Billieux, Joël; Philippot, Pierre; Schmid, Cécile; Maurage, Pierre; De Mol, Jan; Van der Linden, Martial

    2015-01-01

    Dysfunctional use of the mobile phone has often been conceptualized as a 'behavioural addiction' that shares most features with drug addictions. In the current article, we challenge the clinical utility of the addiction model as applied to mobile phone overuse. We describe the case of a woman who overuses her mobile phone from two distinct approaches: (1) a symptom-based categorical approach inspired by the addiction model of dysfunctional mobile phone use and (2) a process-based approach resulting from an idiosyncratic clinical case conceptualization. In the case depicted here, the addiction model was shown to lead to standardized and non-relevant treatment, whereas the clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. This finding highlights that conceptualizing excessive behaviours (e.g., gambling and sex) within the addiction model can be a simplification of an individual's psychological functioning, offering only limited clinical relevance. The addiction model, applied to excessive behaviours (e.g., gambling, sex and Internet-related activities), may lead to non-relevant standardized treatments. Clinical case conceptualization allowed identification of specific psychological processes that can be targeted with specific, empirically based psychological interventions. The biomedical model might lead to the simplification of an individual's psychological functioning with limited clinical relevance. Copyright © 2014 John Wiley & Sons, Ltd.

  4. Management strategy evaluation of pheromone-baited trapping techniques to improve management of invasive sea lamprey

    USGS Publications Warehouse

    Dawson, Heather; Jones, Michael L.; Irwin, Brian J.; Johnson, Nicholas; Wagner, Michael C.; Szymanski, Melissa

    2016-01-01

    We applied a management strategy evaluation (MSE) model to examine the potential cost-effectiveness of using pheromone-baited trapping along with conventional lampricide treatment to manage invasive sea lamprey. Four pheromone-baited trapping strategies were modeled: (1) stream activation wherein pheromone was applied to existing traps to achieve a 10^-12 mol/L in-stream concentration, (2) stream activation plus two additional traps downstream with pheromone applied at 2.5 mg/hr (reverse-intercept approach), (3) trap activation wherein pheromone was applied at 10 mg/hr to existing traps, and (4) trap activation and reverse-intercept approach. Each new strategy was applied, with remaining funds applied to conventional lampricide control. Simulating deployment of these hybrid strategies on fourteen Lake Michigan streams resulted in increases of 17 and 11% (strategies 1 and 2) and decreases of 4 and 7% (strategies 3 and 4) in the lakewide mean abundance of adult sea lamprey relative to the status quo. MSE revealed performance targets for trap efficacy to guide additional research because results indicate that combining lampricides and high-efficacy trapping technologies can reduce sea lamprey abundance on average without increasing control costs.

  5. How Qualitative Methods Can be Used to Inform Model Development.

    PubMed

    Husbands, Samantha; Jowett, Susan; Barton, Pelham; Coast, Joanna

    2017-06-01

    Decision-analytic models play a key role in informing healthcare resource allocation decisions. However, there are ongoing concerns with the credibility of models. Modelling methods guidance can encourage good practice within model development, but its value is dependent on its ability to address the areas that modellers find most challenging. Further, it is important that modelling methods and related guidance are continually updated in light of any new approaches that could potentially enhance model credibility. The objective of this article was to highlight the ways in which qualitative methods have been used and recommended to inform decision-analytic model development and enhance modelling practices. With reference to the literature, the article discusses two key ways in which qualitative methods can be, and have been, applied. The first approach involves using qualitative methods to understand and inform general and future processes of model development, and the second involves using qualitative techniques to directly inform the development of individual models. The literature suggests that qualitative methods can improve the validity and credibility of modelling processes by providing a means to understand existing modelling approaches, identifying where problems are occurring and where further guidance is needed. Qualitative methods can also be applied within model development to facilitate expert input into structural development. We recommend that current and future model development would benefit from greater integration of qualitative methods, specifically by studying 'real' modelling processes, and by developing recommendations around how qualitative methods can be adopted within everyday modelling practice.

  6. Model-free methods to study membrane environmental probes: a comparison of the spectral phasor and generalized polarization approaches

    PubMed Central

    Malacrida, Leonel; Gratton, Enrico; Jameson, David M

    2016-01-01

    In this note, we present a discussion of the advantages and scope of model-free analysis methods applied to the popular solvatochromic probe LAURDAN, which is widely used as an environmental probe to study dynamics and structure in membranes. In particular, we compare and contrast the generalized polarization approach with the spectral phasor approach. To illustrate our points we utilize several model membrane systems containing pure lipid phases and, in some cases, cholesterol or surfactants. We demonstrate that the spectral phasor method offers definitive advantages in the case of complex systems. PMID:27182438
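
    For illustration, the following minimal Python sketch computes both quantities for a hypothetical LAURDAN-style emission spectrum: the two-wavelength generalized polarization, GP = (I_440 - I_490)/(I_440 + I_490), and the first-harmonic spectral phasor coordinates (the normalized Fourier cosine and sine sums over the spectrum). The wavelength grid and Gaussian emission band are toy assumptions, not data from the note.

      import numpy as np

      lam = np.linspace(400, 550, 151)                       # nm, hypothetical grid
      spectrum = np.exp(-0.5 * ((lam - 455.0) / 25.0) ** 2)  # toy emission band

      def generalized_polarization(lam, I, blue=440.0, red=490.0):
          Ib = np.interp(blue, lam, I)
          Ir = np.interp(red, lam, I)
          return (Ib - Ir) / (Ib + Ir)

      def spectral_phasor(lam, I, harmonic=1):
          # First-harmonic Fourier coordinates of the normalized spectrum
          phase = 2 * np.pi * harmonic * (lam - lam[0]) / (lam[-1] - lam[0])
          total = I.sum()
          return (I * np.cos(phase)).sum() / total, (I * np.sin(phase)).sum() / total

      print("GP:", generalized_polarization(lam, spectrum))
      print("phasor (G, S):", spectral_phasor(lam, spectrum))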

  7. Investigating decoherence in a simple system

    NASA Technical Reports Server (NTRS)

    Albrecht, Andreas

    1991-01-01

    The results of some simple calculations designed to study quantum decoherence are presented. The physics of quantum decoherence are briefly reviewed, and a very simple 'toy' model is analyzed. Exact solutions are found using numerical techniques. The type of incoherence exhibited by the model can be changed by varying a coupling strength. The author explains why the conventional approach to studying decoherence by checking the diagonality of the density matrix is not always adequate. Two other approaches, the decoherence functional and the Schmidt paths approach, are applied to the toy model and contrasted to each other. Possible problems with each are discussed.

  8. Improving Measurement in Health Education and Health Behavior Research Using Item Response Modeling: Comparison with the Classical Test Theory Approach

    ERIC Educational Resources Information Center

    Wilson, Mark; Allen, Diane D.; Li, Jun Corser

    2006-01-01

    This paper compares the approach and resultant outcomes of item response models (IRMs) and classical test theory (CTT). First, it reviews basic ideas of CTT, and compares them to the ideas about using IRMs introduced in an earlier paper. It then applies a comparison scheme based on the AERA/APA/NCME "Standards for Educational and…

  9. Porous Media Approach for Modeling Closed Cell Foam

    NASA Technical Reports Server (NTRS)

    Ghosn, Louis J.; Sullivan, Roy M.

    2006-01-01

    In order to minimize boil off of the liquid oxygen and liquid hydrogen and to prevent the formation of ice on its exterior surface, the Space Shuttle External Tank (ET) is insulated using various low-density, closed-cell polymeric foams. Improved analysis methods for these foam materials are needed to predict the foam structural response and to help identify the foam fracture behavior in order to help minimize foam shedding occurrences. This presentation describes a continuum-based approach to modeling the foam thermo-mechanical behavior that accounts for the cellular nature of the material and explicitly addresses the effect of the internal cell gas pressure. A porous media approach is implemented in a finite element framework to model the mechanical behavior of the closed-cell foam. The ABAQUS general purpose finite element program is used to simulate the continuum behavior of the foam. The soil mechanics element is implemented to account for the cell internal pressure and its effect on the stress and strain fields. The pressure variation inside the closed cells is calculated using the ideal gas law. The soil mechanics element is compatible with an orthotropic materials model to capture the different behavior between the rise and in-plane directions of the foam. The porous media approach is applied to model the foam thermal strain and calculate the foam effective coefficient of thermal expansion. The calculated foam coefficients of thermal expansion were able to simulate the measured thermal strain during heat up from cryogenic temperature to room temperature in vacuum. The porous media approach was applied to an insulated substrate with one inch of foam and compared to a simple elastic solution without pore pressure. The porous media approach is also applied to model the foam mechanical behavior during subscale laboratory experiments. In this test, a foam layer sprayed on a metal substrate is subjected to a temperature variation while the metal substrate is stretched to simulate the structural response of the tank during operation. The thermal expansion mismatch between the foam and the metal substrate and the thermal gradient in the foam layer cause high tensile stresses near the metal/foam interface that can lead to delamination.
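
    As a small worked illustration of the cell-gas pressure calculation described above, the sketch below applies the ideal gas law to a single closed cell; the initial conditions and the uniform-volumetric-strain assumption are ours, chosen only for illustration.

      # Minimal sketch: closed-cell gas pressure from the ideal gas law,
      # assuming the cell gas follows pV/T = const with V = V0 * (1 + eps_v).
      def cell_pressure(p0, T0, T, eps_v):
          """p0, T0: initial cell pressure (Pa) and temperature (K);
          T: current temperature (K); eps_v: volumetric strain of the cell gas."""
          return p0 * (T / T0) / (1.0 + eps_v)

      # Cooling from room temperature to cryogenic conditions at fixed volume
      print(cell_pressure(p0=101325.0, T0=293.0, T=90.0, eps_v=0.0))  # ~31 kPa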

  10. Locomotion Dynamics for Bio-inspired Robots with Soft Appendages: Application to Flapping Flight and Passive Swimming

    NASA Astrophysics Data System (ADS)

    Boyer, Frédéric; Porez, Mathieu; Morsli, Ferhat; Morel, Yannick

    2017-08-01

    In animal locomotion, either in fish or flying insects, the use of flexible terminal organs or appendages greatly improves the performance of locomotion (thrust and lift). In this article, we propose a general unified framework for modeling and simulating the (bio-inspired) locomotion of robots using soft organs. The proposed approach is based on the model of Mobile Multibody Systems (MMS). The distributed flexibilities are modeled according to two major approaches: the Floating Frame Approach (FFA) and the Geometrically Exact Approach (GEA). Encompassing these two approaches in the Newton-Euler modeling formalism of robotics, this article proposes a unique modeling framework suited to the fast numerical integration of the dynamics of a MMS in both the FFA and the GEA. This general framework is applied on two illustrative examples drawn from bio-inspired locomotion: the passive swimming in von Karman Vortex Street, and the hovering flight with flexible flapping wings.

  11. Systematic review and overview of health economic evaluation models in obesity prevention and therapy.

    PubMed

    Schwander, Bjoern; Hiligsmann, Mickaël; Nuijten, Mark; Evers, Silvia

    2016-10-01

    Given the increasing clinical and economic burden of obesity, it is of major importance to identify cost-effective approaches for obesity management. Areas covered: This study aims to systematically review and compile an overview of published decision models for health economic assessments (HEA) in obesity, in order to summarize and compare their key characteristics as well as to identify, inform and guide future research. Of the 4,293 abstracts identified, 87 papers met our inclusion criteria. A wide range of different methodological approaches have been identified. Of the 87 papers, 69 (79%) applied unique/distinctive modelling approaches. Expert commentary: This wide range of approaches suggests the need to develop recommendations/minimal requirements for model-based HEA of obesity. In order to reach this long-term goal, further research is required. Valuable future research steps would be to investigate the predictiveness, validity and quality of the identified modelling approaches.

  12. Agent-based modeling: a new approach for theory building in social psychology.

    PubMed

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulation of large numbers of autonomous agents that interact with each other and with a simulated environment and the observation of emergent patterns from their interactions. The authors believe that the ABM approach is better able than prevailing approaches in the field, variable-based modeling (VBM) techniques such as causal modeling, to capture types of complex, dynamic, interactive processes so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.
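
    To make the contrast concrete, the following self-contained Python sketch is a minimal agent-based model (bounded-confidence opinion dynamics, a standard textbook example rather than one from this article): agents repeatedly interact in pairs under a simple local rule, and opinion clusters emerge from the interactions, the kind of pattern a single-equation variable-based description would not produce directly.

      import numpy as np

      rng = np.random.default_rng(1)
      opinions = rng.uniform(0, 1, size=100)   # one opinion per agent
      threshold, rate = 0.2, 0.5               # interact only if opinions are close

      for _ in range(20000):
          i, j = rng.integers(0, opinions.size, size=2)
          if i != j and abs(opinions[i] - opinions[j]) < threshold:
              shift = rate * (opinions[j] - opinions[i])
              opinions[i] += shift             # both agents move toward each other
              opinions[j] -= shift

      print(np.sort(opinions.round(2)))        # opinions collapse into a few clusters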

  13. A computational model-based validation of Guyton's analysis of cardiac output and venous return curves

    NASA Technical Reports Server (NTRS)

    Mukkamala, R.; Cohen, R. J.; Mark, R. G.

    2002-01-01

    Guyton developed a popular approach for understanding the factors responsible for cardiac output (CO) regulation in which 1) the heart-lung unit and systemic circulation are independently characterized via CO and venous return (VR) curves, and 2) average CO and right atrial pressure (RAP) of the intact circulation are predicted by graphically intersecting the curves. However, this approach is virtually impossible to verify experimentally. We theoretically evaluated the approach with respect to a nonlinear, computational model of the pulsatile heart and circulation. We developed two sets of open circulation models to generate CO and VR curves, differing by the manner in which average RAP was varied. One set applied constant RAPs, while the other set applied pulsatile RAPs. Accurate prediction of intact, average CO and RAP was achieved only by intersecting the CO and VR curves generated with pulsatile RAPs because of the pulsatility and nonlinearity (e.g., systemic venous collapse) of the intact model. The CO and VR curves generated with pulsatile RAPs were also practically independent. This theoretical study therefore supports the validity of Guyton's graphical analysis.
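
    The graphical analysis itself is easy to reproduce numerically. The sketch below intersects a venous-return curve (with a plateau below RAP = 0 mmHg representing venous collapse) and a saturating Starling-type cardiac-output curve; all parameter values and the logistic CO shape are illustrative assumptions, not the authors' pulsatile computational model.

      import numpy as np
      from scipy.optimize import brentq

      Pms, Rvr = 7.0, 1.4          # mean systemic pressure (mmHg), VR resistance

      def venous_return(rap):      # L/min; flow plateaus once the veins collapse
          return (Pms - max(rap, 0.0)) / Rvr

      def cardiac_output(rap):     # saturating Starling-like curve, L/min
          return 10.0 / (1.0 + np.exp(-(rap - 2.0)))

      # Guyton's operating point: where the two curves intersect
      rap_star = brentq(lambda r: cardiac_output(r) - venous_return(r), -4.0, Pms)
      print(f"operating point: RAP = {rap_star:.2f} mmHg, "
            f"CO = {cardiac_output(rap_star):.2f} L/min")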

  14. Applying Emax model and bivariate thin plate splines to assess drug interactions

    PubMed Central

    Kong, Maiying; Lee, J. Jack

    2014-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95% point-wise confidence interval as well as its 95% simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies. PMID:20036878
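
    As a compact illustration of the Loewe construction for Emax-type curves, the Python sketch below inverts each single-drug curve E(d) = Emax*d/(ED50 + d) and solves d1/D1(E) + d2/D2(E) = 1 for the additive effect at a combination dose (d1, d2). The parameters are hypothetical, and the search is restricted to effect levels both drugs can reach alone; the paper supplies the extra procedure needed when the maximum effects differ.

      from scipy.optimize import brentq

      def inv_emax(E, Emax, ED50):
          # dose of the single drug alone that produces effect E
          return ED50 * E / (Emax - E)

      def loewe_additive_effect(d1, d2, p1=(1.0, 2.0), p2=(0.9, 5.0)):
          # p1, p2: (Emax, ED50) for each drug, illustrative values
          f = lambda E: d1 / inv_emax(E, *p1) + d2 / inv_emax(E, *p2) - 1.0
          upper = min(p1[0], p2[0]) - 1e-9   # stay below both maximum effects
          return brentq(f, 1e-9, upper)

      print(loewe_additive_effect(1.0, 2.0))   # predicted additive effect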

  15. Applying Emax model and bivariate thin plate splines to assess drug interactions.

    PubMed

    Kong, Maiying; Lee, J Jack

    2010-01-01

    We review the semiparametric approach previously proposed by Kong and Lee and extend it to a case in which the dose-effect curves follow the Emax model instead of the median effect equation. When the maximum effects for the investigated drugs are different, we provide a procedure to obtain the additive effect based on the Loewe additivity model. Then, we apply a bivariate thin plate spline approach to estimate the effect beyond additivity along with its 95 per cent point-wise confidence interval as well as its 95 per cent simultaneous confidence interval for any combination dose. Thus, synergy, additivity, and antagonism can be identified. The advantages of the method are that it provides an overall assessment of the combination effect on the entire two-dimensional dose space spanned by the experimental doses, and it enables us to identify complex patterns of drug interaction in combination studies. In addition, this approach is robust to outliers. To illustrate this procedure, we analyzed data from two case studies.

  16. Human Reliability Analysis in Support of Risk Assessment for Positive Train Control

    DOT National Transportation Integrated Search

    2003-06-01

    This report describes an approach to evaluating the reliability of human actions that are modeled in a probabilistic risk assessment : (PRA) of train control operations. This approach to human reliability analysis (HRA) has been applied in the case o...

  17. THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY: AN EXPANDED VIEW OF CHEMICAL TOXICITY

    EPA Science Inventory

    A chemistry approach to predictive toxicology relies on structure-activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. T...

  18. Hierarchical relaxation methods for multispectral pixel classification as applied to target identification

    NASA Astrophysics Data System (ADS)

    Cohen, E. A., Jr.

    1985-02-01

    This report provides insights into the approaches toward image modeling as applied to target detection. The approach is that of examining the energy in prescribed wave-bands which emanate from a target and correlating the emissions. Typically, one might be looking at two or three infrared bands, possibly together with several visual bands. The target is segmented, using both first and second order modeling, into a set of interesting components and these components are correlated so as to enhance the classification process. A Markov-type model is used to provide an a priori assessment of the spatial relationships among critical parts of the target, and a stochastic model using the output of an initial probabilistic labeling is invoked. The tradeoff between this stochastic model and the Markov model is then optimized to yield a best labeling for identification purposes. In an identification of friend or foe (IFF) context, this methodology could be of interest, for it provides the ingredients for such a higher level of understanding.

  19. Use of statistical and neural net approaches in predicting toxicity of chemicals.

    PubMed

    Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D

    2000-01-01

    Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.
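
    A toy version of the tiered idea can be written in a few lines: fit a model on the lowest descriptor tier, then on its union with the next tier, and compare cross-validated fit. The descriptor matrices below are random stand-ins, not actual topostructural or topochemical indices.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      n = 69                                     # matches the benzene set size
      topo = rng.normal(size=(n, 4))             # stand-in topostructural indices
      chem = rng.normal(size=(n, 4))             # stand-in topochemical indices
      # Simulated toxicity driven mostly by the "topochemical" block
      y = chem @ np.array([0.8, -0.4, 0.3, 0.1]) + 0.1 * rng.normal(size=n)

      for name, X in [("topostructural", topo),
                      ("+ topochemical", np.hstack([topo, chem]))]:
          r2 = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2").mean()
          print(f"{name:>16}: CV R^2 = {r2:.2f}")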

  20. Modelling leaf photosynthetic and transpiration temperature-dependent responses in Vitis vinifera cv. Semillon grapevines growing in hot, irrigated vineyard conditions

    PubMed Central

    Greer, Dennis H.

    2012-01-01

    Background and aims: Grapevines growing in Australia are often exposed to very high temperatures and the question of how the gas exchange processes adjust to these conditions is not well understood. The aim was to develop a model of photosynthesis and transpiration in relation to temperature to quantify the impact of the growing conditions on vine performance. Methodology: Leaf gas exchange was measured along the grapevine shoots in accordance with their growth and development over several growing seasons. Using a general linear statistical modelling approach, photosynthesis and transpiration were modelled against leaf temperature separated into bands and the model parameters and coefficients applied to independent datasets to validate the model. Principal results: Photosynthesis, transpiration and stomatal conductance varied along the shoot, with early emerging leaves having the highest rates, but these declined as later emerging leaves increased their gas exchange capacities in accordance with development. The general linear modelling approach applied to these data revealed that photosynthesis at each temperature was additively dependent on stomatal conductance, internal CO2 concentration and photon flux density. The temperature-dependent coefficients for these parameters applied to other datasets gave a predicted rate of photosynthesis that was linearly related to the measured rates, with a 1 : 1 slope. Temperature-dependent transpiration was multiplicatively related to stomatal conductance and the leaf to air vapour pressure deficit and applying the coefficients also showed a highly linear relationship, with a 1 : 1 slope between measured and modelled rates, when applied to independent datasets. Conclusions: The models developed for the grapevines were relatively simple but accounted for much of the seasonal variation in photosynthesis and transpiration. The goodness of fit in each case demonstrated that explicitly selecting leaf temperature as a model parameter, rather than including temperature intrinsically as is usually done in more complex models, was warranted. PMID:22567220
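
    The modelling structure described (band-specific linear terms for conductance, internal CO2 and light) maps naturally onto a formula interface. Below is a hedged Python sketch using statsmodels on simulated data; the column names, band labels, and coefficients are hypothetical, not those of the grapevine dataset.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n = 300
      df = pd.DataFrame({
          "gs": rng.uniform(0.05, 0.5, n),       # stomatal conductance
          "Ci": rng.uniform(150, 350, n),        # internal CO2 concentration
          "PPFD": rng.uniform(100, 2000, n),     # photon flux density
          "temp_band": rng.choice(["20-25", "25-30", "30-35", "35-40"], n),
      })
      df["A"] = 10 * df.gs + 0.01 * df.Ci + 0.002 * df.PPFD + rng.normal(0, 0.5, n)

      # Additive predictors with band-specific coefficients via interactions
      model = smf.ols("A ~ C(temp_band) * (gs + Ci + PPFD)", data=df).fit()
      print(model.params.head())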

  1. An integrative top-down and bottom-up qualitative model construction framework for exploration of biochemical systems.

    PubMed

    Wu, Zujian; Pang, Wei; Coghill, George M

    Computational modelling of biochemical systems based on top-down and bottom-up approaches has been well studied over the last decade. In this research, after illustrating how to generate atomic components from a set of given reactants and two user pre-defined component patterns, we propose an integrative top-down and bottom-up modelling approach for stepwise qualitative exploration of interactions among reactants in biochemical systems. An evolution strategy is applied in the top-down modelling approach to compose models, and simulated annealing is employed in the bottom-up modelling approach to explore potential interactions based on models constructed in the top-down modelling process. Both the top-down and bottom-up approaches support stepwise modular addition or subtraction for model evolution. Experimental results indicate that our modelling approach is able to learn the relationships among biochemical reactants qualitatively. In addition, hidden reactants of the target biochemical system can be obtained by generating complex reactants in the corresponding composed models. Moreover, qualitatively learned models with inferred reactants and alternative topologies can be used for further wet-lab experimental investigations by interested biologists, which may result in a better understanding of the system.

  2. Cost-of-illness studies based on massive data: a prevalence-based, top-down regression approach.

    PubMed

    Stollenwerk, Björn; Welchowski, Thomas; Vogl, Matthias; Stock, Stephanie

    2016-04-01

    Despite the increasing availability of routine data, no analysis method has yet been presented for cost-of-illness (COI) studies based on massive data. We aim, first, to present such a method and, second, to assess the relevance of the associated gain in numerical efficiency. We propose a prevalence-based, top-down regression approach consisting of five steps: aggregating the data; fitting a generalized additive model (GAM); predicting costs via the fitted GAM; comparing predicted costs between prevalent and non-prevalent subjects; and quantifying the stochastic uncertainty via error propagation. To demonstrate the method, it was applied, in the context of chronic lung disease, to aggregated German sickness fund data (from 1999) covering over 7.3 million insured. To assess the gain in numerical efficiency, the computational time of the innovative approach was compared with that of corresponding GAMs applied to simulated individual-level data. Furthermore, the probability of model failure was modeled via logistic regression. Applying the innovative method was reasonably fast (19 min). In contrast, for patient-level data, computational time increased disproportionately with sample size. Furthermore, using patient-level data was accompanied by a substantial risk of model failure (about 80 % for 6 million subjects). The gain in computational efficiency of the innovative COI method seems to be of practical relevance. Furthermore, it may yield more precise cost estimates.
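
    A compressed sketch of the five-step recipe might look as follows in Python, with a spline regression standing in for the GAM; the aggregation cells, covariates, and cost model are all simulated assumptions for illustration only.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      raw = pd.DataFrame({
          "age": rng.integers(18, 90, 50_000),
          "prevalent": rng.integers(0, 2, 50_000),
      })
      raw["cost"] = 500 + 20 * raw.age + 900 * raw.prevalent + rng.normal(0, 300, len(raw))

      # Step 1: aggregate individual records into age x prevalence cells
      cells = raw.groupby(["age", "prevalent"], as_index=False)["cost"].mean()

      # Steps 2-3: fit a smooth cost model on the aggregated data, then predict
      fit = smf.ols("cost ~ bs(age, df=5) + prevalent", data=cells).fit()

      # Step 4: excess cost attributable to prevalence, at the observed age mix
      excess = fit.predict(cells.assign(prevalent=1)) - fit.predict(cells.assign(prevalent=0))
      print("mean excess cost per prevalent case:", excess.mean().round(1))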

  3. Monitoring tooth profile faults in epicyclic gearboxes using synchronously averaged motor currents: Mathematical modeling and experimental validation

    NASA Astrophysics Data System (ADS)

    Ottewill, J. R.; Ruszczyk, A.; Broda, D.

    2017-02-01

    Time-varying transmission paths and inaccessibility can increase the difficulty in both acquiring and processing vibration signals for the purpose of monitoring epicyclic gearboxes. Recent work has shown that the synchronous signal averaging approach may be applied to measured motor currents in order to diagnose tooth faults in parallel shaft gearboxes. In this paper we further develop the approach, so that it may also be applied to monitor tooth faults in epicyclic gearboxes. A low-degree-of-freedom model of an epicyclic gearbox which incorporates the possibility of simulating tooth faults, as well as any subsequent tooth contact loss due to these faults, is introduced. By combining this model with a simple space-phasor model of an induction motor it is possible to show that, in theory, tooth faults in epicyclic gearboxes may be identified from motor currents. Applying the synchronous averaging approach to experimentally recorded motor currents and angular displacements from a shaft-mounted encoder validates this finding. Comparison between experiments and theory highlights the influence of operating conditions, backlash and shaft couplings on the transient response excited in the currents by the tooth fault. The results obtained suggest that the method may be a viable alternative or complement to more traditional methods for monitoring gearboxes. However, general observations also indicate that further investigations into the sensitivity and robustness of the method would be beneficial.
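
    The core signal-processing step is easy to sketch. Below, a measured signal is resampled onto a uniform shaft-angle grid and averaged over revolutions, so components synchronous with the shaft are reinforced while asynchronous noise averages out; the 5 Hz shaft, signal content, and noise level are made-up test inputs, not the paper's experimental data.

      import numpy as np

      def synchronous_average(signal, theta, samples_per_rev=512):
          """signal: measured current; theta: unwrapped shaft angle in radians,
          both sampled at the same time instants."""
          n_revs = int(theta[-1] // (2 * np.pi))
          grid = np.linspace(0, 2 * np.pi, samples_per_rev, endpoint=False)
          avg = np.zeros(samples_per_rev)
          for k in range(n_revs):
              # interpolate this revolution onto the common angle grid
              avg += np.interp(2 * np.pi * k + grid, theta, signal)
          return avg / n_revs

      rng = np.random.default_rng(6)
      t = np.linspace(0, 10, 100_000)
      theta = 2 * np.pi * 5 * t                     # hypothetical 5 Hz shaft
      current = np.sin(theta) + 0.05 * np.sin(7 * theta) + rng.normal(0, 0.5, t.size)
      print(synchronous_average(current, theta).round(2)[:8])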

  4. A high speed model-based approach for wavefront sensorless adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing

    2018-02-01

    To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The fast general model-based approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can complete an effective correction of a modal aberration by applying just one disturbance to the deformable mirror (one correction per disturbance); the correction is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which achieves one aberration correction only after applying N disturbances to the deformable mirror (one correction per N disturbances).

  5. Defining and Applying a Functionality Approach to Intellectual Disability

    ERIC Educational Resources Information Center

    Luckasson, R.; Schalock, R. L.

    2013-01-01

    Background: The current functional models of disability do not adequately incorporate significant changes of the last three decades in our understanding of human functioning, and how the human functioning construct can be applied to clinical functions, professional practices and outcomes evaluation. Methods: The authors synthesise current…

  6. Populations, Natural Selection, and Applied Organizational Science.

    ERIC Educational Resources Information Center

    McKelvey, Bill; Aldrich, Howard

    1983-01-01

    Deficiencies in existing models in organizational science may be remedied by applying the population approach, with its concepts of taxonomy, classification, evolution, and population ecology; and natural selection theory, with its principles of variation, natural selection, heredity, and struggle for existence, to the idea of organizational forms…

  7. Applied Computational Chemistry for the Blind and Visually Impaired

    ERIC Educational Resources Information Center

    Wedler, Henry B.; Cohen, Sarah R.; Davis, Rebecca L.; Harrison, Jason G.; Siebert, Matthew R.; Willenbring, Dan; Hamann, Christian S.; Shaw, Jared T.; Tantillo, Dean J.

    2012-01-01

    We describe accommodations that we have made to our applied computational-theoretical chemistry laboratory to provide access for blind and visually impaired students interested in independent investigation of structure-function relationships. Our approach utilizes tactile drawings, molecular model kits, existing software, Bash and Perl scripts…

  8. Joyful Learning in Kindergarten. Revised Edition.

    ERIC Educational Resources Information Center

    Fisher, Bobbi

    Applying the conditions of natural learning to create caring kindergarten classroom environments may support students as lifelong learners. This book presents a natural learning classroom model for implementing a whole-language approach in kindergarten. The chapters are as follows: (1) "My Beliefs about How Children Learn"; (2) "Applying Whole…

  9. Linking Goal-Oriented Requirements and Model-Driven Development

    NASA Astrophysics Data System (ADS)

    Pastor, Oscar; Giachetti, Giovanni

    In the context of Goal-Oriented Requirement Engineering (GORE) there are interesting modeling approaches for the analysis of complex scenarios that are oriented to obtain and represent the relevant requirements for the development of software products. However, the way to use these GORE models in an automated Model-Driven Development (MDD) process is not clear, and, in general terms, the translation of these models into the final software products is still performed manually. Therefore, in this chapter, we show an approach to automatically link GORE models and MDD processes, which has been elaborated by considering the experience obtained from linking the i* framework with an industrially applied MDD approach. The linking approach proposed is formulated by means of a generic process that is based on current modeling standards and technologies in order to facilitate its application to different MDD and GORE approaches. Special attention is paid to how this process generates appropriate model transformation mechanisms to automatically obtain MDD conceptual models from GORE models, and how it can be used to specify validation mechanisms to assure correct model transformations.

  10. Human systems immunology: hypothesis-based modeling and unbiased data-driven approaches.

    PubMed

    Arazi, Arnon; Pendergraft, William F; Ribeiro, Ruy M; Perelson, Alan S; Hacohen, Nir

    2013-10-31

    Systems immunology is an emerging paradigm that aims at a more systematic and quantitative understanding of the immune system. Two major approaches have been utilized to date in this field: unbiased data-driven modeling to comprehensively identify molecular and cellular components of a system and their interactions; and hypothesis-based quantitative modeling to understand the operating principles of a system by extracting a minimal set of variables and rules underlying them. In this review, we describe applications of the two approaches to the study of viral infections and autoimmune diseases in humans, and discuss possible ways by which these two approaches can synergize when applied to human immunology. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    NASA Astrophysics Data System (ADS)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  12. Identifying Interacting Genetic Variations by Fish-Swarm Logic Regression

    PubMed Central

    Yang, Aiyuan; Yan, Chunxia; Zhu, Feng; Zhao, Zhongmeng; Cao, Zhi

    2013-01-01

    Understanding associations between genotypes and complex traits is a fundamental problem in human genetics. A major open problem in mapping phenotypes is that of identifying a set of interacting genetic variants, which might contribute to complex traits. Logic regression (LR) is a powerful multivariant association tool. Several LR-based approaches have been successfully applied to different datasets. However, these approaches are not adequate with regard to accuracy and efficiency. In this paper, we propose a new LR-based approach, called fish-swarm logic regression (FSLR), which improves the logic regression process by incorporating swarm optimization. In our approach, a school of fish agents is run in parallel. Each fish agent holds a regression model, while the school searches for better models through various preset behaviors. A swarm algorithm improves the accuracy and the efficiency by speeding up the convergence and preventing it from dropping into local optima. We apply our approach to a real screening dataset and a series of simulation scenarios. Compared to three existing LR-based approaches, our approach outperforms them by having lower type I and type II error rates, being able to identify more preset causal sites, and running at faster speeds. PMID:23984382

  13. Models, validation, and applied geochemistry: Issues in science, communication, and philosophy

    USGS Publications Warehouse

    Nordstrom, D. Kirk

    2012-01-01

    Models have become so fashionable that many scientists and engineers cannot imagine working without them. The predominant use of computer codes to execute model calculations has blurred the distinction between code and model. The recent controversy regarding model validation has brought into question what we mean by a ‘model’ and by ‘validation.’ It has become apparent that the usual meaning of validation may be common in engineering practice and seems useful in legal practice but it is contrary to scientific practice and brings into question our understanding of science and how it can best be applied to such problems as hazardous waste characterization, remediation, and aqueous geochemistry in general. This review summarizes arguments against using the phrase model validation and examines efforts to validate models for high-level radioactive waste management and for permitting and monitoring open-pit mines. Part of the controversy comes from a misunderstanding of ‘prediction’ and the need to distinguish logical from temporal prediction. Another problem stems from the difference in the engineering approach contrasted with the scientific approach. The reductionist influence on the way we approach environmental investigations also limits our ability to model the interconnected nature of reality. Guidelines are proposed to improve our perceptions and proper utilization of models. Use of the word ‘validation’ is strongly discouraged when discussing model reliability.

  14. A Mathematical Framework for the Complex System Approach to Group Dynamics: The Case of Recovery House Social Integration.

    PubMed

    Light, John M; Jason, Leonard A; Stevens, Edward B; Callahan, Sarah; Stone, Ariel

    2016-03-01

    The complex system conception of group social dynamics often involves not only changing individual characteristics, but also changing within-group relationships. Recent advances in stochastic dynamic network modeling allow these interdependencies to be modeled from data. This methodology is discussed within a context of other mathematical and statistical approaches that have been or could be applied to study the temporal evolution of relationships and behaviors within small- to medium-sized groups. An example model is presented, based on a pilot study of five Oxford House recovery homes, sober living environments for individuals following release from acute substance abuse treatment. This model demonstrates how dynamic network modeling can be applied to such systems, examines and discusses several options for pooling, and shows how results are interpreted in line with complex system concepts. Results suggest that this approach (a) is a credible modeling framework for studying group dynamics even with limited data, (b) improves upon the most common alternatives, and (c) is especially well-suited to complex system conceptions. Continuing improvements in stochastic models and associated software may finally lead to mainstream use of these techniques for the study of group dynamics, a shift already occurring in related fields of behavioral science.

  15. A framework for scalable parameter estimation of gene circuit models using structural information.

    PubMed

    Kuwahara, Hiroyuki; Fan, Ming; Wang, Suojin; Gao, Xin

    2013-07-01

    Systematic and scalable parameter estimation is key to constructing complex gene regulatory models and, ultimately, to facilitating an integrative systems biology approach to quantitatively understanding the molecular mechanisms underpinning gene regulation. Here, we report a novel framework for efficient and scalable parameter estimation that focuses specifically on the modeling of gene circuits. Exploiting the structure commonly found in gene circuit models, this framework decomposes a system of coupled rate equations into individual ones and efficiently integrates them separately to reconstruct the mean time evolution of the gene products. The accuracy of the parameter estimates is refined by iteratively increasing the accuracy of numerical integration using the model structure. As a case study, we applied our framework to four gene circuit models with complex dynamics based on three synthetic datasets and one time-series microarray dataset. We compared our framework to three state-of-the-art parameter estimation methods and found that our approach consistently generated higher quality parameter solutions efficiently. Although many general-purpose parameter estimation methods have been applied to the modeling of gene circuits, our results suggest that more tailored approaches exploiting domain-specific information may be key to the reverse engineering of complex biological systems. http://sfb.kaust.edu.sa/Pages/Software.aspx. Supplementary data are available at Bioinformatics online.
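
    The decomposition idea can be sketched on a toy two-gene circuit: each rate equation is integrated on its own, with the other gene's trajectory taken from the previous sweep (a Picard-style iteration). The circuit (dx/dt = 1/(1+y^2) - 0.5x, dy/dt = 1/(1+x^2) - 0.5y), rates, and convergence check below are illustrative assumptions, not the authors' models.

      import numpy as np
      from scipy.integrate import solve_ivp

      t = np.linspace(0, 20, 201)
      x, y = np.zeros_like(t), np.zeros_like(t)   # current trajectory iterates

      for sweep in range(10):
          x_old = x.copy()
          # Integrate each equation separately, interpolating the other
          # gene's trajectory from the previous sweep
          x = solve_ivp(lambda ti, s: 1/(1 + np.interp(ti, t, y)**2) - 0.5*s[0],
                        (t[0], t[-1]), [0.0], t_eval=t).y[0]
          y = solve_ivp(lambda ti, s: 1/(1 + np.interp(ti, t, x)**2) - 0.5*s[0],
                        (t[0], t[-1]), [0.0], t_eval=t).y[0]

      print("max change on final sweep:", np.abs(x - x_old).max())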

  16. Estimating demographic parameters using a combination of known-fate and open N-mixture models

    USGS Publications Warehouse

    Schmidt, Joshua H.; Johnson, Devin S.; Lindberg, Mark S.; Adams, Layne G.

    2015-01-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark–resight data sets. We provide implementations in both the BUGS language and an R package.
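
    A deliberately simplified joint likelihood conveys the flavor of the integration: the survival parameter enters both a binomial known-fate term for marked animals and a count model for unmarked packs, and the two log-likelihoods are summed and maximized together. The data, the Poisson count model, and the recruitment structure below are toy assumptions, far simpler than the open N-mixture component of the actual model.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import binom, poisson

      # Hypothetical data: 40 marked animals of which 31 survive the year,
      # plus pack counts over four years
      marked_n, marked_surv = 40, 31
      counts = np.array([52, 48, 45, 43])

      def neg_loglik(params):
          phi = 1 / (1 + np.exp(-params[0]))   # survival, kept in (0, 1)
          r = np.exp(params[1])                # recruitment, kept positive
          ll = binom.logpmf(marked_surv, marked_n, phi)   # known-fate part
          expected = counts[0]
          for c in counts[1:]:                 # count part shares the same phi
              expected = expected * phi + r
              ll += poisson.logpmf(c, expected)
          return -ll

      fit = minimize(neg_loglik, [0.0, 0.0], method="Nelder-Mead")
      phi_hat = 1 / (1 + np.exp(-fit.x[0]))
      print("joint survival estimate:", round(float(phi_hat), 3))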

  17. Compressed Sensing for Metrics Development

    NASA Astrophysics Data System (ADS)

    McGraw, R. L.; Giangrande, S. E.; Liu, Y.

    2012-12-01

    Models by their very nature tend to be sparse in the sense that they are designed, with a few optimally selected key parameters, to provide simple yet faithful representations of a complex observational dataset or computer simulation output. This paper seeks to apply methods from compressed sensing (CS), a new area of applied mathematics currently undergoing a very rapid development (see for example Candes et al., 2006), to FASTER needs for new approaches to model evaluation and metrics development. The CS approach will be illustrated for a time series generated using a few-parameter (i.e. sparse) model. A seemingly incomplete set of measurements, taken at a just few random sampling times, is then used to recover the hidden model parameters. Remarkably there is a sharp transition in the number of required measurements, beyond which both the model parameters and time series are recovered exactly. Applications to data compression, data sampling/collection strategies, and to the development of metrics for model evaluation by comparison with observation (e.g. evaluation of model predictions of cloud fraction using cloud radar observations) are presented and discussed in context of the CS approach. Cited reference: Candes, E. J., Romberg, J., and Tao, T. (2006), Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52, 489-509.
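
    The flavor of the CS recovery can be shown in a few lines: a signal that is sparse in some basis is reconstructed from a seemingly incomplete set of random samples by L1-penalized regression (sklearn's Lasso here, as a convenient stand-in for basis pursuit). The cosine basis, sparsity level, and sample count are illustrative choices, and the recovery is approximate rather than the exact-transition regime described by Candes et al.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(5)
      n, k, m = 256, 4, 60                     # signal length, sparsity, samples
      coeffs = np.zeros(n)
      coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 3, k)

      F = np.cos(np.outer(np.arange(n), np.arange(n)) * np.pi / n)  # cosine basis
      signal = F @ coeffs
      rows = rng.choice(n, m, replace=False)   # a few random sampling times

      model = Lasso(alpha=0.01, max_iter=50_000).fit(F[rows], signal[rows])
      recovered = F @ model.coef_
      print("indices of large recovered coefficients:",
            np.flatnonzero(np.abs(model.coef_) > 0.5))
      print("max reconstruction error:", np.abs(recovered - signal).max().round(3))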

  18. Estimating demographic parameters using a combination of known-fate and open N-mixture models.

    PubMed

    Schmidt, Joshua H; Johnson, Devin S; Lindberg, Mark S; Adams, Layne G

    2015-10-01

    Accurate estimates of demographic parameters are required to infer appropriate ecological relationships and inform management actions. Known-fate data from marked individuals are commonly used to estimate survival rates, whereas N-mixture models use count data from unmarked individuals to estimate multiple demographic parameters. However, a joint approach combining the strengths of both analytical tools has not been developed. Here we develop an integrated model combining known-fate and open N-mixture models, allowing the estimation of detection probability, recruitment, and the joint estimation of survival. We demonstrate our approach through both simulations and an applied example using four years of known-fate and pack count data for wolves (Canis lupus). Simulation results indicated that the integrated model reliably recovered parameters with no evidence of bias, and survival estimates were more precise under the joint model. Results from the applied example indicated that the marked sample of wolves was biased toward individuals with higher apparent survival rates than the unmarked pack mates, suggesting that joint estimates may be more representative of the overall population. Our integrated model is a practical approach for reducing bias while increasing precision and the amount of information gained from mark-resight data sets. We provide implementations in both the BUGS language and an R package.

  19. An Approach to Biased Item Identification Using Latent Trait Measurement Theory.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    Because it is a true score model employing item parameters which are independent of the examined sample, item characteristic curve theory (ICC) offers several advantages over classical measurement theory. In this paper an approach to biased item identification using ICC theory is described and applied. The ICC theory approach is attractive in that…

  20. Order reduction for a model of marine bacteriophage evolution

    NASA Astrophysics Data System (ADS)

    Pagliarini, Silvia; Korobeinikov, Andrei

    2017-02-01

    A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, so reducing the order of a model is highly desirable when handling it. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, and constructing the so-called quasi-steady-state approximation is the usual first step in applying it. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of the evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative, not a quantitative, fit.
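
    In generic form, the time-scale separation step reads as follows (a textbook statement of the quasi-steady-state approximation, not the Beretta-Kuang system itself): for a slow variable x and a fast variable y,

      \dot{x} = f(x, y), \qquad \varepsilon\,\dot{y} = g(x, y), \qquad 0 < \varepsilon \ll 1,

      \text{QSSA: } g(x, y) = 0 \;\Rightarrow\; y = h(x), \qquad \dot{x} = f\bigl(x, h(x)\bigr).

    The reduced equation is expected to approximate the slow dynamics for small enough epsilon; the paper's point is that this straightforward substitution may capture only the qualitative, not the quantitative, behavior.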

  1. A Bayesian Hierarchical Modeling Approach to Predicting Flow in Ungauged Basins

    NASA Astrophysics Data System (ADS)

    Gronewold, A.; Alameddine, I.; Anderson, R. M.

    2009-12-01

    Recent innovative approaches to identifying and applying regression-based relationships between land use patterns (such as increasing impervious surface area and decreasing vegetative cover) and rainfall-runoff model parameters represent novel and promising improvements to predicting flow from ungauged basins. In particular, these approaches allow for predicting flows under uncertain and potentially variable future conditions due to rapid land cover changes, variable climate conditions, and other factors. Despite the broad range of literature on estimating rainfall-runoff model parameters, however, the absence of a robust set of modeling tools for identifying and quantifying uncertainties in (and correlation between) rainfall-runoff model parameters represents a significant gap in current hydrological modeling research. Here, we build upon a series of recent publications promoting novel Bayesian and probabilistic modeling strategies for quantifying rainfall-runoff model parameter estimation uncertainty. Our approach applies alternative measures of rainfall-runoff model parameter joint likelihood (including Nash-Sutcliffe efficiency, among others) to simulate samples from the joint parameter posterior probability density function. We then use these correlated samples as response variables in a Bayesian hierarchical model with land use coverage data as predictor variables in order to develop a robust land use-based tool for forecasting flow in ungauged basins while accounting for, and explicitly acknowledging, parameter estimation uncertainty. We apply this modeling strategy to low-relief coastal watersheds of Eastern North Carolina, an area representative of coastal resource waters throughout the world because of its sensitive embayments and because of the abundant (but currently threatened) natural resources it hosts. Consequently, this area is the subject of several ongoing studies and large-scale planning initiatives, including those conducted through the United States Environmental Protection Agency (USEPA) total maximum daily load (TMDL) program, as well as those addressing coastal population dynamics and sea level rise. Our approach has several advantages, including the propagation of parameter uncertainty through a nonparametric probability distribution which avoids common pitfalls of fitting parameters and model error structure to a predetermined parametric distribution function. In addition, by explicitly acknowledging correlation between model parameters (and reflecting those correlations in our predictive model) our model yields relatively efficient prediction intervals (unlike those in the current literature which are often unnecessarily large, and may lead to overly-conservative management actions). Finally, our model helps improve understanding of the rainfall-runoff process by identifying model parameters (and associated catchment attributes) which are most sensitive to current and future land use change patterns. Disclaimer: Although this work was reviewed by EPA and approved for publication, it may not necessarily reflect official Agency policy.

  2. Statistical Methods for Proteomic Biomarker Discovery based on Feature Extraction or Functional Modeling Approaches.

    PubMed

    Morris, Jeffrey S

    2012-01-01

    In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high-dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies and summarizes contributions that I, along with numerous collaborators, have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry (Cromwell) and 2D gel electrophoresis (Pinnacle) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods presented are applied to two specific proteomic technologies, MALDI-TOF and 2D gel electrophoresis, these methods and the other principles discussed in the paper apply much more broadly to other expression proteomics technologies.
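
    To make the feature-extraction idea concrete, here is a hedged Python sketch that detects and quantifies peaks in a synthetic mass spectrum; it mimics the role of a tool like Cromwell but is not its implementation.

      import numpy as np
      from scipy.signal import find_peaks

      rng = np.random.default_rng(1)
      mz = np.linspace(1000, 2000, 5000)
      spectrum = rng.normal(0, 0.05, mz.size)             # noise baseline
      # Three synthetic protein peaks at assumed m/z locations.
      for center, height in [(1200, 3.0), (1500, 5.0), (1740, 2.0)]:
          spectrum += height * np.exp(-0.5 * ((mz - center) / 2.0) ** 2)

      # Peaks must rise above the noise and be separated; their heights serve
      # as the quantifications carried forward into the statistical analysis.
      idx, _ = find_peaks(spectrum, height=1.0, distance=50)
      for i in idx:
          print(f"peak at m/z {mz[i]:.1f}, intensity {spectrum[i]:.2f}")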

  3. On a turbulent wall model to predict hemolysis numerically in medical devices

    NASA Astrophysics Data System (ADS)

    Lee, Seunghun; Chang, Minwook; Kang, Seongwon; Hur, Nahmkeon; Kim, Wonjung

    2017-11-01

    Analyzing the degradation of red blood cells is very important for medical devices involving blood flow. Blood shear stress has been recognized as the most dominant factor for hemolysis in medical devices. Compared to laminar flows, turbulent flows have higher shear stress values in the regions near the wall. When predicting hemolysis numerically, resolving this near-wall region can require a very fine mesh and large computational resources. The purpose of this study is therefore to develop a turbulent wall model that predicts hemolysis more efficiently. To decrease the numerical error of hemolysis prediction at coarse grid resolutions, we divided the computational domain into two regions and applied a different approach to each. In the near-wall region with a steep velocity gradient, an analytic approach based on a modeled velocity profile reduces the numerical error and permits a coarse grid; we adopt the Van Driest law as the model for the mean velocity profile. In the region far from the wall, a regular numerical discretization is applied. The proposed turbulent wall model is evaluated for several turbulent flows inside a cannula and centrifugal pumps. The results show that the proposed turbulent wall model improves the computational efficiency of hemolysis prediction significantly for engineering applications.
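
    The following sketch illustrates the two ingredients named above under assumed flow numbers: the Van Driest mixing-length law is integrated to recover a near-wall mean velocity profile, and the wall shear stress is fed into a commonly cited power-law hemolysis correlation (Giersiepen-type constants, used here purely for illustration; they are not from this paper).

      import numpy as np

      kappa, A_plus = 0.41, 26.0   # von Karman constant, Van Driest damping constant

      def u_plus(y_plus):
          # Integrate du+/dy+ = 2 / (1 + sqrt(1 + 4 l+^2)),
          # with mixing length l+ = kappa * y+ * (1 - exp(-y+/A+)).
          y = np.linspace(0.0, y_plus, 2000)
          l = kappa * y * (1.0 - np.exp(-y / A_plus))
          dudy = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * l ** 2))
          return np.trapz(dudy, y)

      # Wall shear stress from an assumed friction velocity and typical blood properties.
      rho, mu = 1060.0, 3.5e-3        # kg/m^3, Pa*s
      u_tau = 0.1                     # m/s, assumed friction velocity
      tau_wall = rho * u_tau ** 2     # Pa

      # Power-law hemolysis index HI(%) = C * tau^alpha * t^beta (illustrative constants).
      C, alpha, beta = 3.62e-5, 2.416, 0.785
      t_exp = 0.01                    # s, assumed exposure time
      HI = C * tau_wall ** alpha * t_exp ** beta

      print(f"u+ at y+=100: {u_plus(100.0):.2f}")
      print(f"tau_wall = {tau_wall:.1f} Pa, hemolysis index = {HI:.2e} %")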

  4. Bayesian GGE biplot models applied to maize multi-environments trials.

    PubMed

    de Oliveira, L A; da Silva, C P; Nuvunga, J J; da Silva, A Q; Balestre, M

    2016-06-17

    The additive main effects and multiplicative interaction (AMMI) and the genotype main effects and genotype x environment interaction (GGE) models stand out among the linear-bilinear models used in genotype x environment interaction studies. Despite the advantages of their use to describe genotype x environment (AMMI) or genotype and genotype x environment (GGE) interactions, these methods have known limitations that are inherent to fixed effects models, including difficulty in treating variance heterogeneity and missing data. Traditional biplots include no measure of uncertainty regarding the principal components. The present study aimed to apply the Bayesian approach to GGE biplot models and assess the implications for selecting stable and adapted genotypes. Our results demonstrated that the Bayesian approach applied to GGE models with non-informative priors was consistent with the traditional GGE biplot analysis, although the credible region incorporated into the biplot enabled distinguishing, based on probability, the performance of genotypes and their relationships with the environments in the biplot. Those regions also enabled the identification of groups of genotypes and environments with similar effects in terms of adaptability and stability. The relative position of genotypes and environments in biplots is highly affected by the experimental accuracy. Thus, incorporation of uncertainty in biplots is a key tool for breeders to make decisions regarding selection for stability and adaptability and the definition of mega-environments.
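
    For orientation, the sketch below shows the classical (non-Bayesian) GGE decomposition that the Bayesian version extends: environment-centred two-way data are factored by SVD, and the first two components give the biplot coordinates. The data are random placeholders.

      import numpy as np

      rng = np.random.default_rng(2)
      Y = rng.normal(5.0, 1.0, size=(8, 5))      # 8 genotypes x 5 environments

      G = Y - Y.mean(axis=0, keepdims=True)      # remove environment main effects (keeps G + GE)
      U, s, Vt = np.linalg.svd(G, full_matrices=False)

      f = 0.5                                    # symmetric singular-value partitioning
      geno_scores = U[:, :2] * s[:2] ** f        # biplot coordinates for genotypes
      env_scores = Vt[:2, :].T * s[:2] ** (1 - f)  # and for environments

      explained = (s[:2] ** 2).sum() / (s ** 2).sum()
      print(f"first two components explain {100 * explained:.1f}% of G+GE")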

  5. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence-building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
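
    A hedged sketch of one simple screening metric in this spirit: the fraction of stochastic realizations whose predictions conform, within a tolerance, to the validation data. The paper's five metrics and decision tree are richer; the ensemble below is synthetic.

      import numpy as np

      rng = np.random.default_rng(3)
      n_real, n_wells = 200, 10
      # Placeholder ensemble: each realization predicts heads at 10 wells.
      preds = rng.normal(50.0, 2.0, size=(n_real, n_wells))
      obs = rng.normal(50.0, 1.0, size=n_wells)        # new validation data
      tol = 2.0                                        # acceptance tolerance [m]

      within = np.abs(preds - obs) <= tol              # per-well conformity
      accepted = within.mean(axis=1) >= 0.8            # realization passes if >=80% of wells fit
      p = accepted.mean()

      # A decision-tree step might then ask whether enough realizations pass.
      print(f"{accepted.sum()} of {n_real} realizations accepted (p = {p:.2f})")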

  6. 0-6754 : review of tolling approaches for implementation within TxDOT's travel demand models : [project summary].

    DOT National Transportation Integrated Search

    2013-11-01

    The urban travel demand models developed and applied by the Transportation Planning and Programming Division of the Texas Department of Transportation (TxDOT-TPP) are daily three-step models without feedback. In other words, trip generation, trip dis...

  7. Development of Gridded Fields of Urban Canopy Parameters for Advanced Urban Meteorological and Air Quality Models

    EPA Science Inventory

    Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...

  8. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Interaction

    NASA Technical Reports Server (NTRS)

    DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
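
    For reference, the VCCT estimate of the mode I energy release rate on a 2D four-node mesh reduces to a one-line formula, G_I = F_y * dv / (2 * da * b), using the nodal force at the crack tip and the relative opening displacement one element behind it. The sketch below applies it with placeholder numbers that are not from the paper.

      # Illustrative one-step VCCT computation (placeholder values).
      def vcct_mode_I(F_y, dv, da, b):
          """F_y: crack-tip nodal force [N]; dv: opening displacement behind tip [m];
          da: element length along the crack [m]; b: specimen width [m]."""
          return (F_y * dv) / (2.0 * da * b)

      G_I = vcct_mode_I(F_y=120.0, dv=2.0e-5, da=1.0e-3, b=0.025)
      G_Ic = 260.0  # assumed critical energy release rate [J/m^2]
      print(f"G_I = {G_I:.1f} J/m^2 -> {'propagate' if G_I >= G_Ic else 'arrest'}")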

  9. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    NASA Technical Reports Server (NTRS)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  10. Application of Artificial Intelligence for Bridge Deterioration Model.

    PubMed

    Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, but traditional methods and approaches to this problem require manual intervention. In this paper, an artificial-intelligence-based approach is presented to self-update the parameters of a bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information; this distribution is then used to update the model parameters. The AI-based approach is applied to updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory way to handle parameter updating without manual intervention.

  11. Application of Artificial Intelligence for Bridge Deterioration Model

    PubMed Central

    Chen, Zhang; Wu, Yangyang; Sun, Lijun

    2015-01-01

    The deterministic bridge deterioration model updating problem is well established in bridge management, but traditional methods and approaches to this problem require manual intervention. In this paper, an artificial-intelligence-based approach is presented to self-update the parameters of a bridge deterioration model. When new information and data are collected, a posterior distribution is constructed, according to Bayes' theorem, to describe the integrated result of the historical information and the newly gained information; this distribution is then used to update the model parameters. The AI-based approach is applied to updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory way to handle parameter updating without manual intervention. PMID:26601121
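
    A minimal sketch of the underlying Bayesian update for a single deterioration-rate parameter, assuming a conjugate normal-normal model; all numbers are hypothetical, not the Shanghai inspection values.

      import numpy as np

      mu0, var0 = 0.05, 0.01 ** 2       # prior: mean deterioration rate and variance
      sigma2 = 0.02 ** 2                # assumed measurement variance

      new_data = np.array([0.058, 0.062, 0.055, 0.060])   # new yearly rate estimates
      n = new_data.size

      # Conjugate normal-normal update: precision-weighted combination of
      # historical information (prior) and newly gained information (data).
      var_post = 1.0 / (1.0 / var0 + n / sigma2)
      mu_post = var_post * (mu0 / var0 + new_data.sum() / sigma2)

      print(f"posterior rate: {mu_post:.4f} +/- {np.sqrt(var_post):.4f}")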

  12. Bayesian Total-Evidence Dating Reveals the Recent Crown Radiation of Penguins

    PubMed Central

    Heath, Tracy A.; Ksepka, Daniel T.; Stadler, Tanja; Welch, David; Drummond, Alexei J.

    2017-01-01

    The total-evidence approach to divergence time dating uses molecular and morphological data from extant and fossil species to infer phylogenetic relationships, species divergence times, and macroevolutionary parameters in a single coherent framework. Current model-based implementations of this approach lack an appropriate model for the tree describing the diversification and fossilization process and can produce estimates that lead to erroneous conclusions. We address this shortcoming by providing a total-evidence method implemented in a Bayesian framework. This approach uses a mechanistic tree prior to describe the underlying diversification process that generated the tree of extant and fossil taxa. Previous attempts to apply the total-evidence approach have used tree priors that do not account for the possibility that fossil samples may be direct ancestors of other samples, that is, ancestors of fossil or extant species or of clades. The fossilized birth–death (FBD) process explicitly models the diversification, fossilization, and sampling processes and naturally allows for sampled ancestors. This model was recently applied to estimate divergence times based on molecular data and fossil occurrence dates. We incorporate the FBD model and a model of morphological trait evolution into a Bayesian total-evidence approach to dating species phylogenies. We apply this method to extant and fossil penguins and show that the modern penguins radiated much more recently than has been previously estimated, with the basal divergence in the crown clade occurring at ~12.7 Ma and most splits leading to extant species occurring in the last 2 myr. Our results demonstrate that including stem-fossil diversity can greatly improve the estimates of the divergence times of crown taxa. The method is available in BEAST2 (version 2.4) software www.beast2.org with packages SA (version at least 1.1.4) and morph-models (version at least 1.0.4) installed. [Birth–death process; calibration; divergence times; MCMC; phylogenetics.] PMID:28173531

  13. Applying the age-shift approach to model responses to midrotation fertilization

    Treesearch

    Colleen A. Carlson; Thomas R. Fox; H. Lee Allen; Timothy J. Albaugh

    2010-01-01

    Growth and yield models used to evaluate midrotation fertilization economics require adjustments to account for the typically observed responses. This study investigated the use of age-shift models to predict midrotation fertilizer responses. Age-shift prediction models were constructed from a regional study consisting of 43 installations of a nitrogen (N) by...

  14. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  15. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model-predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
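
    The residual-monitoring loop can be sketched as below; the "model" here is a stand-in affine map rather than the paper's piecewise linear state-space design, and the fault, noise level and threshold are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(4)

      def model_predict(power_setting):
          # Stand-in for interpolating trim points + linear state-space dynamics:
          # a simple affine map from power setting to predicted exhaust gas temperature [K].
          return 600.0 + 3.5 * power_setting

      threshold = 12.0    # residual alarm limit [K], e.g. a few sigma of sensor noise
      for t, power in enumerate(np.linspace(40, 80, 20)):
          sensed = model_predict(power) + rng.normal(0, 3.0)
          if t > 12:
              sensed += 25.0                    # injected (seeded) fault
          residual = sensed - model_predict(power)
          if abs(residual) > threshold:
              print(f"t={t}: anomaly flagged, residual = {residual:.1f} K")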

  16. The Saale-Project -A multidisciplinary approach towards sustainable integrative catchment management -

    NASA Astrophysics Data System (ADS)

    Bongartz, K.; Flügel, W. A.

    2003-04-01

    In the joint research project “Development of an integrated methodology for the sustainable management of river basins: the Saale River Basin example”, coordinated by the Centre of Environmental Research (UFZ), concepts and tools for the integrated management of large river basins are developed and applied to the Saale river basin. The ultimate objective of the project is to contribute to the holistic assessment and benchmarking approaches in water resource planning, as required by the European Water Framework Directive. The study presented here deals (1) with the development of a river basin information and modelling system and (2) with the refinement of a regionalisation approach adapted for integrated basin modelling. The approach combines a user-friendly basin disaggregation method, which preserves the catchment’s physiographic heterogeneity, with a process-oriented hydrological basin assessment for scale-bridging integrated modelling. The well-tested regional distribution concept of Response Units (RUs) will be enhanced by landscape metrics and decision support tools for objective, scale-independent and problem-oriented RU delineation, providing the spatial modelling entities for process-oriented and distributed simulation of vertical and lateral hydrological transport processes. On the basis of these RUs, suitable hydrological modelling approaches will be further developed, with particular attention to more detailed simulation of the lateral surface and subsurface flows as well as the channel flow. This methodical enhancement of the well-recognised RU concept will be applied to the Saale river basin (Ac: 23,179 km2) and validated by a nested catchment approach, which allows multi-response validation and estimation of the uncertainties of the modelling results. Integrated modelling of such a complex basin, strongly influenced by manifold human activities (reservoirs, agriculture, urban areas and industry), can only be achieved by coupling the various modelling approaches within a well-defined model framework system. The latter is interactively linked with a sophisticated geo-relational database (DB) serving all research teams involved in the project. This interactive linkage is a core element of an object-oriented, internet-based modelling framework system (MFS) for building interdisciplinary modelling applications and offering different analysis and visualisation tools.

  17. There Is No Business Model for Open Educational Resources: A Business Model Approach

    ERIC Educational Resources Information Center

    de Langen, Frank

    2011-01-01

    The economic proverb "There is no such thing as a free lunch" also applies to open educational resources (OER). In recent years, several authors have used revenue models and business models to analyse the different sources of possible funding for OER. In this article the business models of Osterwalder and Chesbrough are combined…

  18. Hybrid grammar-based approach to nonlinear dynamical system identification from biological time series

    NASA Astrophysics Data System (ADS)

    McKinney, B. A.; Crowe, J. E., Jr.; Voss, H. U.; Crooke, P. S.; Barney, N.; Moore, J. H.

    2006-02-01

    We introduce a grammar-based hybrid approach to reverse engineering nonlinear ordinary differential equation models from observed time series. This hybrid approach combines a genetic algorithm to search the space of model architectures with a Kalman filter to estimate the model parameters. Domain-specific knowledge is used in a context-free grammar to restrict the search space for the functional form of the target model. We find that the hybrid approach outperforms a pure evolutionary algorithm method, and we observe features in the evolution of the dynamical models that correspond with the emergence of favorable model components. We apply the hybrid method to both artificially generated time series and experimentally observed protein levels from subjects who received the smallpox vaccine. From the observed data, we infer a cytokine protein interaction network for an individual’s response to the smallpox vaccine.
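
    A greatly simplified sketch of the hybrid idea: an evolutionary search over which candidate terms enter an ODE right-hand side, with inner parameter estimation by least squares on finite-difference derivatives. The paper's context-free grammar and Kalman filter are replaced here by simpler stand-ins (a fixed term library and a linear fit), so this only conveys the structure of the search.

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0, 10, 400)
      x = 2.0 * np.exp(-0.5 * t) + 0.01 * rng.normal(size=t.size)  # data from dx/dt = -0.5 x

      dxdt = np.gradient(x, t)                 # finite-difference derivative estimates
      terms = {"1": np.ones_like(x), "x": x, "x^2": x ** 2, "sin(t)": np.sin(t)}
      names = list(terms)

      def fitness(mask):
          # Inner estimation: least-squares coefficients for the selected terms.
          cols = [terms[n] for n, m in zip(names, mask) if m]
          if not cols:
              return np.inf
          A = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(A, dxdt, rcond=None)
          sse = np.sum((A @ coef - dxdt) ** 2)
          return sse + 0.05 * sum(mask)        # parsimony pressure on model size

      pop = [tuple(rng.integers(0, 2, len(names))) for _ in range(30)]
      for _ in range(40):                      # tiny generational loop
          pop.sort(key=fitness)
          parents = pop[:10]
          children = []
          for _ in range(20):
              a, b = rng.choice(len(parents), 2, replace=False)
              child = [parents[a][i] if rng.uniform() < 0.5 else parents[b][i]
                       for i in range(len(names))]
              i = rng.integers(len(names))
              child[i] ^= rng.integers(0, 2)   # bit-flip mutation
              children.append(tuple(child))
          pop = parents + children

      best = min(pop, key=fitness)
      print("selected terms:", [n for n, m in zip(names, best) if m])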

  19. Improving Distributed Diagnosis Through Structural Model Decomposition

    NASA Technical Reports Server (NTRS)

    Bregon, Anibal; Daigle, Matthew John; Roychoudhury, Indranil; Biswas, Gautam; Koutsoukos, Xenofon; Pulido, Belarmino

    2011-01-01

    Complex engineering systems require efficient fault diagnosis methodologies, but centralized approaches do not scale well, and this motivates the development of distributed solutions. This work presents an event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, by using the structural model decomposition capabilities provided by Possible Conflicts. We develop a distributed diagnosis algorithm that uses residuals computed by extending Possible Conflicts to build local event-based diagnosers based on global diagnosability analysis. The proposed approach is applied to a multitank system, and results demonstrate an improvement in the design of local diagnosers. Since local diagnosers use only a subset of the residuals, and use subsystem models to compute residuals (instead of the global system model), the local diagnosers are more efficient than previously developed distributed approaches.

  20. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.

    PubMed

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-12-04

    With the development of the vehicle industry, stability control has become more and more important, and techniques for evaluating vehicle stability are in high demand. A common method is to measure vehicle stability parameters by fusing data from GPS and INS sensors; a Kalman filter is typically used for such multi-sensor fusion, although it requires prior model parameters to be identified. In this paper, a robust, intelligent and precise method for measuring vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment were conducted to verify the advantages of the approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
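
    A toy one-dimensional Kalman filter fusing two sensors of different accuracy illustrates the generic predict/update structure (the paper's filter is two-stage and built on a four-wheel dynamic model); all noise levels below are assumed.

      import numpy as np

      rng = np.random.default_rng(6)
      dt, n = 0.1, 100
      truth = np.cumsum(np.full(n, 0.5 * dt))          # constant-rate yaw angle [rad]

      x, P = 0.0, 1.0                                  # state estimate and variance
      Q = 1e-4                                         # process noise
      R_ins, R_gps = 0.02 ** 2, 0.10 ** 2              # sensor noise variances

      est = []
      for k in range(n):
          x, P = x + 0.5 * dt, P + Q                   # predict with the rate model
          # Sequential measurement updates: accurate INS, then coarser GPS.
          for z, R in ((truth[k] + rng.normal(0, 0.02), R_ins),
                       (truth[k] + rng.normal(0, 0.10), R_gps)):
              K = P / (P + R)                          # Kalman gain
              x, P = x + K * (z - x), (1 - K) * P
          est.append(x)

      print(f"rmse = {np.sqrt(np.mean((np.array(est) - truth) ** 2)):.4f} rad")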

  1. Integration of system identification and robust controller designs for flexible structures in space

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Lew, Jiann-Shiun

    1990-01-01

    An approach is developed for using experimental data to identify a reduced-order model and its model error for robust controller design. Three steps are involved. First, an approximately balanced model is identified using the Eigensystem Realization Algorithm (ERA). Second, the model error is calculated and described in the frequency domain in terms of the H(infinity) norm. Third, a pole placement technique in combination with an H(infinity) control method is applied to design a controller for the system considered. A set of experimental data from an existing setup, namely the Mini-Mast system, is used to illustrate and verify the approach.
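
    A compact sketch of the ERA step: Hankel matrices are built from impulse-response (Markov) parameters, factored by SVD, and a reduced-order (A, B, C) is read off. The impulse response below is synthetic, not Mini-Mast data.

      import numpy as np

      # Synthetic SISO impulse response from a lightly damped mode.
      a_true = 0.9 * np.exp(1j * 0.5)
      h = np.real(np.array([a_true ** k for k in range(60)]))

      m = 25                                    # Hankel block size
      H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])
      H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])

      U, s, Vt = np.linalg.svd(H0)
      r = 2                                     # retained model order
      Ur, Sr, Vr = U[:, :r], np.diag(np.sqrt(s[:r])), Vt[:r, :].T
      Si = np.linalg.inv(Sr)

      A = Si @ Ur.T @ H1 @ Vr @ Si              # identified state matrix
      B = (Sr @ Vr.T)[:, :1]                    # first column of the controllability factor
      C = (Ur @ Sr)[:1, :]                      # first row of the observability factor

      print("identified poles:", np.linalg.eigvals(A))   # expect 0.9*exp(+/-0.5i)
      print("first Markov parameter check: CB =", (C @ B)[0, 0], "vs h[0] =", h[0])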

  2. Monitoring Distributed Systems: A Relational Approach.

    DTIC Science & Technology

    1982-12-01

    relationship, and time. The first two are modeled directly in the relational model. The third is perhaps the most fundamental, for without the system ...of another, newly created file. The approach adopted here applies to object-based operating systems, and will support capability addressing at the... in certainties. -- Francis Bacon, in The Advancement of Learning. The thesis of this research is that monitoring distributed systems is fundamentally a

  3. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer

    PubMed Central

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results. PMID:28467468

  4. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    PubMed

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results.
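
    A minimal example of model-based clustering in this spirit: a two-component Gaussian mixture fitted to synthetic CPT-like features, with BIC as the model-based criterion for choosing the number of soil classes. This uses scikit-learn's generic GMM, not the authors' specific model.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(7)
      # Placeholder two-class data: (cone resistance [MPa], friction ratio [%]).
      sand = rng.multivariate_normal([10.0, 0.5], [[4, 0], [0, 0.02]], 300)
      clay = rng.multivariate_normal([2.0, 3.0], [[0.5, 0], [0, 0.5]], 300)
      X = np.log10(np.clip(np.vstack([sand, clay]), 1e-3, None))  # log features, common for CPT

      gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
      labels = gmm.fit_predict(X)
      print("cluster sizes:", np.bincount(labels))
      print("BIC:", gmm.bic(X))   # compare across n_components to pick the class count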

  5. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function; then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Because local polynomial estimation is non-parametric, it is unnecessary to know the form of the heteroscedastic function, so estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
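
    The improved two-stage scheme can be sketched in a few lines for one covariate: squared OLS residuals are smoothed by a kernel-weighted local linear fit to estimate sigma^2(x), and the coefficients are then re-estimated by weighted (generalized) least squares. Bandwidth and data are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      n = 400
      x = rng.uniform(0, 2, n)
      sigma = 0.2 + 0.5 * x ** 2                      # true heteroscedastic scale
      y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

      X = np.column_stack([np.ones(n), x])
      beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
      r2 = (y - X @ beta_ols) ** 2                    # squared residuals

      def local_poly_var(x0, h=0.25):
          # Local linear fit of squared residuals around x0 with a Gaussian kernel.
          w = np.exp(-0.5 * ((x - x0) / h) ** 2)
          Z = np.column_stack([np.ones(n), x - x0])
          WZ = Z * w[:, None]
          coef = np.linalg.solve(Z.T @ WZ, WZ.T @ r2)
          return max(coef[0], 1e-6)                   # sigma^2 estimate at x0

      var_hat = np.array([local_poly_var(xi) for xi in x])
      W = 1.0 / var_hat                               # GLS weights
      XtW = X.T * W
      beta_gls = np.linalg.solve(XtW @ X, XtW @ y)
      print("OLS:", beta_ols, " GLS:", beta_gls)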

  6. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete output feedback design approach applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  7. A semiparametric graphical modelling approach for large-scale equity selection.

    PubMed

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.

  8. Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Sun, Dihua; Liu, Weining

    2016-11-01

    The lattice hydrodynamic model is a typical continuum traffic flow model, which properly describes the jamming transition of traffic flow. Previous studies of the lattice hydrodynamic model have shown that control methods have the potential to improve traffic conditions. In this paper, a new control method is applied to the lattice hydrodynamic model from a transportation cyber-physical system perspective, in which only one lattice site needs to be controlled. Simulation verifies the feasibility and validity of the method, which can ensure the efficient and smooth operation of traffic flow.

  9. Model-Based Analysis of Biopharmaceutic Experiments To Improve Mechanistic Oral Absorption Modeling: An Integrated in Vitro in Vivo Extrapolation Perspective Using Ketoconazole as a Model Drug.

    PubMed

    Pathak, Shriram M; Ruff, Aaron; Kostewicz, Edmund S; Patel, Nikunjkumar; Turner, David B; Jamei, Masoud

    2017-12-04

    Mechanistic modeling of in vitro data generated from metabolic enzyme systems (viz., liver microsomes, hepatocytes, rCYP enzymes, etc.) facilitates in vitro-in vivo extrapolation (IVIV_E) of metabolic clearance which plays a key role in the successful prediction of clearance in vivo within physiologically-based pharmacokinetic (PBPK) modeling. A similar concept can be applied to solubility and dissolution experiments whereby mechanistic modeling can be used to estimate intrinsic parameters required for mechanistic oral absorption simulation in vivo. However, this approach has not widely been applied within an integrated workflow. We present a stepwise modeling approach where relevant biopharmaceutics parameters for ketoconazole (KTZ) are determined and/or confirmed from the modeling of in vitro experiments before being directly used within a PBPK model. Modeling was applied to various in vitro experiments, namely: (a) aqueous solubility profiles to determine intrinsic solubility, salt limiting solubility factors and to verify pKa; (b) biorelevant solubility measurements to estimate bile-micelle partition coefficients; (c) fasted state simulated gastric fluid (FaSSGF) dissolution for formulation disintegration profiling; and (d) transfer experiments to estimate supersaturation and precipitation parameters. These parameters were then used within a PBPK model to predict the dissolved and total (i.e., including the precipitated fraction) concentrations of KTZ in the duodenum of a virtual population and compared against observed clinical data. The developed model well characterized the intraluminal dissolution, supersaturation, and precipitation behavior of KTZ. The mean simulated AUC0-t of the total and dissolved concentrations of KTZ were comparable to (within 2-fold of) the corresponding observed profile. Moreover, the developed PBPK model of KTZ successfully described the impact of supersaturation and precipitation on the systemic plasma concentration profiles of KTZ for 200, 300, and 400 mg doses. These results demonstrate that IVIV_E applied to biopharmaceutical experiments can be used to understand and build confidence in the quality of the input parameters and mechanistic models used for mechanistic oral absorption simulations in vivo, thereby improving the prediction performance of PBPK models. Moreover, this approach can inform the selection and design of in vitro experiments, potentially eliminating redundant experiments and thus helping to reduce the cost and time of drug product development.

  10. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform

    PubMed Central

    Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features. PMID:27304979

  11. Modeling the Pulse Signal by Wave-Shape Function and Analyzing by Synchrosqueezing Transform.

    PubMed

    Wu, Hau-Tieng; Wu, Han-Kuei; Wang, Chun-Li; Yang, Yueh-Lung; Wu, Wen-Hsiang; Tsai, Tung-Hu; Chang, Hen-Hong

    2016-01-01

    We apply the recently developed adaptive non-harmonic model based on the wave-shape function, as well as the time-frequency analysis tool called synchrosqueezing transform (SST) to model and analyze oscillatory physiological signals. To demonstrate how the model and algorithm work, we apply them to study the pulse wave signal. By extracting features called the spectral pulse signature, and based on functional regression, we characterize the hemodynamics from the radial pulse wave signals recorded by the sphygmomanometer. Analysis results suggest the potential of the proposed signal processing approach to extract health-related hemodynamics features.

  12. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material-sparing near-infrared (NIR) modeling approach for powder blend monitoring. In this new approach, gram-scale powder mixtures are subjected to compression loads using an Instron universal testing system to simulate the effect of scale. Models prepared by the new method development approach (small-scale method) and by traditional method development (blender-scale method) were compared by simultaneously monitoring a 1 kg batch-size blend run. Both models demonstrated similar performance. The small-scale strategy significantly reduces the total resources expended to develop NIR calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.
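
    For context, a typical quantitative NIR calibration of this kind is a latent-variable regression from spectra to component content; the hedged sketch below uses partial least squares on synthetic spectra and is not the paper's compression-based protocol.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(9)
      wavelengths = np.linspace(1100, 2500, 200)
      api = rng.uniform(0.05, 0.30, 60)                       # true API fractions

      # Each synthetic spectrum = API band + excipient band + noise.
      band = np.exp(-0.5 * ((wavelengths - 1700) / 40) ** 2)
      excip = np.exp(-0.5 * ((wavelengths - 2100) / 60) ** 2)
      X = (api[:, None] * band + (1 - api)[:, None] * excip
           + rng.normal(0, 0.005, (60, wavelengths.size)))

      pls = PLSRegression(n_components=3)
      r2 = cross_val_score(pls, X, api, cv=5, scoring="r2")
      print(f"cross-validated R^2: {r2.mean():.3f}")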

  13. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends.

    PubMed

    Smith, Andrew B; Lloyd, Graeme T; McGowan, Alistair J

    2012-11-07

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling: a subsampling approach that standardizes for differences in fossil collection intensity, and a rock-area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches.
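
    The subsampling side of the comparison can be sketched simply: draw the same quota of fossil occurrences from each time interval and count the genera recovered, repeating to average out the draw. The occurrence lists below are randomly generated placeholders.

      import numpy as np

      rng = np.random.default_rng(10)
      # Hypothetical occurrence data: interval -> genus ID per occurrence.
      intervals = {f"bin{i}": rng.integers(0, 50 + 30 * i, size=rng.integers(200, 800))
                   for i in range(5)}

      quota, trials = 150, 200       # equal sampling intensity across intervals
      for name, occ in intervals.items():
          richness = [np.unique(rng.choice(occ, size=quota, replace=False)).size
                      for _ in range(trials)]
          print(f"{name}: subsampled richness = {np.mean(richness):.1f}")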

  14. Model selection and assessment for multi­-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.

  15. Vulnerable Populations in Hospital and Health Care Emergency Preparedness Planning: A Comprehensive Framework for Inclusion.

    PubMed

    Kreisberg, Debra; Thomas, Deborah S K; Valley, Morgan; Newell, Shannon; Janes, Enessa; Little, Charles

    2016-04-01

    As attention to emergency preparedness becomes a critical element of health care facility operations planning, efforts to recognize and integrate the needs of vulnerable populations in a comprehensive manner have lagged. This not only results in decreased levels of equitable service, but also affects the functioning of the health care system in disasters. While this report emphasizes the United States context, the concepts and approaches apply beyond this setting. This report: (1) describes a conceptual framework that provides a model for the inclusion of vulnerable populations into integrated health care and public health preparedness; and (2) applies this model to a pilot study. The framework is derived from literature, hospital regulatory policy, and health care standards, laying out the communication and relational interfaces that must occur at the systems, organizational, and community levels for a successful multi-level health care systems response that is inclusive of diverse populations explicitly. The pilot study illustrates the application of key elements of the framework, using a four-pronged approach that incorporates both quantitative and qualitative methods for deriving information that can inform hospital and health facility preparedness planning. The conceptual framework and model, applied to a pilot project, guide expanded work that ultimately can result in methodologically robust approaches to comprehensively incorporating vulnerable populations into the fabric of hospital disaster preparedness at levels from local to national, thus supporting best practices for a community resilience approach to disaster preparedness.

  16. Multiple flood vulnerability assessment approach based on fuzzy comprehensive evaluation method and coordinated development degree model.

    PubMed

    Yang, Weichao; Xu, Kui; Lian, Jijian; Bin, Lingling; Ma, Chao

    2018-05-01

    Flooding is a serious challenge that increasingly affects residents as well as policymakers, and flood vulnerability assessment is becoming increasingly relevant worldwide. The purpose of this study is to develop an approach that reveals the relationship between exposure, sensitivity and adaptive capacity for better flood vulnerability assessment, based on the fuzzy comprehensive evaluation method (FCEM) and the coordinated development degree model (CDDM). The approach is organized into three parts: establishment of the index system; assessment of exposure, sensitivity and adaptive capacity; and multiple flood vulnerability assessment. A hydrodynamic model and statistical data are employed to establish the index system; FCEM is used to evaluate exposure, sensitivity and adaptive capacity; and CDDM is applied to express the relationship among the three components of vulnerability. Six multiple flood vulnerability types and four levels are proposed to assess flood vulnerability from multiple perspectives. The approach is then applied to assess the spatial pattern of flood vulnerability in the eastern area of Hainan, China. Based on the results of the multiple flood vulnerability assessment, a decision-making process for the rational allocation of limited resources is proposed and applied to the study area. The study shows that multiple flood vulnerability assessment can evaluate vulnerability more completely and help decision makers obtain more comprehensive information for making decisions. In summary, this study provides a new way for flood vulnerability assessment and disaster prevention decision-making. Copyright © 2018 Elsevier Ltd. All rights reserved.
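
    A minimal sketch of the FCEM step only (the CDDM coupling is omitted): indicator values are mapped to membership degrees in qualitative grades via triangular membership functions and aggregated with indicator weights. The indicators, weights and grade centres are invented placeholders, not the paper's index system.

      import numpy as np

      grades = np.array([0.2, 0.5, 0.8])          # centres of low/medium/high grades

      def membership(v, width=0.3):
          # Triangular membership of a 0-1 scaled value v in each grade.
          m = np.maximum(0.0, 1.0 - np.abs(v - grades) / width)
          return m / m.sum() if m.sum() > 0 else np.full_like(grades, 1 / 3)

      indicators = np.array([0.65, 0.40, 0.85, 0.30])   # e.g. scaled exposure metrics
      weights = np.array([0.4, 0.2, 0.3, 0.1])          # expert weights, sum to 1

      R = np.vstack([membership(v) for v in indicators])  # fuzzy relation matrix
      B = weights @ R                                      # comprehensive evaluation vector
      print("grade memberships (low, medium, high):", np.round(B, 3))
      print("assessed level:", ["low", "medium", "high"][int(np.argmax(B))])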

  17. Estimating impacts of plantation forestry on plant biodiversity in southern Chile-a spatially explicit modelling approach.

    PubMed

    Braun, Andreas Christian; Koch, Barbara

    2016-10-01

    Monitoring the impacts of land-use practices is of particular importance with regard to biodiversity hotspots in developing countries. Here, conserving the high level of unique biodiversity is challenged by limited possibilities for data collection on site. Especially for such scenarios, assisting biodiversity assessments by remote sensing has proven useful. Remote sensing techniques can be applied to interpolate between biodiversity assessments taken in situ. Through this approach, estimates of biodiversity for entire landscapes can be produced, relating land-use intensity to biodiversity conditions. Such maps are a valuable basis for developing biodiversity conservation plans. Several approaches have been published so far to interpolate local biodiversity assessments in remote sensing data. In the following, a new approach is proposed. Instead of inferring biodiversity using environmental variables or the variability of spectral values, a hypothesis-based approach is applied. Empirical knowledge about biodiversity in relation to land-use is formalized and applied as ascription rules for image data. The method is exemplified for a large study site (over 67,000 km(2)) in central Chile, where forest industry heavily impacts plant diversity. The proposed approach yields a coefficient of correlation of 0.73 and produces a convincing estimate of regional biodiversity. The framework is broad enough to be applied to other study sites.

  18. What could they have been thinking? How sociotechnical system design influences cognition: a case study of the Stockwell shooting.

    PubMed

    Jenkins, Daniel P; Salmon, Paul M; Stanton, Neville A; Walker, Guy H; Rafferty, Laura

    2011-02-01

    Understanding why an individual acted in a certain way is of fundamental importance to the human factors community, especially when the choice of action results in an undesirable outcome. This challenge is typically tackled by applying retrospective interview techniques to generate models of what happened, recording deviations from a 'correct procedure'. While such approaches may have great utility in tightly constrained procedural environments, they are less applicable in complex sociotechnical systems that require individuals to modify procedures in real time to respond to a changing environment. For complex sociotechnical systems, a formative approach is required that maps the information available to the individual and considers its impact on performance and action. A context-specific, activity-independent, constraint-based model forms the basis of this approach. To illustrate, an example of the Stockwell shooting is used, where an innocent man, mistaken for a suicide bomber, was shot dead. Transferable findings are then presented. STATEMENT OF RELEVANCE: This paper presents a new approach that can be applied proactively to consider how sociotechnical system design, and the information available to an individual, can affect their performance. The approach is proposed to be complementary to the existing tools in the mental models phase of the cognitive work analysis framework.

  19. Simulation Assisted Risk Assessment Applied to Launch Vehicle Conceptual Design

    NASA Technical Reports Server (NTRS)

    Mathias, Donovan L.; Go, Susie; Gee, Ken; Lawrence, Scott

    2008-01-01

    A simulation-based risk assessment approach is presented and is applied to the analysis of abort during the ascent phase of a space exploration mission. The approach utilizes groupings of launch vehicle failures, referred to as failure bins, which are mapped to corresponding failure environments. Physical models are used to characterize the failure environments in terms of the risk due to blast overpressure, resulting debris field, and the thermal radiation due to a fireball. The resulting risk to the crew is dynamically modeled by combining the likelihood of each failure, the severity of the failure environments as a function of initiator and time of the failure, the robustness of the crew module, and the warning time available due to early detection. The approach is shown to support the launch vehicle design process by characterizing the risk drivers and identifying regions where failure detection would significantly reduce the risk to the crew.

  20. Applying the Tuple Space-Based Approach to the Simulation of the Caspases, an Essential Signalling Pathway.

    PubMed

    Cárdenas-García, Maura; González-Pérez, Pedro Pablo

    2013-03-01

    Apoptotic cell death plays a crucial role in development and homeostasis. This process is driven by mitochondrial permeabilization and activation of caspases. In this paper we adopt a tuple spaces-based modelling and simulation approach and show how it can be applied to the simulation of this intracellular signalling pathway. Specifically, we are working to explore and understand the complex interaction patterns of the apoptotic caspases and the role of the mitochondria. As a first approximation, using the tuple spaces-based in silico approach, we model and simulate both the extrinsic and intrinsic apoptotic signalling pathways and the interactions between them. During apoptosis, mitochondrial proteins released from the mitochondria to the cytosol are decisively involved in the process. If the decision is to die, there is normally no return from this point, although cancer cells offer resistance to mitochondrial induction.

  1. Applying the tuple space-based approach to the simulation of the caspases, an essential signalling pathway.

    PubMed

    Cárdenas-García, Maura; González-Pérez, Pedro Pablo

    2013-04-11

    Apoptotic cell death plays a crucial role in development and homeostasis. This process is driven by mitochondrial permeabilization and activation of caspases. In this paper we adopt a tuple spaces-based modelling and simulation approach and show how it can be applied to the simulation of this intracellular signalling pathway. Specifically, we are working to explore and understand the complex interaction patterns of the apoptotic caspases and the role of the mitochondria. As a first approximation, using the tuple spaces-based in silico approach, we model and simulate both the extrinsic and intrinsic apoptotic signalling pathways and the interactions between them. During apoptosis, mitochondrial proteins released from the mitochondria to the cytosol are decisively involved in the process. If the decision is to die, there is normally no return from this point, although cancer cells offer resistance to mitochondrial induction.

  2. Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification

    NASA Astrophysics Data System (ADS)

    Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona

    2016-08-01

    The key issue in collecting epidemic disease data for our analysis purposes is that it is a labour-intensive, time-consuming and expensive process, resulting in sparse sample data from which prediction models must be developed. To address this sparse data issue, we present novel Incremental Transductive methods that circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning, including Bayesian models, for labelling data. The results show that, using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
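
    A hedged sketch of an incremental, confidence-thresholded self-training loop of the kind described (a "Liberal" variant would lower the acceptance threshold, a "Strict" one would raise it); the classifier and data are stand-ins, not the authors' Schistosomiasis vector features.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.naive_bayes import GaussianNB

      X, y = make_classification(n_samples=500, n_features=8, random_state=0)
      labeled = np.zeros(len(y), dtype=bool)
      labeled[:30] = True                         # sparse initial field-survey labels
      y_work = y.copy()

      clf, threshold = GaussianNB(), 0.95
      for it in range(10):
          clf.fit(X[labeled], y_work[labeled])
          proba = clf.predict_proba(X[~labeled])
          conf = proba.max(axis=1)
          take = conf >= threshold                # accept only confident pseudo-labels
          if not take.any():
              break
          idx = np.flatnonzero(~labeled)[take]
          y_work[idx] = proba.argmax(axis=1)[take]
          labeled[idx] = True                     # incrementally grow the labeled set

      acc = (y_work[labeled] == y[labeled]).mean()
      print(f"labeled {labeled.sum()} instances; pseudo-label accuracy {acc:.3f}")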

  3. Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models

    NASA Astrophysics Data System (ADS)

    Ardani, S.; Kaihatu, J. M.

    2012-12-01

    Numerical models represent deterministic approaches to the relevant physical processes in the nearshore. The complexity of the model physics and the uncertainty in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by randomly sampling from the input probability distribution functions and running the model as required until convergence to consistent results is achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of the model parameters relevant to the Duck94 experiment will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using prior information for the input data: the variation of the uncertain parameters will be decreased and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC

  4. Multilevel SEM Strategies for Evaluating Mediation in Three-Level Data

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.

    2011-01-01

    Strategies for modeling mediation effects in multilevel data have proliferated over the past decade, keeping pace with the demands of applied research. Approaches for testing mediation hypotheses with 2-level clustered data were first proposed using multilevel modeling (MLM) and subsequently using multilevel structural equation modeling (MSEM) to…

  5. ESTIMATION OF EMISSION ADJUSTMENTS FROM THE APPLICATION OF FOUR-DIMENSIONAL DATA ASSIMILATION TO PHOTOCHEMICAL AIR QUALITY MODELING. (R826372)

    EPA Science Inventory

    Four-dimensional data assimilation applied to photochemical air quality modeling is used to suggest adjustments to the emissions inventory of the Atlanta, Georgia metropolitan area. In this approach, a three-dimensional air quality model, coupled with direct sensitivity analys...

  6. Multilayered Word Structure Model for Assessing Spelling of Finnish Children in Shallow Orthography

    ERIC Educational Resources Information Center

    Kulju, Pirjo; Mäkinen, Marita

    2017-01-01

    This study explores Finnish children's word-level spelling by applying a linguistically based multilayered word structure model for assessing spelling performance. The model contributes to an analytical, qualitative assessment approach for identifying children's spelling performance in order to enhance writing skills. The children (N = 105)…

  7. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models to guide decision making. To demonstrate a systematic approach of linked sensitivity analysis, calibration, and uncertainty analysis to improve confidence in complex models. Four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: Morris method (sensitivity analysis), multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical and advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
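
    A rough sketch of the Morris screening step is given below, using the SALib package with a toy surrogate in place of the System Dynamics stroke model; the parameter names, bounds and outcome function are illustrative assumptions, not the model's actual 60 parameters.

```python
# Morris elementary-effects screening sketch with SALib.
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["incidence_rate", "hypertension_ctrl", "treatment_uptake"],
    "bounds": [[0.001, 0.01], [0.2, 0.8], [0.1, 0.9]],
}

X = morris_sample(problem, N=100, num_levels=4)

def stroke_outcome(x):
    # Toy surrogate for one key model output (e.g. strokes per year).
    return 1000.0 * x[0] * (1.0 - 0.5 * x[1]) * (1.0 - 0.2 * x[2])

Y = np.apply_along_axis(stroke_outcome, 1, X)
res = morris_analyze(problem, X, Y, num_levels=4)

# mu_star ranks parameters by overall influence; influential ones would be
# passed on to calibration, the rest fixed at best-guess values.
for name, mu in zip(res["names"], res["mu_star"]):
    print(f"{name}: mu* = {mu:.2f}")
```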

  8. Merging Digital Surface Models Implementing Bayesian Approaches

    NASA Astrophysics Data System (ADS)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The input data were sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is preferable when the data obtained from the sensors are limited and many measurements would be difficult or costly to obtain; the lack of data can then be addressed by introducing a priori estimates. To infer the prior data, the roofs of the buildings are assumed to be smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimates, GNSS RTK measurements were collected in the field and used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results show that the model successfully improved the quality of the DSMs, improving characteristics such as the roof surfaces and consequently leading to better representations. In addition, the developed model has been compared with the well-established Maximum Likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
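
    Under Gaussian assumptions, the per-pixel Bayesian update at the heart of such a merge reduces to precision weighting. The sketch below illustrates this with made-up numbers; it is not the authors' implementation, and in practice the prior would come from the entropy-based roof-smoothness estimate.

```python
# Per-pixel Bayesian DSM fusion sketch: a Gaussian prior is combined with
# independent Gaussian sensor heights weighted by inverse variance.
import numpy as np

def merge_pixel(prior_h, prior_var, obs, obs_vars):
    """Posterior height from a Gaussian prior and independent Gaussian obs."""
    precisions = [1.0 / prior_var] + [1.0 / v for v in obs_vars]
    values = [prior_h] + list(obs)
    post_var = 1.0 / sum(precisions)
    post_h = post_var * sum(p * v for p, v in zip(precisions, values))
    return post_h, post_var

# e.g. prior from a locally smoothed roof estimate, obs from two DSMs
h, var = merge_pixel(prior_h=25.0, prior_var=4.0,
                     obs=[25.6, 24.8], obs_vars=[1.0, 2.25])
print(f"merged height = {h:.2f} m (sd = {var**0.5:.2f} m)")
```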

  9. Pathogen Treatment Guidance and Monitoring Approaches for On-Site Non-Potable Water Reuse

    EPA Science Inventory

    On-site non-potable water reuse is increasingly used to augment water supplies, but traditional fecal indicator approaches for defining and monitoring exposure risks are limited when applied to these decentralized options. This session emphasizes risk-based modeling to define pat...

  10. Explorations in Social Work Education

    ERIC Educational Resources Information Center

    Tie'er, Shi

    2013-01-01

    Social work education leans toward an applied approach emphasizing the practical and experiential. At present, many schools still offer social work education in the traditional academic model emphasizing textual learning. This approach does not suit the knowledge base, the student and teacher orientation, or the pedagogy of social work. To develop…

  11. ECOSYSTEM SERVICES AND BEYOND: INTEGRATION OF ECOSYSTEM SCIENCE AND MULTIMEDIA EXPOSURE MODELING FOR ENVIRONMENTAL PROTECTION

    EPA Science Inventory

    Decision-making for ecosystem protection and resource management requires integrative science and technology applied with a sufficiently comprehensive systems approach. Single media (e.g., air, soil and water) approaches that evaluate aspects of an ecosystem in a stressor-by-...

  12. A Worksheet for Ethics Instruction and Exercises in Reason.

    ERIC Educational Resources Information Center

    Bivins, Thomas H.

    1993-01-01

    Argues that teaching applied mass media ethics requires two vital components: a grounding in the relevant ethical theories, and a structured approach to analyzing the issues in case-study format. Presents a worksheet model that provides such an approach over a wide range of issues. (SR)

  13. Impacts of Learning Orientation on Product Innovation Performance

    ERIC Educational Resources Information Center

    Calisir, Fethi; Gumussoy, Cigdem Altin; Guzelsoy, Ezgi

    2013-01-01

    Purpose: The present study aims to examine the effect of learning orientation (commitment to learning, shared vision, open-mindedness) on the product innovation performance (product innovation efficacy and efficiency) of companies in Turkey. Design/methodology/approach: A structural equation-modeling approach was applied to identify the variables…

  14. Random forests, a novel approach for discrimination of fish populations using parasites as biological tags.

    PubMed

    Perdiguero-Alonso, Diana; Montero, Francisco E; Kostadinova, Aneta; Raga, Juan Antonio; Barrett, John

    2008-10-01

    Due to the complexity of host-parasite relationships, discrimination between fish populations using parasites as biological tags is difficult. This study introduces, to our knowledge for the first time, random forests (RF) as a new modelling technique in the application of parasite community data as biological markers for population assignment of fish. This novel approach is applied to a dataset with a complex structure comprising 763 parasite infracommunities in population samples of Atlantic cod, Gadus morhua, from the spawning/feeding areas in five regions of the North East Atlantic (Baltic, Celtic, Irish and North seas and Icelandic waters). The learning behaviour of RF is evaluated in comparison with two other algorithms applied to class assignment problems, linear discriminant function analysis (LDA) and artificial neural networks (ANN). The three algorithms are used to develop predictive models applying three cross-validation procedures in a series of experiments (252 models in total). The comparative approach to the RF, LDA and ANN algorithms applied to the same datasets demonstrates the competitive potential of RF for developing predictive models, since RF exhibited better prediction accuracy and outperformed LDA and ANN in the assignment of fish to their regions of sampling using parasite community data. The comparative analyses and the validation experiment with a 'blind' sample confirmed that RF models performed more effectively with a large and diverse training set and a large number of variables. The discrimination results obtained for a migratory fish species with largely overlapping parasite communities reflect the high potential of RF for developing predictive models using data that are both complex and noisy, and indicate that it is a promising tool for parasite tag studies. Our results suggest that parasite community data can be used successfully to discriminate individual cod from the five regions of the North East Atlantic studied using RF.
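
    The RF-versus-LDA comparison can be sketched with scikit-learn as below; the parasite community data is replaced by a synthetic abundance matrix with a weak regional signal, so the accuracies printed are purely illustrative.

```python
# Random forest vs. linear discriminant analysis sketch for assigning fish
# to sampling regions from (synthetic) parasite community data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_fish, n_parasite_taxa, n_regions = 763, 30, 5
region = rng.integers(0, n_regions, n_fish)
# Synthetic parasite abundances with a weak regional signal added.
X = rng.poisson(2.0, size=(n_fish, n_parasite_taxa)) + region[:, None]

for name, clf in [("RF", RandomForestClassifier(n_estimators=500, random_state=0)),
                  ("LDA", LinearDiscriminantAnalysis())]:
    acc = cross_val_score(clf, X, region, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```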

  15. Surveillance theory applied to virus detection: a case for targeted discovery

    USGS Publications Warehouse

    Bogich, Tiffany L.; Anthony, Simon J.; Nichols, James D.

    2013-01-01

    Virus detection and mathematical modeling have gone through rapid developments in the past decade. Both offer new insights into the epidemiology of infectious disease and characterization of future risk; however, modeling has not yet been applied to designing the best surveillance strategies for viral and pathogen discovery. We review recent developments and propose methods to integrate viral and pathogen discovery and mathematical modeling through optimal surveillance theory, arguing for a more targeted approach to novel virus detection guided by the principles of adaptive management and structured decision-making.

  16. Field Test of a Hybrid Finite-Difference and Analytic Element Regional Model.

    PubMed

    Abrams, D B; Haitjema, H M; Feinstein, D T; Hunt, R J

    2016-01-01

    Regional finite-difference models often have cell sizes that are too large to sufficiently model well-stream interactions. Here, a steady-state hybrid model is applied whereby the upper layer or layers of a coarse MODFLOW model are replaced by the analytic element model GFLOW, which represents surface waters and wells as line and point sinks. The two models are coupled by transferring cell-by-cell leakage obtained from the original MODFLOW model to the bottom of the GFLOW model. A real-world test of the hybrid model approach is applied on a subdomain of an existing model of the Lake Michigan Basin. The original (coarse) MODFLOW model consists of six layers, the top four of which are aggregated into GFLOW as a single layer, while the bottom two layers remain part of MODFLOW in the hybrid model. The hybrid model and a refined "benchmark" MODFLOW model simulate similar baseflows. The hybrid and benchmark models also simulate similar baseflow reductions due to nearby pumping when the well is located within the layers represented by GFLOW. However, the benchmark model requires refinement of the model grid in the local area of interest, while the hybrid approach uses a gridless top layer and is thus unaffected by grid discretization errors. The hybrid approach is well suited to facilitate cost-effective retrofitting of existing coarse grid MODFLOW models commonly used for regional studies because it leverages the strengths of both finite-difference and analytic element methods for predictions in mildly heterogeneous systems that can be simulated with steady-state conditions. © 2015, National Ground Water Association.

  17. Hierarchical mixture of experts and diagnostic modeling approach to reduce hydrologic model structural uncertainty: STRUCTURAL UNCERTAINTY DIAGNOSTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moges, Edom; Demissie, Yonas; Li, Hong-Yi

    2016-04-01

    In most water resources applications, a single model structure might be inadequate to capture the dynamic multi-scale interactions among different hydrological processes. Calibrating single models for dynamic catchments, where multiple dominant processes exist, can result in displacement of errors from structure to parameters, which in turn leads to over-correction and biased predictions. An alternative to a single model structure is to develop local expert structures that are effective in representing the dominant components of the hydrologic process and to adaptively integrate them based on an indicator variable. In this study, the Hierarchical Mixture of Experts (HME) framework is applied to integrate expert model structures representing the different components of the hydrologic process. Various signature diagnostic analyses are used to assess the presence of multiple dominant processes and the adequacy of a single model, as well as to identify the structures of the expert models. The approaches are applied to two distinct catchments, the Guadalupe River (Texas) and the French Broad River (North Carolina) from the Model Parameter Estimation Experiment (MOPEX), using different structures of the HBV model. The results show that the HME approach performs better than the single model for the Guadalupe catchment, where multiple dominant processes are evidenced by the diagnostic measures. In contrast, the diagnostics and aggregated performance measures show that the French Broad has a homogeneous catchment response, making the single model adequate to capture the response.
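
    The gating idea behind an HME can be sketched in a few lines: two expert models are blended by a logistic gate driven by an indicator variable. The experts, gate parameters and wetness indicator below are toy assumptions, not the paper's HBV structures.

```python
# Mixture-of-experts sketch: a logistic gate on soil wetness blends a fast
# and a slow runoff expert so each dominates in its own regime.
import numpy as np

def expert_fast(rain):            # e.g. saturation-excess dominated response
    return 0.8 * rain

def expert_slow(rain):            # e.g. baseflow dominated response
    return 0.2 * rain

def gate(wetness, w0=-5.0, w1=10.0):
    """Logistic gate: weight on the fast expert grows with wetness."""
    return 1.0 / (1.0 + np.exp(-(w0 + w1 * wetness)))

rain = np.array([5.0, 5.0, 5.0])
wetness = np.array([0.2, 0.5, 0.9])
g = gate(wetness)
runoff = g * expert_fast(rain) + (1 - g) * expert_slow(rain)
print(runoff)   # shifts from the slow expert toward the fast one as wetness rises
```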

  18. Assessing and reporting uncertainties in dietary exposure analysis: Mapping of uncertainties in a tiered approach.

    PubMed

    Kettler, Susanne; Kennedy, Marc; McNamara, Cronan; Oberdörfer, Regina; O'Mahony, Cian; Schnabel, Jürgen; Smith, Benjamin; Sprong, Corinne; Faludi, Roland; Tennant, David

    2015-08-01

    Uncertainty analysis is an important component of dietary exposure assessments in order to understand correctly the strength and limits of their results. Often, standard screening procedures are applied as a first step, which results in conservative estimates. If those screening procedures indicate a potential exceedance of health-based guidance values, more refined models are applied within the tiered approach. However, the sources and types of uncertainties in deterministic and probabilistic models can vary or differ. A key objective of this work has been the mapping of different sources and types of uncertainties to better understand how best to use uncertainty analysis to generate a more realistic comprehension of dietary exposure. In dietary exposure assessments, uncertainties can be introduced by knowledge gaps about the exposure scenario, the parameters and the model itself. With this mapping, general and model-independent uncertainties have been identified and described, as well as those which can be introduced and influenced by the specific model during the tiered approach. This analysis identifies general uncertainties common to point estimates (screening or deterministic methods) and probabilistic exposure assessment methods. To provide further clarity, general sources of uncertainty affecting many dietary exposure assessments should be separated from model-specific uncertainties. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    NASA Astrophysics Data System (ADS)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg

    2015-07-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
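
    The external bias description can be sketched as a deterministic model output augmented with an AR(1) bias process plus observation noise; the parameters below are illustrative assumptions, not values inferred by MCMC as in the study.

```python
# External bias description sketch: deterministic output + AR(1) bias +
# observation noise, yielding probabilistic discharge predictions.
import numpy as np

rng = np.random.default_rng(7)
T, n_real = 48, 200                          # hours ahead, realizations
model_q = 10 + 3 * np.sin(np.arange(T) / 6)  # deterministic model output [l/s]

phi, sigma_b, sigma_eps = 0.9, 0.5, 0.3      # AR(1) persistence and noise sds
bias = np.zeros((n_real, T))
for t in range(1, T):
    bias[:, t] = phi * bias[:, t - 1] + rng.normal(0, sigma_b, n_real)

pred = model_q + bias + rng.normal(0, sigma_eps, (n_real, T))
lo, hi = np.percentile(pred, [5, 95], axis=0)
print(f"90% band width at t=1h: {hi[1]-lo[1]:.2f}, at t=48h: {hi[-1]-lo[-1]:.2f}")
```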

  20. Observations and Models of Highly Intermittent Phytoplankton Distributions

    PubMed Central

    Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu

    2014-01-01

    The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow phytoplankton data to be measured at ever greater resolution, revealing high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated for comparison with the model output, which may misrepresent the reality of planktonic ecosystems, especially at the microscale level. To account for the intermittency of variables, a new modelling approach to the planktonic ecosystem, called the closure approach, is applied in this work. Using this approach for a simple nutrient-phytoplankton model, we show how consideration of the fluctuating parts of model variables can affect system dynamics. We also find a critical value of the variance of the overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea of the importance of the fluctuating parts of model variables and of when to use the closure approach. Comparison of plots of mean versus standard deviation of phytoplankton at different depths, obtained using this new approach, against real observations shows good agreement. PMID:24787740

  1. Application and flight test of linearizing transformations using measurement feedback to the nonlinear control problem

    NASA Technical Reports Server (NTRS)

    Antoniewicz, Robert F.; Duke, Eugene L.; Menon, P. K. A.

    1991-01-01

    The design of nonlinear controllers has relied on the use of detailed aerodynamic and engine models that must be associated with the control law in the flight system implementation. Many of these controllers were applied to vehicle flight path control problems and have attempted to combine both inner- and outer-loop control functions in a single controller. An approach to the nonlinear trajectory control problem is presented. This approach uses linearizing transformations with measurement feedback to eliminate the need for detailed aircraft models in outer-loop control applications. By applying this approach and separating the inner-loop and outer-loop functions, two things were achieved: (1) the need for incorporating detailed aerodynamic models in the controller is obviated; and (2) the controller is more easily incorporated into existing aircraft flight control systems. An implementation of the controller is discussed, and this controller is tested on a six degree-of-freedom F-15 simulation and in flight on an F-15 aircraft. Simulation data are presented which validate this approach over a large portion of the F-15 flight envelope. Proof of this concept is provided by flight-test data that closely match simulation results. Flight-test data are also presented.

  2. A Volume Flux Approach to Cryolava Dome Emplacement on Europa

    NASA Technical Reports Server (NTRS)

    Quick, Lynnae C.; Fagents, Sarah A.; Hurford, Terry A.; Prockter, Louise M.

    2017-01-01

    We previously modeled a subset of domes on Europa with morphologies consistent with emplacement by viscous extrusions of cryolava. These models assumed instantaneous emplacement of a fixed volume of fluid onto the surface, followed by relaxation to form domes. However, this approach only allowed for the investigation of late-stage eruptive processes far from the vent and provided little insight into how cryolavas arrived at the surface. Consideration of dome emplacement as cryolavas erupt at the surface is therefore pertinent. A volume flux approach, in which lava erupts from the vent at a constant rate, was successfully applied to the formation of steep-sided volcanic domes on Venus. These domes are believed to have formed in the same manner as candidate cryolava domes on Europa. In order to gain a more complete understanding of the potential for the emplacement of Europa domes via extrusive volcanism, we have applied this volume flux approach to the formation of putative cryovolcanic domes on Europa. Assuming, as in our previous work, that europan cryolavas are briny, aqueous solutions which may or may not contain some ice crystal fraction, we present the results of this modeling and explore theories for the formation of the low-albedo moats that surround some domes.

  3. Validation of Fatigue Modeling Predictions in Aviation Operations

    NASA Technical Reports Server (NTRS)

    Gregory, Kevin; Martinez, Siera; Flynn-Evans, Erin

    2017-01-01

    Bio-mathematical fatigue models that predict levels of alertness and performance are one potential tool for use within integrated fatigue risk management approaches. A number of models have been developed that provide predictions based on acute and chronic sleep loss, circadian desynchronization, and sleep inertia. Some are publicly available and gaining traction in settings such as commercial aviation as a means of evaluating flight crew schedules for potential fatigue-related risks. Yet most models have not been rigorously evaluated and independently validated for the operations to which they are being applied, and many users are not fully aware of the limits within which model results should be interpreted and applied.

  4. Consistency of the free-volume approach to the homogeneous deformation of metallic glasses

    NASA Astrophysics Data System (ADS)

    Blétry, Marc; Thai, Minh Thanh; Champion, Yannick; Perrière, Loïc; Ochin, Patrick

    2014-05-01

    One of the most widely used approaches to modelling the high-temperature homogeneous deformation of metallic glasses is the free-volume theory, developed by Cohen and Turnbull and extended by Spaepen. A simple elastoviscoplastic formulation has been proposed that allows one to determine the various parameters of such a model. This approach is applied here to the results obtained by de Hey et al. on a Pd-based metallic glass. In their study, de Hey et al. were able to determine some of the parameters used in the elastoviscoplastic formulation through DSC modeling coupled with mechanical tests, and the consistency of the two viewpoints is assessed.

  5. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008
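
    The marginal quantile fits that the proposed pattern-mixture model constrains can be sketched on complete data with statsmodels' QuantReg; the weight-management data below is synthetic, and the monotone-missingness machinery of the paper is not reproduced.

```python
# Longitudinal quantile regression sketch on complete (synthetic) data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
time = rng.uniform(0, 12, n)                        # months in program
weight_change = -0.4 * time + rng.standard_t(3, n)  # heavy-tailed outcome

X = sm.add_constant(time)
for q in (0.25, 0.5, 0.75):
    res = sm.QuantReg(weight_change, X).fit(q=q)
    print(f"tau={q}: slope = {res.params[1]:.3f} kg/month")
```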

  6. A predictive modeling approach to increasing the economic effectiveness of disease management programs.

    PubMed

    Bayerstadler, Andreas; Benstetter, Franz; Heumann, Christian; Winter, Fabian

    2014-09-01

    Predictive Modeling (PM) techniques are gaining importance in the worldwide health insurance business. Modern PM methods are used for customer relationship management, risk evaluation or medical management. This article illustrates a PM approach that enables the economic potential of (cost-) effective disease management programs (DMPs) to be fully exploited by optimized candidate selection as an example of successful data-driven business management. The approach is based on a Generalized Linear Model (GLM) that is easy to apply for health insurance companies. By means of a small portfolio from an emerging country, we show that our GLM approach is stable compared to more sophisticated regression techniques in spite of the difficult data environment. Additionally, we demonstrate for this example of a setting that our model can compete with the expensive solutions offered by professional PM vendors and outperforms non-predictive standard approaches for DMP selection commonly used in the market.
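
    A minimal sketch of GLM-based candidate selection follows: a logistic GLM scores the probability of a costly event and the highest-risk members are enrolled in the DMP. The covariates and data are synthetic stand-ins, not the portfolio analyzed in the paper.

```python
# Logistic GLM scoring sketch for disease management programme selection.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 2000
age = rng.normal(55, 10, n)
prior_cost = rng.gamma(2.0, 500.0, n)
p_event = 1 / (1 + np.exp(-(-6 + 0.05 * age + 0.001 * prior_cost)))
event = rng.binomial(1, p_event)

X = sm.add_constant(np.column_stack([age, prior_cost]))
glm = sm.GLM(event, X, family=sm.families.Binomial()).fit()

risk = glm.predict(X)
top = np.argsort(risk)[-100:]      # enrol the 100 highest-risk members
print(f"mean predicted risk of selected group: {risk[top].mean():.3f}")
```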

  7. An overview of quantitative approaches in Gestalt perception.

    PubMed

    Jäkel, Frank; Singh, Manish; Wichmann, Felix A; Herzog, Michael H

    2016-09-01

    Gestalt psychology is often criticized as lacking quantitative measurements and precise mathematical models. While this is true of the early Gestalt school, today there are many quantitative approaches in Gestalt perception and the special issue of Vision Research "Quantitative Approaches in Gestalt Perception" showcases the current state-of-the-art. In this article we give an overview of these current approaches. For example, ideal observer models are one of the standard quantitative tools in vision research and there is a clear trend to try and apply this tool to Gestalt perception and thereby integrate Gestalt perception into mainstream vision research. More generally, Bayesian models, long popular in other areas of vision research, are increasingly being employed to model perceptual grouping as well. Thus, although experimental and theoretical approaches to Gestalt perception remain quite diverse, we are hopeful that these quantitative trends will pave the way for a unified theory. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Community Multiscale Air Quality Model

    EPA Science Inventory

    The U.S. EPA developed the Community Multiscale Air Quality (CMAQ) system to apply a “one atmosphere” multiscale and multi-pollutant modeling approach based mainly on the “first principles” description of the atmosphere. The multiscale capability is supported by the governing di...

  9. HIGH-RESOLUTION DATASET OF URBAN CANOPY PARAMETERS FOR HOUSTON, TEXAS

    EPA Science Inventory

    Urban dispersion and air quality simulation models applied at various horizontal scales require different levels of fidelity for specifying the characteristics of the underlying surfaces. As the modeling scales approach the neighborhood level (~1 km horizontal grid spacing), the...

  10. Unsupervised chunking based on graph propagation from bilingual corpus.

    PubMed

    Zhu, Ling; Wong, Derek F; Chao, Lidia S

    2014-01-01

    This paper presents a novel approach to an unsupervised shallow parsing model trained on the unannotated Chinese text of a parallel Chinese-English corpus. In this approach, no annotated information on the Chinese side is used. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with the use of the projected labels as features in the unsupervised model, contributes to better performance. Experimental comparisons with state-of-the-art algorithms show that the proposed approach achieves markedly higher accuracy in terms of F-score.

  11. A spatial panel ordered-response model with application to the analysis of urban land-use development intensity patterns

    NASA Astrophysics Data System (ADS)

    Ferdous, Nazneen; Bhat, Chandra R.

    2013-01-01

    This paper proposes and estimates a spatial panel ordered-response probit model with temporal autoregressive error terms to analyze changes in urban land development intensity levels over time. Such a model structure maintains a close linkage between the land owner's decision (unobserved to the analyst) and the land development intensity level (observed by the analyst) and accommodates spatial interactions between land owners that lead to spatial spillover effects. In addition, the model structure incorporates spatial heterogeneity as well as spatial heteroscedasticity. The resulting model is estimated using a composite marginal likelihood (CML) approach that does not require any simulation machinery and that can be applied to data sets of any size. A simulation exercise indicates that the CML approach recovers the model parameters very well, even in the presence of high spatial and temporal dependence. In addition, the simulation results demonstrate that ignoring spatial dependency and spatial heterogeneity when both are actually present will lead to bias in parameter estimation. A demonstration exercise applies the proposed model to examine urban land development intensity levels using parcel-level data from Austin, Texas.

  12. Effort to Accelerate MBSE Adoption and Usage at JSC

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Izygon, Michel; Okron, Shira; Garner, Larry; Wagner, Howard

    2016-01-01

    This paper describes the authors' experience in adopting Model Based System Engineering (MBSE) at the NASA/Johnson Space Center (JSC). Since 2009, NASA/JSC has been applying MBSE, using the Systems Modeling Language (SysML), to a number of advanced projects. Models integrate views of the system from multiple perspectives, capturing the system design information for multiple stakeholders. This method has allowed engineers to better control changes, improve traceability from requirements to design, and manage the numerous interactions between components. As a project progresses, the models become the official source of information and are used by multiple stakeholders. Three major types of challenges that hamper the adoption of MBSE technology are described. These challenges are addressed by a multipronged approach that includes educating the main stakeholders, implementing an organizational infrastructure that supports the adoption effort, defining a set of modeling guidelines to help engineers in their modeling effort, providing a toolset that supports the generation of valuable products, and providing a library of reusable models. JSC project case studies are presented to illustrate how the proposed approach has been successfully applied.

  13. Multi-parametric variational data assimilation for hydrological forecasting

    NASA Astrophysics Data System (ADS)

    Alvarado-Montero, R.; Schwanenberg, D.; Krahe, P.; Helmke, P.; Klein, B.

    2017-12-01

    Ensemble forecasting is increasingly applied in flow forecasting systems to provide users with a better understanding of forecast uncertainty and, consequently, to enable better-informed decisions. A common practice in probabilistic streamflow forecasting is to force a deterministic hydrological model with an ensemble of numerical weather predictions. This approach aims at representing meteorological uncertainty but neglects the uncertainty of the hydrological model as well as of its initial conditions. Complementary approaches use probabilistic data assimilation techniques to obtain a variety of initial states or represent model uncertainty by model pools instead of single deterministic models. This paper introduces a novel approach that extends a variational data assimilation based on Moving Horizon Estimation to enable the assimilation of observations into multi-parametric model pools. It results in a probabilistic estimate of initial model states that takes parametric model uncertainty into account in the data assimilation. The assimilation technique is applied to the uppermost area of the River Main in Germany. We use different parametric pools, each with five parameter sets, to assimilate streamflow data as well as remotely sensed data from the H-SAF project. We assess the impact of the assimilation on the lead-time performance of perfect forecasts (i.e. observed data as forcing variables) as well as of deterministic and probabilistic forecasts from ECMWF. The multi-parametric assimilation shows an improvement of up to 23% in CRPS performance and approximately 20% in Brier Skill Scores with respect to the deterministic approach. It also improves the skill of the forecast in terms of the rank histogram and produces a narrower ensemble spread.
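
    The moving-horizon flavour of the assimilation can be sketched with a toy linear reservoir: over a window of observed flows, the initial state is adjusted to balance fit against deviation from a background estimate. The model, penalty weight and data are illustrative assumptions; the multi-parametric pooling of the paper is not reproduced.

```python
# Moving-horizon estimation sketch: adjust the initial storage of a linear
# reservoir to fit observed flows, regularized toward a background state.
import numpy as np
from scipy.optimize import least_squares

k, dt = 0.1, 1.0                        # reservoir recession constant, step

def simulate(s0, inflow):
    s, out = s0, []
    for q_in in inflow:
        s = s + dt * (q_in - k * s)     # linear reservoir water balance
        out.append(k * s)
    return np.array(out)

rng = np.random.default_rng(5)
inflow = rng.uniform(0.5, 2.0, 24)
q_obs = simulate(12.0, inflow) + rng.normal(0, 0.05, 24)   # synthetic truth
s_background = 9.0                                          # prior state guess

def residuals(x):
    fit = simulate(x[0], inflow) - q_obs
    reg = 0.3 * (x[0] - s_background)   # penalty toward the background state
    return np.append(fit, reg)

sol = least_squares(residuals, x0=[s_background])
print(f"assimilated initial storage: {sol.x[0]:.2f} (background was 9.0)")
```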

  14. Modal Substructuring of Geometrically Nonlinear Finite-Element Models

    DOE PAGES

    Kuether, Robert J.; Allen, Matthew S.; Hollkamp, Joseph J.

    2015-12-21

    The efficiency of a modal substructuring method depends on the component modes used to reduce each subcomponent model. Methods such as Craig–Bampton have been used extensively to reduce linear finite-element models with thousands or even millions of degrees of freedom down by orders of magnitude while maintaining acceptable accuracy. A novel reduction method is proposed here for geometrically nonlinear finite-element models using the fixed-interface and constraint modes of the linearized system to reduce each subcomponent model. The geometric nonlinearity requires additional cubic and quadratic polynomial terms in the modal equations, and the nonlinear stiffness coefficients are determined by applying a series of static loads and using the finite-element code to compute the response. The geometrically nonlinear, reduced modal equations for each subcomponent are then coupled by satisfying compatibility and force equilibrium. This modal substructuring approach is an extension of the Craig–Bampton method and is readily applied to geometrically nonlinear models built directly within commercial finite-element packages. The efficiency of this new approach is demonstrated on two example problems: one that couples two geometrically nonlinear beams at a shared rotational degree of freedom, and another that couples an axial spring element to the axial degree of freedom of a geometrically nonlinear beam. The nonlinear normal modes of the assembled models are compared with those of a truth model to assess the accuracy of the novel modal substructuring approach.

  15. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models accounting for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without applying the negative logarithm. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, which transforms the joint MAP estimation into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and to the materials involved, provided accurate spectrum information about the source-detector system is available. When dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. The proposed approach is a statistical reconstruction approach based on a nonlinear forward model accounting for the full beam polychromaticity and applied directly to the projections without applying the negative logarithm. Compared to approaches based on linear forward models and to the BHA correction approaches, it has advantages in noise robustness and reconstruction accuracy.

  16. 2D hybrid analysis: Approach for building three-dimensional atomic model by electron microscopy image matching.

    PubMed

    Matsumoto, Atsushi; Miyazaki, Naoyuki; Takagi, Junichi; Iwasaki, Kenji

    2017-03-23

    In this study, we develop an approach termed "2D hybrid analysis" for building atomic models by image matching from electron microscopy (EM) images of biological molecules. The key advantage is that it is applicable to flexible molecules, which are difficult to analyze by 3DEM approaches. In the proposed approach, many atomic models with different conformations are first built by computer simulation. Simulated EM images are then generated from each atomic model and compared with the experimental EM image. Two kinds of models are used as simulated EM images: the negative stain model and the simple projection model. Although the former is more realistic, the latter is adopted to perform faster computations. The use of the negative stain model enables decomposition of the averaged EM images into multiple projection images, each of which originated from a different conformation or orientation. We apply this approach to EM images of integrin to obtain the distribution of conformations, from which the pathway of the conformational change of the protein is deduced.

  17. From an animal model to human patients: An example of a translational study on obsessive compulsive disorder (OCD).

    PubMed

    Eilam, David

    2017-05-01

    The application of similar analyses enables a direct projection from translational research in animals to human studies. Following is an example of how the methodology of a specific animal model of obsessive-compulsive disorder (OCD) was applied to study human patients. Specifically, the quinpirole rat model for OCD was based on analyzing the trajectories of travel among different locales, and scoring the set of acts performed at each locale. Applying this analytic approach in human patients unveiled various aspects of OCD, such as the repetition and addition of acts, incompleteness, and the link between behavior and specific locations. It is also illustrated how the same analytical approach could be applicable to studying other mental disorders. Finally, it is suggested that the development of OCD could be explained by the four-phase sequence of Repetition, Addition, Condensation, and Elimination, as outlined in the study of ontogeny and phylogeny and applied to normal development of behavior. In OCD, this sequence is curtailed, resulting in the abundant repetition and addition of acts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Understanding immunology via engineering design: the role of mathematical prototyping.

    PubMed

    Klinke, David J; Wang, Qing

    2012-01-01

    A major challenge in immunology is how to translate data into knowledge given the inherent complexity and dynamics of human physiology. Both the physiology and engineering communities have rich histories in applying computational approaches to translate data obtained from complex systems into knowledge of system behavior. However, there are some differences in how disciplines approach problems. By referring to mathematical models as mathematical prototypes, we aim to highlight aspects related to the process (i.e., prototyping) rather than the product (i.e., the model). The objective of this paper is to review how two related engineering concepts, specifically prototyping and "fitness for use," can be applied to overcome the pressing challenge in translating data into improved knowledge of basic immunology that can be used to improve therapies for disease. These concepts are illustrated using two immunology-related examples. The prototypes presented focus on the beta cell mass at the onset of type 1 diabetes and the dynamics of dendritic cells in the lung. This paper is intended to illustrate some of the nuances associated with applying mathematical modeling to improve understanding of the dynamics of disease progression in humans.

  19. Simulation-based model checking approach to cell fate specification during Caenorhabditis elegans vulval development by hybrid functional Petri net with extension.

    PubMed

    Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru

    2009-04-27

    Model checking approaches were first applied to biological pathway validation around 2003. Recently, Fisher et al. demonstrated the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the model checking approach. A novel method of modeling and simulating biological systems with model checking is proposed, based on hybrid functional Petri net with extension (HFPNe) as a framework dealing with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components using HFPNe. Second, we apply two major biological fate determination rules - Rule I and Rule II - to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully performed by using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Hybrid lineages are hard to capture with a discrete model because they occur when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to its higher coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). Further insights are also suggested. The quantitative simulation-based model checking approach is a useful means of providing valuable biological insights and a better understanding of biological systems and observation data that may be hard to capture with the qualitative approach.

  1. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.
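
    Within-cluster resampling itself is easy to sketch: repeatedly draw one observation per cluster, fit an ordinary GLM to each resampled dataset, and average the estimates. The toxicity-style data below is synthetic and the GLM is a plain logistic regression, so the numbers are purely illustrative.

```python
# Within-cluster resampling sketch for clustered binary outcomes.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_clusters = 100
cluster_sizes = rng.integers(2, 12, n_clusters)   # litter sizes (possibly informative)
dose = rng.uniform(0, 1, n_clusters)              # cluster-specific covariate

rows = []
for c, (m, d) in enumerate(zip(cluster_sizes, dose)):
    p = 1 / (1 + np.exp(-(-1.0 + 1.5 * d)))       # true dose effect = 1.5
    rows += [(c, d, yi) for yi in rng.binomial(1, p, m)]
cluster, x, y = map(np.array, zip(*rows))

estimates = []
for _ in range(500):
    # One randomly chosen observation per cluster, then an ordinary GLM fit.
    idx = [rng.choice(np.where(cluster == c)[0]) for c in range(n_clusters)]
    X = sm.add_constant(x[idx])
    fit = sm.GLM(y[idx], X, family=sm.families.Binomial()).fit()
    estimates.append(fit.params[1])
print(f"WCR dose effect estimate: {np.mean(estimates):.3f}")
```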

  2. Continuity-based model interfacing for plant-wide simulation: a general approach.

    PubMed

    Volcke, Eveline I P; van Loosdrecht, Mark C M; Vanrolleghem, Peter A

    2006-08-01

    In plant-wide simulation studies of wastewater treatment facilities, often existing models from different origin need to be coupled. However, as these submodels are likely to contain different state variables, their coupling is not straightforward. The continuity-based interfacing method (CBIM) provides a general framework to construct model interfaces for models of wastewater systems, taking into account conservation principles. In this contribution, the CBIM approach is applied to study the effect of sludge digestion reject water treatment with a SHARON-Anammox process on a plant-wide scale. Separate models were available for the SHARON process and for the Anammox process. The Benchmark simulation model no. 2 (BSM2) is used to simulate the behaviour of the complete WWTP including sludge digestion. The CBIM approach is followed to develop three different model interfaces. At the same time, the generally applicable CBIM approach was further refined and particular issues when coupling models in which pH is considered as a state variable, are pointed out.

  3. Reproducibility of objectively measured physical activity and sedentary time over two seasons in children; Comparing a day-by-day and a week-by-week approach

    PubMed Central

    Andersen, Lars Bo; Skrede, Turid; Ekelund, Ulf; Anderssen, Sigmund Alfred; Resaland, Geir Kåre

    2017-01-01

    Introduction Knowledge of the reproducibility of accelerometer-determined physical activity (PA) and sedentary time (SED) estimates is a prerequisite for conducting high-quality epidemiological studies. Yet, estimates of reproducibility might differ depending on the approach used to analyze the data. The aim of the present study was to determine the reproducibility of objectively measured PA and SED in children by directly comparing a day-by-day and a week-by-week approach applied to data collected over two weeks during two different seasons 3–4 months apart. Methods 676 11-year-old children from the Active Smarter Kids study conducted in Sogn og Fjordane county, Norway, performed 7 days of accelerometer monitoring (ActiGraph GT3X+) during January–February and April–May 2015. Reproducibility was calculated using a day-by-day and a week-by-week approach applying mixed effect modelling and the Spearman-Brown prophecy formula, and reported using intra-class correlation (ICC), Bland-Altman plots and 95% limits of agreement (LoA). Results Applying a week-by-week approach, no variables provided ICC estimates ≥ 0.70 for one week of measurement in any model (ICC = 0.29–0.66 not controlling for season; ICC = 0.49–0.67 when controlling for season). LoA for these models approximated a factor of 1.3–1.7 of the sample PA level standard deviations. Compared to the week-by-week approach, the day-by-day approach resulted in overly optimistic reliability estimates (ICC = 0.62–0.77 not controlling for season; ICC = 0.64–0.77 when controlling for season). Conclusions Reliability is lower when analyzed over different seasons and with a week-by-week approach than when a day-by-day approach and the Spearman-Brown prophecy formula are used to estimate reliability over a short monitoring period. We suggest that the day-by-day approach and the Spearman-Brown prophecy formula be used with caution to determine reliability. Trial Registration The study was registered in Clinicaltrials.gov on 7 April 2014 with identification number NCT02132494. PMID:29216318
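
    The Spearman-Brown step used above is a one-line formula; the sketch below projects a single-day reliability to a k-day protocol and inverts the formula to find the number of days needed for ICC ≥ 0.70. The single-day ICC used is an illustrative value, not one of the study's estimates.

```python
# Spearman-Brown prophecy formula: project and invert reliability.
import math

def spearman_brown(r1, k):
    """Reliability of the mean of k parallel measurements."""
    return k * r1 / (1 + (k - 1) * r1)

def days_needed(r1, target=0.70):
    """Invert Spearman-Brown: smallest k reaching the target reliability."""
    return math.ceil(target * (1 - r1) / (r1 * (1 - target)))

r_day = 0.45   # illustrative single-day ICC
print(f"7-day reliability: {spearman_brown(r_day, 7):.2f}")
print(f"days needed for ICC >= 0.70: {days_needed(r_day)}")
```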

  4. A general method for radio spectrum efficiency defining

    NASA Astrophysics Data System (ADS)

    Ramadanovic, Ljubomir M.

    1986-08-01

    A general method for defining radio spectrum efficiency is proposed. Although simple, it can be applied to various radio services. The concept of spectral elements, as information carriers, is introduced to enable the organization of larger spectral spaces - radio network models - characteristic of a particular radio network. The method is applied to some radio network models, concerning cellular radio telephone systems and digital radio relay systems, to verify its capability as a unified approach. All the radio services discussed operate continuously.

  5. Testing the Model: A Phase 1/11 Randomized Double Blind Placebo Control Trial of Targeted Therapeutics: Liposomal Glutathione and Curcumin

    DTIC Science & Technology

    2016-10-01

    Can non-specific cellular immunity protect HIV-infected persons with very low CD4 counts? Presented at Conference on Integrating Psychology and...Under Review. 50. Nierenberg B, Cooper S, Feuer SJ, Broderick G. Applying Network Medicine to Chronic Illness: A Model for Integrating Psychology ...function in these subjects as compared to GW era sedentary healthy controls. We applied an integrative systems-based approach rooted in computational

  6. Testing the Model: A Phase 1/11 Randomized Double Blind Placebo Control Trial of Targeted Therapeutics: Liposomal Glutathione and Curcumin

    DTIC Science & Technology

    2017-10-01

    HIV-infected persons with very low CD4 counts? Presented at Conference on Integrating Psychology and Medicine, Waiheke Island, Auckland, NZ, 10-12th...SJ, Broderick G. Applying Network Medicine to Chronic Illness: A Model for Integrating Psychology into Routine Care. Amer Psych, 2015. Under review...function in these subjects as compared to GW era sedentary healthy controls. We applied an integrative systems-based approach rooted in

  7. An engineering design approach to systems biology.

    PubMed

    Janes, Kevin A; Chandran, Preethi L; Ford, Roseanne M; Lazzara, Matthew J; Papin, Jason A; Peirce, Shayn M; Saucerman, Jeffrey J; Lauffenburger, Douglas A

    2017-07-17

    Measuring and modeling the integrated behavior of biomolecular-cellular networks is central to systems biology. Over several decades, systems biology has been shaped by quantitative biologists, physicists, mathematicians, and engineers in different ways. However, the basic and applied versions of systems biology are not typically distinguished, which blurs the separate aspirations of the field and its potential for real-world impact. Here, we articulate an engineering approach to systems biology, which applies educational philosophy, engineering design, and predictive models to solve contemporary problems in an age of biomedical Big Data. A concerted effort to train systems bioengineers will provide a versatile workforce capable of tackling the diverse challenges faced by the biotechnological and pharmaceutical sectors in a modern, information-dense economy.

  8. Combining molecular dynamics and an electrodiffusion model to calculate ion channel conductance

    NASA Astrophysics Data System (ADS)

    Wilson, Michael A.; Nguyen, Thuy Hien; Pohorille, Andrew

    2014-12-01

    Establishing the relation between the structures and functions of protein ion channels, which are protein assemblies that facilitate transmembrane ion transport through water-filled pores, is at the forefront of biological and medical sciences. A reliable way to determine whether our understanding of this relation is satisfactory is to reproduce the measured ionic conductance over a broad range of applied voltages. This can be done in molecular dynamics simulations by applying an external electric field to the system and counting the number of ions that traverse the channel per unit time. Since this approach is computationally very expensive, we develop a markedly more efficient alternative in which molecular dynamics is combined with an electrodiffusion equation. This alternative approach applies if steady-state ion transport through channels can be described with sufficient accuracy by the one-dimensional diffusion equation in the potential given by the free energy profile and applied voltage. The theory refers only to line densities of ions in the channel and, therefore, avoids ambiguities related to determining the surface area of the channel near its endpoints or other procedures connecting the line and bulk ion densities. We apply the theory to a simple model system based on the trichotoxin channel. We test the assumptions of the electrodiffusion equation, and determine the precision and consistency of the calculated conductance. We demonstrate that it is possible to calculate the current/voltage dependence and accurately reconstruct the underlying (equilibrium) free energy profile, all from molecular dynamics simulations at a single voltage. The approach developed here applies to other channels that satisfy the conditions of the electrodiffusion equation.
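
    The steady-state electrodiffusion calculation can be sketched directly: given a free energy profile, a diffusivity and an applied voltage, the one-dimensional steady-state (Smoluchowski) flux follows from a single quadrature. The toy barrier, pore cross-section and concentrations below are illustrative assumptions, not the trichotoxin values.

```python
# 1D electrodiffusion sketch: steady-state flux through a channel with free
# energy profile W(x) under an applied voltage, converted to a current.
import numpy as np

kT = 4.11e-21          # thermal energy at ~298 K [J]
qe = 1.602e-19         # elementary charge, monovalent cation [C]
L = 3e-9               # channel length [m]
D = 1.0e-9             # ion diffusivity in the pore [m^2/s]
c_L = c_R = 6.022e26   # 1 M bulk concentration on both sides [ions/m^3]
area = 2e-19           # assumed effective pore cross-section [m^2]

x = np.linspace(0.0, L, 1000)
W = 2.0 * kT * np.sin(np.pi * x / L) ** 2    # toy free energy barrier [J]

def current(V):
    """Single-channel current from the steady-state 1D Smoluchowski flux."""
    U = W - qe * V * (x / L)   # PMF plus linear voltage drop (energy falls down-field)
    beta = 1.0 / kT
    denom = np.trapz(np.exp(beta * U) / D, x)
    flux = (c_L * np.exp(beta * U[0]) - c_R * np.exp(beta * U[-1])) / denom
    return qe * flux * area    # [A]

for V in (0.05, 0.10, 0.20):   # applied voltages [V]
    print(f"V = {V*1e3:.0f} mV -> I = {current(V)*1e12:.2f} pA")
```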

  9. Valuing national effects of digital health investments: an applied method.

    PubMed

    Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad

    2015-01-01

    This paper describes an approach which has been applied to value national outcomes of investments by federal, provincial and territorial governments, clinicians and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.

  10. Theoretical modeling of electroosmotic flow in soft microchannels: A variational approach applied to the rectangular geometry

    NASA Astrophysics Data System (ADS)

    Sadeghi, Arman

    2018-03-01

    Modeling of fluid flow in polyelectrolyte layer (PEL)-grafted microchannels is challenging due to their two-layer nature. Hence, the pertinent studies are limited only to circular and slit geometries for which matching the solutions for inside and outside the PEL is simple. In this paper, a simple variational-based approach is presented for the modeling of fully developed electroosmotic flow in PEL-grafted microchannels by which the whole fluidic area is considered as a single porous medium of variable properties. The model is capable of being applied to microchannels of a complex cross-sectional area. As an application of the method, it is applied to a rectangular microchannel of uniform PEL properties. It is shown that modeling a rectangular channel as a slit may lead to considerable overestimation of the mean velocity especially when both the PEL and electric double layer (EDL) are thick. It is also demonstrated that the mean velocity is an increasing function of the fixed charge density and PEL thickness and a decreasing function of the EDL thickness and PEL friction coefficient. The influence of the PEL thickness on the mean velocity, however, vanishes when both the PEL thickness and friction coefficient are sufficiently high.

  11. On Interactive Teaching Model of Translation Course Based on Wechat

    ERIC Educational Resources Information Center

    Lin, Wang

    2017-01-01

    Constructivism is a theory related to knowledge and learning, focusing on learners' subjective initiative, based on which the interactive approach has been proved to play a crucial role in language learning. Accordingly, the interactive approach can also be applied to translation teaching since translation itself is a bilingual transformational…

  12. The Diploma in Science

    ERIC Educational Resources Information Center

    Lawlor, Hugh

    2010-01-01

    At the heart of the vision for the Diploma in Science is a multidisciplinary approach to learning by tackling scientific challenges and questions in applied work-related contexts. This, together with the innovative delivery model offered by a consortia approach, will bridge a significant gap in the provision of science and mathematics education.…

  13. On Direction of Dependence in Latent Variable Contexts

    ERIC Educational Resources Information Center

    von Eye, Alexander; Wiedermann, Wolfgang

    2014-01-01

    Approaches to determining direction of dependence in nonexperimental data are based on the relation between higher-than-second-order moments on one side and correlation and regression models on the other. These approaches have experienced rapid development and are being applied in contexts such as research on partner violence, attention deficit…

  14. IMPROVING PARTICULATE MATTER SOURCE APPORTIONMENT FOR HEALTH STUDIES: A TRAINED RECEPTOR MODELING APPROACH WITH SENSITIVITY, UNCERTAINTY AND SPATIAL ANALYSES

    EPA Science Inventory

    An approach for conducting PM source apportionment will be developed, tested, and applied that directly addresses limitations in current SA methods, in particular variability, biases, and intensive resource requirements. Uncertainties in SA results and sensitivities to SA inpu...

  15. Microdamage healing in asphalt and asphalt concrete, Volume 4 : a viscoelastic continuum damage fatigue model of asphalt concrete with microdamage healing

    DOT National Transportation Integrated Search

    2001-06-01

    A mechanistic approach to fatigue characterization of asphalt-aggregate mixtures is presented in this volume. This approach is founded on a uniaxial viscoelastic correspondence principle, which is applied in order to evaluate damage growth and healing in cy...

  16. AMULET: A MUlti-cLuE Approach to Image Forensics

    DTIC Science & Technology

    2014-12-31

    celebrities have been substituted in the other two pictures. 3.2.5 Choice of reliability properties Let us now apply the BBA mapping approach proposed in...Jiang, and L. Ma, “Ds evidence theory based digital image trustworthiness evaluation model,” in MINES 2009, International Conference on Multimedia

  17. Cognitive Modeling of Video Game Player User Experience

    NASA Technical Reports Server (NTRS)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for, and how game researchers could benefit by adopting these methods. We also provide details of a single model - based on decision field theory - that has been successfully applied to data sets from memory, perception, and decision making experiments, and has recently found application in real-world scenarios. We examine possibilities for applying this model to game-play data.

  18. Using a multinomial tree model for detecting mixtures in perceptual detection

    PubMed Central

    Chechile, Richard A.

    2014-01-01

    In the area of memory research there have been two rival approaches for memory measurement—signal detection theory (SDT) and multinomial processing trees (MPT). Both approaches provide measures for the quality of the memory representation, and both approaches provide for corrections for response bias. In recent years there has been a strong case advanced for the MPT approach because of the finding of stochastic mixtures on both target-present and target-absent tests. In this paper a case is made that perceptual detection, like memory recognition, involves a mixture of processes that are readily represented as a MPT model. The Chechile (2004) 6P memory measurement model is modified in order to apply to the case of perceptual detection. This new MPT model is called the Perceptual Detection (PD) model. The properties of the PD model are developed, and the model is applied to some existing data of a radiologist examining CT scans. The PD model brings out novel features that were absent from a standard SDT analysis. Also the topic of optimal parameter estimation on an individual-observer basis is explored with Monte Carlo simulations. These simulations reveal that the mean of the Bayesian posterior distribution is a more accurate estimator than the corresponding maximum likelihood estimator (MLE). Monte Carlo simulations also indicate that model estimates based on only the data from an individual observer can be improved upon (in the sense of being more accurate) by an adjustment that takes into account the parameter estimate based on the data pooled across all the observers. The adjustment of the estimate for an individual is discussed as an analogous statistical effect to the improvement over the individual MLE demonstrated by the James–Stein shrinkage estimator in the case of the multiple-group normal model. PMID:25018741
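
    The pooled-data adjustment discussed above is analogous to James-Stein shrinkage, which can be written in a few lines. A minimal sketch assuming a known sampling variance; the per-observer estimates below are invented for illustration:

    import numpy as np

    def james_stein(estimates, sigma2):
        """Shrink per-observer estimates toward the pooled mean (positive-part
        James-Stein); sigma2 is the assumed sampling variance of each estimate."""
        k = len(estimates)
        pooled = estimates.mean()
        ss = np.sum((estimates - pooled) ** 2)
        shrink = max(0.0, 1.0 - (k - 3) * sigma2 / ss)
        return pooled + shrink * (estimates - pooled)

    print(james_stein(np.array([0.62, 0.71, 0.55, 0.80, 0.66]), sigma2=0.01))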

  19. Neural network modeling of nonlinear systems based on Volterra series extension of a linear model

    NASA Technical Reports Server (NTRS)

    Soloway, Donald I.; Bialasiewicz, Jan T.

    1992-01-01

    A Volterra series approach was applied to the identification of nonlinear systems which are described by a neural network model. A procedure is outlined by which a mathematical model can be developed from experimental data obtained from the network structure. Applications of the results to the control of robotic systems are discussed.
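
    A discrete second-order Volterra expansion can be fit by ordinary least squares once the lagged and pairwise-product terms are laid out as a design matrix. The sketch below uses a synthetic system and an assumed memory depth; it illustrates the series itself, not the paper's neural network procedure:

    import numpy as np

    def volterra_features(x, memory):
        """Design matrix for a 2nd-order Volterra series with the given memory:
        a constant, the lagged inputs, and their upper-triangular products."""
        rows = []
        for t in range(memory, len(x)):
            lags = x[t - memory:t + 1][::-1]
            quad = np.outer(lags, lags)[np.triu_indices(memory + 1)]
            rows.append(np.concatenate(([1.0], lags, quad)))
        return np.array(rows)

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 0.5 * x + 0.3 * np.roll(x, 1) + 0.2 * x * np.roll(x, 1)  # toy nonlinear system
    M = 3
    kernels, *_ = np.linalg.lstsq(volterra_features(x, M), y[M:], rcond=None)
    print(kernels[:1 + M + 1])  # h0 and the first-order kernel h1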

  20. Building Multiclass Classifiers for Remote Homology Detection and Fold Recognition

    DTIC Science & Technology

    2006-04-05

    classes. In this study we evaluate the effectiveness of one of these formulations that was developed by Crammer and Singer [9], which leads to...significantly more complex model can be learned by directly applying the Crammer -Singer multiclass formulation on the outputs of the binary classifiers...will refer to this as the Crammer -Singer (CS) model. Comparing the scaling approach to the Crammer -Singer approach we can see that the Crammer -Singer

  1. Increasing the Career Choice Readiness of Young Adolescents: An Evaluation Study

    ERIC Educational Resources Information Center

    Hirschi, Andreas; Lage, Damian

    2008-01-01

    A career workshop that applies models of the Cognitive Information Processing Approach (Sampson, Reardon, Peterson, & Lenz, 2004) and incorporates critical ingredients (Brown and Ryan Krane, 2000) to promote the career choice readiness of young adolescents was developed and evaluated with 334 Swiss students in seventh grade applying a Solomon…

  2. Ontological Relations and the Capability Maturity Model Applied in Academia

    ERIC Educational Resources Information Center

    de Oliveira, Jerônimo Moreira; Campoy, Laura Gómez; Vilarino, Lilian

    2015-01-01

    This work presents a new approach to the discovery, identification and connection of ontological elements within the domain of characterization in learning organizations. In particular, the study can be applied to contexts where organizations require planning, logic, balance, and cognition in knowledge creation scenarios, which is the case for the…

  3. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    NASA Astrophysics Data System (ADS)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Discrete wavelet transform was applied to decompose ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multivariate inputs. Forecasting accuracy of peak values and at longer lead times was significantly improved.
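
    A minimal sketch of this wavelet preprocessing step, assuming the PyWavelets package; the wavelet family, decomposition level, and synthetic flow series are illustrative choices, not the study's:

    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    flow = np.cumsum(rng.normal(size=512)) + 5 * np.sin(np.arange(512) / 20.0)

    # Split the series into one approximation and three detail components,
    # each reconstructed at the original length, to serve as model inputs.
    coeffs = pywt.wavedec(flow, 'db4', level=3)
    components = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        components.append(pywt.waverec(keep, 'db4')[:len(flow)])

    inputs = np.column_stack(components)  # one column per sub-series
    print(inputs.shape)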

  4. Applying Coaching Strategies to Support Youth- and Family-Focused Extension Programming

    ERIC Educational Resources Information Center

    Olson, Jonathan R.; Hawkey, Kyle R.; Smith, Burgess; Perkins, Daniel F.; Borden, Lynne M.

    2016-01-01

    In this article, we describe how a peer-coaching model has been applied to support community-based Extension programming through the Children, Youth, and Families at Risk (CYFAR) initiative. We describe the general approaches to coaching that have been used to help with CYFAR program implementation, evaluation, and sustainability efforts; we…

  5. Development and Modification of a Response Class via Positive and Negative Reinforcement: A Translational Approach

    ERIC Educational Resources Information Center

    Mendres, Amber E.; Borrero, John C.

    2010-01-01

    When responses function to produce the same reinforcer, a response class exists. Researchers have examined response classes in applied settings; however, the challenges associated with conducting applied research on response class development have recently necessitated the development of an analogue response class model. To date, little research…

  6. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889
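
    The non-intrusive version of this idea can be sketched briefly: sample one Gaussian parameter, run the deterministic model, and regress the outputs on probabilists' Hermite polynomials, whose coefficients then give the output mean and variance. The toy logistic model and all values below are assumptions standing in for the paper's epidemic models:

    import numpy as np
    from math import factorial
    from numpy.polynomial import hermite_e as He

    def final_size(beta, y0=0.1, t_end=10.0, steps=1000):
        """Toy epidemic dy/dt = beta*y*(1-y), integrated by explicit Euler."""
        y, dt = y0, t_end / steps
        for _ in range(steps):
            y += dt * beta * y * (1.0 - y)
        return y

    mu, sigma, order = 0.5, 0.1, 4          # beta = mu + sigma*xi, xi ~ N(0,1)
    rng = np.random.default_rng(2)
    xi = rng.normal(size=200)
    outputs = np.array([final_size(mu + sigma * z) for z in xi])
    Phi = np.column_stack([He.hermeval(xi, [0] * k + [1]) for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(Phi, outputs, rcond=None)

    mean = coef[0]                          # using E[He_j He_k] = k! delta_jk
    variance = sum(factorial(k) * coef[k] ** 2 for k in range(1, order + 1))
    print(mean, variance)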

  7. Application of fuzzy AHP method to IOCG prospectivity mapping: A case study in Taherabad prospecting area, eastern Iran

    NASA Astrophysics Data System (ADS)

    Najafi, Ali; Karimpour, Mohammad Hassan; Ghaderi, Majid

    2014-12-01

    Using the fuzzy analytical hierarchy process (AHP) technique, we propose a method for mineral prospectivity mapping (MPM), which is commonly used for exploration of mineral deposits. The fuzzy AHP is a popular technique which has been applied to multi-criteria decision-making (MCDM) problems. In this paper we used fuzzy AHP and a geospatial information system (GIS) to generate a prospectivity model for Iron Oxide Copper-Gold (IOCG) mineralization on the basis of its conceptual model and geo-evidence layers derived from geological, geochemical, and geophysical data in the Taherabad area, eastern Iran. The fuzzy AHP was used to determine the weight belonging to each criterion. The knowledge of three geoscientists experienced in exploration of IOCG-type mineralization was applied to assign weights to the evidence layers in the fuzzy AHP MPM approach. After assigning normalized weights to all evidential layers, a fuzzy operator was applied to integrate the weighted evidence layers. Finally, to evaluate the ability of the applied approach to delineate reliable target areas, locations of known mineral deposits in the study area were used. The results demonstrate acceptable outcomes for IOCG exploration.
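
    The weighting-and-overlay core of this workflow can be sketched compactly. The sketch below uses crisp AHP (principal-eigenvector weights from a pairwise comparison matrix) followed by a weighted linear overlay; the paper's method additionally fuzzifies the expert judgements, and the matrix and layers here are invented:

    import numpy as np

    # Pairwise comparison of three evidence layers (e.g., geology,
    # geochemistry, geophysics); values are illustrative judgements.
    A = np.array([[1.0, 3.0, 2.0],
                  [1 / 3.0, 1.0, 1 / 2.0],
                  [1 / 2.0, 2.0, 1.0]])

    vals, vecs = np.linalg.eig(A)           # AHP weights = principal eigenvector
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    w = w / w.sum()

    rng = np.random.default_rng(3)
    layers = rng.random((3, 50, 50))        # normalized evidence layers on a grid
    prospectivity = np.tensordot(w, layers, axes=1)  # weighted overlay
    print(w, prospectivity.shape)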

  8. Modeling the forces of cutting with scissors.

    PubMed

    Mahvash, Mohsen; Voo, Liming M; Kim, Diana; Jeung, Kristin; Wainer, Joshua; Okamura, Allison M

    2008-03-01

    Modeling forces applied to scissors during cutting of biological materials is useful for surgical simulation. Previous approaches to haptic display of scissor cutting are based on recording and replaying measured data. This paper presents an analytical model based on the concepts of contact mechanics and fracture mechanics to calculate forces applied to scissors during cutting of a slab of material. The model considers the process of cutting as a sequence of deformation and fracture phases. During deformation phases, forces applied to the scissors are calculated from a torque-angle response model synthesized from measurement data multiplied by a ratio that depends on the position of the cutting crack edge and the curve of the blades. Using the principle of conservation of energy, the forces of fracture are related to the fracture toughness of the material and the geometry of the blades of the scissors. The forces applied to scissors generally include high-frequency fluctuations. We show that the analytical model accurately predicts the average applied force. The cutting model is computationally efficient, so it can be used for real-time computations such as haptic rendering. Experimental results from cutting samples of paper, plastic, cloth, and chicken skin confirm the model, and the model is rendered in a haptic virtual environment.

  9. Randomly iterated search and statistical competency as powerful inversion tools for deformation source modeling: Application to volcano interferometric synthetic aperture radar data

    NASA Astrophysics Data System (ADS)

    Shirzaei, M.; Walter, T. R.

    2009-10-01

    Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.

  10. A modelling methodology to assess the effect of insect pest control on agro-ecosystems.

    PubMed

    Wan, Nian-Feng; Ji, Xiang-Yun; Jiang, Jie-Xian; Li, Bo

    2015-04-23

    The extensive use of chemical pesticides for pest management in agricultural systems can entail risks to the complex ecosystems consisting of economic, ecological and social subsystems. To analyze the negative and positive effects of external or internal disturbances on complex ecosystems, we proposed an ecological two-sidedness approach which has been applied to the design of pest-controlling strategies for pesticide pollution management. However, catastrophe theory has not previously been applied within this approach. Thus, we used an approach integrating ecological two-sidedness with a multi-criterion evaluation method from catastrophe theory to analyze the complexity of agro-ecosystems disturbed by the insecticides and to select the best insect pest-controlling strategy in cabbage production. The results showed that the order of the values of the evaluation index (RCC/CP) for the three strategies in cabbage production was "applying frequency vibration lamps and environment-friendly insecticides 8 times" (0.80) < "applying trap devices and environment-friendly insecticides 9 times" (0.83) < "applying common insecticides 14 times" (1.08). The treatment "applying frequency vibration lamps and environment-friendly insecticides 8 times" was considered the best insect pest-controlling strategy in cabbage production in Shanghai, China.

  11. A modelling methodology to assess the effect of insect pest control on agro-ecosystems

    PubMed Central

    Wan, Nian-Feng; Ji, Xiang-Yun; Jiang, Jie-Xian; Li, Bo

    2015-01-01

    The extensive use of chemical pesticides for pest management in agricultural systems can entail risks to the complex ecosystems consisting of economic, ecological and social subsystems. To analyze the negative and positive effects of external or internal disturbances on complex ecosystems, we proposed an ecological two-sidedness approach which has been applied to the design of pest-controlling strategies for pesticide pollution management. However, catastrophe theory has not previously been applied within this approach. Thus, we used an approach integrating ecological two-sidedness with a multi-criterion evaluation method from catastrophe theory to analyze the complexity of agro-ecosystems disturbed by the insecticides and to select the best insect pest-controlling strategy in cabbage production. The results showed that the order of the values of the evaluation index (RCC/CP) for the three strategies in cabbage production was “applying frequency vibration lamps and environment-friendly insecticides 8 times” (0.80) < “applying trap devices and environment-friendly insecticides 9 times” (0.83) < “applying common insecticides 14 times” (1.08). The treatment “applying frequency vibration lamps and environment-friendly insecticides 8 times” was considered the best insect pest-controlling strategy in cabbage production in Shanghai, China. PMID:25906199

  12. Development of a pheromone elution rate physical model

    USDA-ARS?s Scientific Manuscript database

    A first principle modeling approach is applied to available data describing the elution of semiochemicals from pheromone dispensers. These data include field data for 27 products developed by several manufacturers, including homemade devices, as well as laboratory data collected on three semiochemi...

  13. An Approach to the Evaluation of Hypermedia.

    ERIC Educational Resources Information Center

    Knussen, Christina; And Others

    1991-01-01

    Discusses methods that may be applied to the evaluation of hypermedia, based on six models described by Lawton. Techniques described include observation, self-report measures, interviews, automated measures, psychometric tests, checklists and criterion-based techniques, process models, Experimentally Measuring Usability (EMU), and a naturalistic…

  14. A systems modeling methodology for evaluation of vehicle aggressivity in the automotive accident environment

    DOT National Transportation Integrated Search

    2001-03-05

    A systems modeling approach is presented for assessment of harm in the automotive accident environment. The methodology is presented in general form and then applied to evaluate vehicle aggressivity in frontal crashes. The methodology consists of par...

  15. APPLICATION OF BENCHMARK DOSE METHODOLOGY TO DATA FROM PRENATAL DEVELOPMENTAL TOXICITY STUDIES

    EPA Science Inventory

    The benchmark dose (BMD) concept was applied to 246 conventional developmental toxicity datasets from government, industry and commercial laboratories. Five modeling approaches were used, two generic and three specific to developmental toxicity (DT models). BMDs for both quantal ...

  16. Bayesian structural equation modeling: a more flexible representation of substantive theory.

    PubMed

    Muthén, Bengt; Asparouhov, Tihomir

    2012-09-01

    This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.
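
    The approximate-zero idea has a simple maximum a posteriori reading: an informative N(0, 0.01) prior on a parameter acts like a strong per-coefficient ridge penalty, shrinking it toward zero without fixing it there. The analyses in the article are run in Mplus; the numpy sketch below is only an illustration of that prior on a toy regression coefficient standing in for a cross-loading:

    import numpy as np

    def map_coefficients(X, y, prior_var, noise_var=1.0):
        """MAP estimate under independent N(0, prior_var_j) priors, i.e.
        ridge with per-coefficient penalty noise_var / prior_var_j."""
        penalties = noise_var / np.asarray(prior_var)
        return np.linalg.solve(X.T @ X + np.diag(penalties), X.T @ y)

    rng = np.random.default_rng(4)
    X = rng.normal(size=(200, 2))
    y = X @ np.array([0.8, 0.15]) + rng.normal(size=200)

    # Diffuse prior on the main loading, N(0, 0.01) on the "cross-loading"
    print(map_coefficients(X, y, prior_var=[100.0, 0.01]))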

  17. The Prediction of the Gas Utilization Ratio Based on TS Fuzzy Neural Network and Particle Swarm Optimization

    PubMed Central

    Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong

    2018-01-01

    Gas utilization ratio (GUR) is an important indicator that is used to evaluate the energy consumption of blast furnaces (BFs). Currently, the existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and particle swarm optimization (PSO) to predict the GUR. The particle swarm algorithm is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. This paper also applies the box graph (Box-plot) method to eliminate abnormal values from the raw data during preprocessing. This method can deal with data that do not obey the normal distribution, a situation caused by complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN approach achieves higher prediction accuracy than the TS-FNN model and the SVM model, and the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control. PMID:29461469
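
    A minimal global-best PSO sketch, showing how swarm search can tune model parameters against a prediction error; the inertia and acceleration constants, and the toy curve-fitting objective, are assumptions in place of the paper's TS-FNN training problem:

    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200, seed=0):
        """Global-best particle swarm optimizer (a generic sketch)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(0.0, 3.0, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = x + v
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()

    t = np.linspace(0, 6, 100)
    y = 1.3 * np.sin(2.1 * t)                                # toy target
    rmse = lambda p: np.sqrt(np.mean((p[0] * np.sin(p[1] * t) - y) ** 2))
    print(pso(rmse, dim=2))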

  18. The Prediction of the Gas Utilization Ratio based on TS Fuzzy Neural Network and Particle Swarm Optimization.

    PubMed

    Zhang, Sen; Jiang, Haihe; Yin, Yixin; Xiao, Wendong; Zhao, Baoyong

    2018-02-20

    Gas utilization ratio (GUR) is an important indicator that is used to evaluate the energy consumption of blast furnaces (BFs). Currently, the existing methods cannot predict the GUR accurately. In this paper, we present a novel data-driven model for predicting the GUR. The proposed approach utilizes both the TS fuzzy neural network (TS-FNN) and particle swarm optimization (PSO) to predict the GUR. The particle swarm algorithm is applied to optimize the parameters of the TS-FNN in order to decrease the error caused by inaccurate initial parameters. This paper also applies the box graph (Box-plot) method to eliminate abnormal values from the raw data during preprocessing. This method can deal with data that do not obey the normal distribution, a situation caused by complex industrial environments. The prediction results demonstrate that the optimization model based on PSO and the TS-FNN approach achieves higher prediction accuracy than the TS-FNN model and the SVM model, and the proposed approach can accurately predict the GUR of the blast furnace, providing an effective way for on-line blast furnace distribution control.

  19. Two-layer symbolic representation for stochastic models with phase-type distributed events

    NASA Astrophysics Data System (ADS)

    Longo, Francesco; Scarpa, Marco

    2015-07-01

    Among the techniques that have been proposed for the analysis of non-Markovian models, the state space expansion approach showed great flexibility in terms of modelling capacities. The principal drawback is the explosion of the state space. This paper proposes a two-layer symbolic method for efficiently storing the expanded reachability graph of a non-Markovian model in the case in which continuous phase-type distributions are associated with the firing times of system events, and different memory policies are considered. At the lower layer, the reachability graph is symbolically represented in the form of a set of Kronecker matrices, while, at the higher layer, all the information needed to correctly manage event memory is stored in a multi-terminal multi-valued decision diagram. This information is collected by applying a symbolic algorithm, which is based on a pair of theorems. The efficiency of the proposed approach, in terms of memory occupation and execution time, is shown by applying it to a set of non-Markovian stochastic Petri nets and comparing it with a classical explicit expansion algorithm. Moreover, a comparison with a classical symbolic approach is performed whenever possible.

  20. Developing a local least-squares support vector machines-based neuro-fuzzy model for nonlinear and chaotic time series prediction.

    PubMed

    Miranian, A; Abdollahzade, M

    2013-02-01

    Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
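
    The partitioning idea can be illustrated with a much simpler stand-in: axis-orthogonal median splits with a ridge-regularized linear model in each leaf. The paper's approach uses LSSVM local models, validity functions, and the HBT split-selection heuristic; everything below is a simplified assumption:

    import numpy as np

    class LocalLinearTree:
        """Axis-orthogonal partitioning with local linear leaf models."""

        def __init__(self, max_depth=3, min_leaf=20, ridge=1e-3):
            self.max_depth, self.min_leaf, self.ridge = max_depth, min_leaf, ridge

        def fit(self, X, y, depth=0):
            if depth == self.max_depth or len(y) < 2 * self.min_leaf:
                Xb = np.column_stack([np.ones(len(y)), X])
                A = Xb.T @ Xb + self.ridge * np.eye(Xb.shape[1])
                self.coef, self.leaf = np.linalg.solve(A, Xb.T @ y), True
                return self
            self.leaf = False
            self.dim = int(np.argmax(X.var(axis=0)))   # split the widest axis
            self.thresh = float(np.median(X[:, self.dim]))
            m = X[:, self.dim] <= self.thresh
            self.left = LocalLinearTree(self.max_depth, self.min_leaf, self.ridge).fit(X[m], y[m], depth + 1)
            self.right = LocalLinearTree(self.max_depth, self.min_leaf, self.ridge).fit(X[~m], y[~m], depth + 1)
            return self

        def predict_one(self, x):
            if self.leaf:
                return self.coef[0] + x @ self.coef[1:]
            child = self.left if x[self.dim] <= self.thresh else self.right
            return child.predict_one(x)

    rng = np.random.default_rng(5)
    X = rng.uniform(-2, 2, (500, 1))
    y = np.sin(2 * X[:, 0]) + 0.05 * rng.normal(size=500)
    tree = LocalLinearTree().fit(X, y)
    print(tree.predict_one(np.array([0.5])), np.sin(1.0))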

  1. Predicting chemical bioavailability using microarray gene expression data and regression modeling: A tale of three explosive compounds.

    PubMed

    Gong, Ping; Nan, Xiaofei; Barker, Natalie D; Boyd, Robert E; Chen, Yixin; Wilkins, Dawn E; Johnson, David R; Suedel, Burton C; Perkins, Edward J

    2016-03-08

    Chemical bioavailability is an important dose metric in environmental risk assessment. Although many approaches have been used to evaluate bioavailability, not a single approach is free from limitations. Previously, we developed a new genomics-based approach that integrated microarray technology and regression modeling for predicting bioavailability (tissue residue) of explosives compounds in exposed earthworms. In the present study, we further compared 18 different regression models and performed variable selection simultaneously with parameter estimation. This refined approach was applied to both previously collected and newly acquired earthworm microarray gene expression datasets for three explosive compounds. Our results demonstrate that a prediction accuracy of R^2 = 0.71-0.82 was achievable at a relatively low model complexity with as few as 3-10 predictor genes per model. These results are much more encouraging than our previous ones. This study has demonstrated that our approach is promising for bioavailability measurement, which warrants further studies of mixed contamination scenarios in field settings.
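
    The variable-selection-with-estimation step can be sketched with an L1-penalized regression, which zeroes most coefficients and keeps a handful of predictor genes. LassoCV here is a stand-in for the 18 regression variants compared in the study, and the expression data are synthetic:

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(6)
    X = rng.normal(size=(60, 200))          # 60 worms x 200 gene expressions
    true = np.zeros(200)
    true[:5] = [1.2, -0.8, 0.6, 0.5, -0.4]  # residue depends on 5 genes
    y = X @ true + 0.3 * rng.normal(size=60)

    model = LassoCV(cv=5).fit(X, y)         # selection and estimation together
    selected = np.flatnonzero(model.coef_)
    print(len(selected), model.score(X, y)) # few predictor genes, R^2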

  2. Optimal design of supply chain network under uncertainty environment using hybrid analytical and simulation modeling approach

    NASA Astrophysics Data System (ADS)

    Chiadamrong, N.; Piyathanavong, V.

    2017-12-01

    Models that aim to optimize the design of supply chain networks have gained more interest in the supply chain literature. Mixed-integer linear programming and discrete-event simulation are widely used for such an optimization problem. We present a hybrid approach to support decisions for supply chain network design using a combination of analytical and discrete-event simulation models. The proposed approach is based on iterative procedures until the difference between subsequent solutions satisfies the pre-determined termination criteria. The effectiveness of the proposed approach is illustrated by an example, which shows near-optimal results with much faster solving times than the results obtained from the conventional simulation-based optimization model. The efficacy of this proposed hybrid approach is promising, and it can be applied as a powerful tool in designing a real supply chain network. It also provides the possibility to model and solve more realistic problems, which incorporate dynamism and uncertainty.

  3. The process group approach to reliable distributed computing

    NASA Technical Reports Server (NTRS)

    Birman, Kenneth P.

    1991-01-01

    The difficulty of developing reliable distributed software is an impediment to applying distributed computing technology in many settings. Experience with the ISIS system suggests that a structured approach based on virtually synchronous process groups yields systems which are substantially easier to develop, fault-tolerant, and self-managing. Six years of research on ISIS are reviewed, describing the model, the types of applications to which ISIS was applied, and some of the reasoning that underlies a recent effort to redesign and reimplement ISIS as a much smaller, lightweight system.

  4. Calculation of short-wave signal amplitude on the basis of the waveguide approach and the method of characteristics

    NASA Astrophysics Data System (ADS)

    Mikhailov, S. Ia.; Tumatov, K. I.

    The paper compares the results obtained using two methods to calculate the amplitude of a short-wave signal field incident on or reflected from a perfectly conducting earth. A technique is presented for calculating the geometric characteristics of the field based on the waveguide approach. It is shown that applying an extended system of characteristic equations to calculate the field amplitude is inadmissible in models which include discontinuities in the second derivatives of the permittivity, unless a suitable treatment of the discontinuity points is applied.

  5. Strip Yield Model Numerical Application to Different Geometries and Loading Conditions

    NASA Technical Reports Server (NTRS)

    Hatamleh, Omar; Forman, Royce; Shivakumar, Venkataraman; Lyons, Jed

    2006-01-01

    A new numerical method based on the strip-yield analysis approach was developed for calculating the Crack Tip Opening Displacement (CTOD). This approach can be applied for different crack configurations having infinite and finite geometries, and arbitrary applied loading conditions. The new technique adapts the boundary element / dislocation density method to obtain crack-face opening displacements at any point on a crack, and succeeds by obtaining requisite values as a series of definite integrals, the functional parts of each being evaluated exactly in a closed form.

  6. A Service Design Thinking Approach for Stakeholder-Centred eHealth.

    PubMed

    Lee, Eunji

    2016-01-01

    Studies have described the opportunities and challenges of applying service design techniques to health services, but empirical evidence on how such techniques can be implemented in the context of eHealth services is still lacking. This paper presents how a service design thinking approach can be applied for specification of an existing and new eHealth service by supporting evaluation of the current service and facilitating suggestions for the future service. We propose Service Journey Modelling Language and Service Journey Cards to engage stakeholders in the design of eHealth services.

  7. Assessing first-order emulator inference for physical parameters in nonlinear mechanistic models

    USGS Publications Warehouse

    Hooten, Mevin B.; Leeds, William B.; Fiechter, Jerome; Wikle, Christopher K.

    2011-01-01

    We present an approach for estimating physical parameters in nonlinear models that relies on an approximation to the mechanistic model itself for computational efficiency. The proposed methodology is validated and applied in two different modeling scenarios: (a) simulation and (b) a lower trophic level ocean ecosystem model. The approach we develop relies on the ability to predict right singular vectors (resulting from a decomposition of computer model experimental output) based on the computer model input and an experimental set of parameters. Critically, we model the right singular vectors in terms of the model parameters via a nonlinear statistical model. Specifically, we focus our attention on first-order models of these right singular vectors rather than the second-order (covariance) structure.

  8. Microscale Modeling of Porous Thermal Protection System Materials

    NASA Astrophysics Data System (ADS)

    Stern, Eric C.

    Ablative thermal protection system (TPS) materials play a vital role in the design of entry vehicles. Most simulation tools for ablative TPS in use today take a macroscopic approach to modeling, which involves heavy empiricism. Recent work has suggested improving the fidelity of the simulations by taking a multi-scale approach to the physics of ablation. In this work, a new approach for modeling ablative TPS at the microscale is proposed, and its feasibility and utility are assessed. This approach uses the Direct Simulation Monte Carlo (DSMC) method to simulate the gas flow through the microstructure, as well as the gas-surface interaction. Application of the DSMC method to this problem allows the gas phase dynamics, which are often rarefied, to be modeled to a high degree of fidelity. Furthermore, this method allows sophisticated gas-surface interaction models to be implemented. In order to test this approach for realistic materials, a method for generating artificial microstructures which emulate those found in spacecraft TPS is developed. Additionally, a novel approach for allowing the surface to move under the influence of chemical reactions at the surface is developed. This approach is shown to be efficient and robust for performing coupled simulation of the oxidation of carbon fibers. The microscale modeling approach is first applied to simulating the steady flow of gas through the porous medium. Predictions of Darcy permeability for an idealized microstructure agree with empirical correlations from the literature, as well as with predictions from computational fluid dynamics (CFD) when the continuum assumption is valid. Expected departures are observed for conditions at which the continuum assumption no longer holds. Comparisons of simulations using a fabricated microstructure to experimental data for a real spacecraft TPS material show good agreement when similar microstructural parameters are used to build the geometry. The approach is then applied to investigating the ablation of porous materials through oxidation. A simple gas-surface interaction model is described, and an approach for coupling the surface reconstruction algorithm to the DSMC method is outlined. Simulations of single carbon fibers at representative conditions suggest this approach to be feasible for simulating the ablation of porous TPS materials at scale. Additionally, the effect of various simulation parameters on in-depth morphology is investigated for random fibrous microstructures.

  9. An Ensemble Approach for Drug Side Effect Prediction

    PubMed Central

    Jahid, Md Jamiul; Ruan, Jianhua

    2014-01-01

    In silico prediction of drug side-effects in the early stage of drug development is becoming more popular nowadays, which not only reduces the time for drug design but also reduces drug development costs. In this article we propose an ensemble approach to predict drug side-effects of drug molecules based on their chemical structure. Our idea originates from the observation that similar drugs have similar side-effects. Based on this observation, we design an ensemble approach that combines the results from different classification models, where each model is generated by a different set of similar drugs. We applied our approach to 1385 side-effects in the SIDER database for 888 drugs. Results show that our approach outperformed previously published approaches and standard classifiers. Furthermore, we applied our method to a number of uncharacterized drug molecules in the DrugBank database and predicted their side-effect profiles for future usage. Results from various sources confirm that our method is able to predict the side-effects of uncharacterized drugs and, more importantly, is able to predict rare side-effects which are often ignored by other approaches. The method described in this article can be useful for predicting side-effects in an early stage of drug design to reduce experimental cost and time. PMID:25327524

  10. The conversion of exposures due to radon into the effective dose: the epidemiological approach.

    PubMed

    Beck, T R

    2017-11-01

    The risks and dose conversion coefficients for residential and occupational exposures due to radon were determined by applying the epidemiological risk models to ICRP representative populations. The dose conversion coefficient for residential radon was estimated with a value of 1.6 mSv year^-1 per 100 Bq m^-3 (3.6 mSv per WLM), which is significantly lower than the corresponding value derived from the biokinetic and dosimetric models. The dose conversion coefficient for occupational exposures, applying the risk models for miners, was estimated with a value of 14 mSv per WLM, which is in good accordance with the results of the dosimetric models. To resolve the discrepancy regarding residential radon, the ICRP approaches for the determination of risks and doses were reviewed. It could be shown that ICRP overestimates the risk for lung cancer caused by residential radon. This can be attributed to a wrong population weighting of the radon-induced risks in its epidemiological approach. With the approach in this work, the average risks for lung cancer were determined, taking into account the age-specific risk contributions of all individuals in the population. As a result, a lower risk coefficient for residential radon was obtained. The results from the ICRP biokinetic and dosimetric models for both the occupationally exposed working-age population and the whole population exposed to residential radon can be brought into better accordance with the corresponding results of the epidemiological approach if the respective relative radiation detriments and a radiation-weighting factor for alpha particles of about ten are used.

  11. An integrated uncertainty analysis and data assimilation approach for improved streamflow predictions

    NASA Astrophysics Data System (ADS)

    Hogue, T. S.; He, M.; Franz, K. J.; Margulis, S. A.; Vrugt, J. A.

    2010-12-01

    The current study presents an integrated uncertainty analysis and data assimilation approach to improve streamflow predictions while simultaneously providing meaningful estimates of the associated uncertainty. Study models include the National Weather Service (NWS) operational snow model (SNOW17) and rainfall-runoff model (SAC-SMA). The proposed approach uses the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) to simultaneously estimate uncertainties in model parameters, forcing, and observations. An ensemble Kalman filter (EnKF) is configured with the DREAM-identified uncertainty structure and applied to assimilate snow water equivalent data into the SNOW17 model for improved snowmelt simulations. Snowmelt estimates then serve as an input to the SAC-SMA model to provide streamflow predictions at the basin outlet. The robustness and usefulness of the approach are evaluated for a snow-dominated watershed in the northern Sierra Mountains. This presentation describes the implementation of DREAM and EnKF into the coupled SNOW17 and SAC-SMA models and summarizes study results and findings.
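
    The analysis step of the EnKF is compact enough to sketch directly; the stochastic (perturbed-observation) variant below is one common formulation, and the SWE numbers are illustrative:

    import numpy as np

    def enkf_update(ensemble, obs, obs_operator, obs_var, rng):
        """Perturbed-observation EnKF analysis for an (n_ens, n_state) ensemble."""
        n_ens = ensemble.shape[0]
        Hx = np.array([obs_operator(x) for x in ensemble])
        X = ensemble - ensemble.mean(axis=0)
        Y = Hx - Hx.mean(axis=0)
        P_xy = X.T @ Y / (n_ens - 1)
        P_yy = Y.T @ Y / (n_ens - 1) + np.diag(np.atleast_1d(obs_var))
        K = P_xy @ np.linalg.inv(P_yy)                 # Kalman gain
        perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=Hx.shape)
        return ensemble + (perturbed - Hx) @ K.T

    rng = np.random.default_rng(7)
    swe_forecast = rng.normal(120.0, 15.0, size=(50, 1))   # SWE ensemble, mm
    analysis = enkf_update(swe_forecast, np.array([100.0]),
                           lambda x: x, obs_var=np.array([25.0]), rng=rng)
    print(swe_forecast.mean(), analysis.mean())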

  12. A decision theoretical approach for diffusion promotion

    NASA Astrophysics Data System (ADS)

    Ding, Fei; Liu, Yun

    2009-09-01

    In order to maximize cost efficiency from scarce marketing resources, marketers are facing the problem of which group of consumers to target for promotions. We propose to use a decision theoretical approach to model this strategic situation. According to one promotion model that we develop, marketers balance between probabilities of successful persuasion and the expected profits on a diffusion scale, before making their decisions. In the other promotion model, the cost for identifying influence information is considered, and marketers are allowed to ignore individual heterogeneity. We apply the proposed approach to two threshold influence models, evaluate the utility of each promotion action, and provide discussions about the best strategy. Our results show that efforts for targeting influentials or easily influenced people might be redundant under some conditions.

  13. A water balance model to estimate flow through the Old and Middle River corridor

    USGS Publications Warehouse

    Andrews, Stephen W.; Gross, Edward S.; Hutton, Paul H.

    2016-01-01

    We applied a water balance model to predict tidally averaged (subtidal) flows through the Old River and Middle River corridor in the Sacramento–San Joaquin Delta. We reviewed the dynamics that govern subtidal flows and water levels and adopted a simplified representation. In this water balance approach, we estimated ungaged flows as linear functions of known (or specified) flows. We assumed that subtidal storage within the control volume varies because of fortnightly variation in subtidal water level, Delta inflow, and barometric pressure. The water balance model effectively predicts subtidal flows and approaches the accuracy of a 1–D Delta hydrodynamic model. We explore the potential to improve the approach by representing more complex dynamics and identify possible future improvements.
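
    The two ingredients described, calibrating ungaged flows as linear functions of known flows and closing a subtidal balance with a storage term, fit in a short sketch; all series and coefficients below are synthetic placeholders:

    import numpy as np

    def fit_ungaged(known, observed_ungaged):
        """Least-squares slope and intercept for an ungaged-flow relation."""
        A = np.column_stack([known, np.ones(len(known))])
        coef, *_ = np.linalg.lstsq(A, observed_ungaged, rcond=None)
        return coef

    rng = np.random.default_rng(8)
    q_in = 200 + 50 * rng.random(100)                      # known inflow
    q_ungaged_obs = 0.15 * q_in + 5 + rng.normal(0, 2, 100)
    slope, intercept = fit_ungaged(q_in, q_ungaged_obs)

    # Fortnightly storage variation, then the daily subtidal balance
    storage = 1e4 + 500 * np.sin(np.arange(101) * 2 * np.pi / 14.0)
    dS_dt = np.diff(storage)
    q_out = q_in + (slope * q_in + intercept) - dS_dt
    print(q_out[:5])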

  14. GIS Based Distributed Runoff Predictions in Variable Source Area Watersheds Employing the SCS-Curve Number

    NASA Astrophysics Data System (ADS)

    Steenhuis, T. S.; Mendoza, G.; Lyon, S. W.; Gerard Marchant, P.; Walter, M. T.; Schneiderman, E.

    2003-04-01

    Because the traditional Soil Conservation Service Curve Number (SCS-CN) approach continues to be ubiquitously used in GIS-based water quality models, new application methods are needed that are consistent with variable source area (VSA) hydrological processes in the landscape. We developed within an integrated GIS modeling environment a distributed approach for applying the traditional SCS-CN equation to watersheds where VSA hydrology is a dominant process. Spatial representation of hydrologic processes is important for watershed planning because restricting potentially polluting activities from runoff source areas is fundamental to controlling non-point source pollution. The methodology presented here uses the traditional SCS-CN method to predict runoff volume and the spatial extent of saturated areas and uses a topographic index to distribute runoff source areas through watersheds. The resulting distributed CN-VSA method was incorporated in an existing GWLF water quality model and applied to sub-watersheds of the Delaware basin in the Catskill Mountains region of New York State. We found that the distributed CN-VSA approach provided a physically-based method that gives realistic results for watersheds with VSA hydrology.
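
    The CN runoff equation at the core of the method, plus one way to flag runoff source areas from a topographic index, is sketched below; equating the saturated fraction to the runoff ratio is a simplification assumed here, not the paper's exact CN-VSA derivation:

    import numpy as np

    def scs_runoff(P, CN):
        """Event runoff depth (mm) from the SCS-CN equation with Ia = 0.2*S."""
        S = 25400.0 / CN - 254.0            # potential retention, mm
        Ia = 0.2 * S
        return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

    P, CN = 50.0, 75.0                      # storm depth (mm), curve number
    Q = scs_runoff(P, CN)
    frac_saturated = float(Q / P)           # assumed saturated fraction

    # Wetter (higher topographic index) cells saturate first
    rng = np.random.default_rng(9)
    topo_index = rng.gamma(3.0, 2.0, size=10000)
    threshold = np.quantile(topo_index, 1.0 - frac_saturated)
    source_area = topo_index >= threshold
    print(float(Q), frac_saturated, source_area.mean())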

  15. Predicting Future-Year Ozone Concentrations: Integrated Observational-Modeling Approach for Probabilistic Evaluation of the Efficacy of Emission Control strategies

    EPA Science Inventory

    Regional-scale air quality models are being used to demonstrate attainment of the ozone air quality standard. In current regulatory applications, a regional-scale air quality model is applied for a base year and a future year with reduced emissions using the same meteorological ...

  16. Families with Noncompliant Children: Applications of the Systemic Model.

    ERIC Educational Resources Information Center

    Neilans, Thomas H.; And Others

    This paper describes the application of a systems approach model to assessing families with a labeled noncompliant child. The first section describes and comments on the applied methodology for the model. The second section describes the classification of 61 families containing a child labeled by the family as noncompliant. An analysis of data…

  17. Relationships between water table and model simulated ET

    Treesearch

    Prem B. Parajuli; Gretchen F. Sassenrath; Ying Ouyang

    2013-01-01

    This research was conducted to develop relationships among evapotranspiration (ET), percolation (PERC), groundwater discharge to the stream (GWQ), and water table fluctuations through a modeling approach. The Soil and Water Assessment Tool (SWAT) hydrologic and crop models were applied in the Big Sunflower River watershed (BSRW; 7660 km2) within the Yazoo River Basin...

  18. 78 FR 70303 - Announcement of Requirements and Registration for the Predict the Influenza Season Challenge

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-25

    ... public. Mathematical and statistical models can be useful in predicting the timing and impact of the... applying any mathematical, statistical, or other approach to predictive modeling. This challenge will... Services (HHS) region level(s) in the United States by developing mathematical and statistical models that...

  19. Knowledge Management Model: Practical Application for Competency Development

    ERIC Educational Resources Information Center

    Lustri, Denise; Miura, Irene; Takahashi, Sergio

    2007-01-01

    Purpose: This paper seeks to present a knowledge management (KM) conceptual model for competency development and a case study in a law service firm, which implemented the KM model in a competencies development program. Design/methodology/approach: The case study method was applied according to Yin (2003) concepts, focusing a six-professional group…

  20. A 2D flood inundation model based on cellular automata approach

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Todini, Ezio

    2010-05-01

    In recent years, the cellular automata approach has been successfully applied in two-dimensional modelling of flood events. When used in experimental applications, models based on this approach have provided good results, comparable to those obtained with more complex 2D models; moreover, CA models have proven significantly faster and easier to apply than most existing models, and these features make them a valuable tool for flood analysis, especially when dealing with large areas. However, to date the real degree of accuracy of such models has not been demonstrated, since they have been mainly used in experimental applications, while very few comparisons with theoretical solutions have been made. Also, the use of an explicit scheme of solution, which is inherent in cellular automata models, forces them to work only with small time steps, thus reducing model computation speed. The present work describes a cellular automata model based on the continuity and diffusive wave equations. Several model versions based on different solution schemes have been realized and tested in a number of numerical cases, both 1D and 2D, comparing the results with theoretical and numerical solutions. In all cases, the model performed well compared to the reference solutions, and proved to be both stable and accurate. Finally, the version providing the best results in terms of stability was tested in a real flood event and compared with different hydraulic models. Again, the cellular automata model provided very good results, both in terms of computational speed and reproduction of the simulated event.
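
    A minimal cellular-automata update in the spirit described: each cell exchanges water with its east and south neighbours according to the water-surface slope and a Manning-type conveyance, with an explicit (small) time step. The transfer cap and parameter values are stability assumptions of this sketch, not the paper's scheme:

    import numpy as np

    def ca_step(z, d, dt, dx=10.0, n=0.05):
        """One explicit CA update of a 2-D diffusive-wave model.
        z: bed elevation (m), d: water depth (m)."""
        wse = z + d                                     # water surface elevation
        delta = np.zeros_like(d)
        for sl_from, sl_to in ((np.s_[:, :-1], np.s_[:, 1:]),   # east faces
                               (np.s_[:-1, :], np.s_[1:, :])):  # south faces
            dh = wse[sl_from] - wse[sl_to]
            donor = np.where(dh > 0, d[sl_from], d[sl_to])      # upwind depth
            vol = np.sign(dh) * donor ** (5 / 3) / n * np.sqrt(np.abs(dh) / dx) * dt / dx
            vol = np.clip(vol, -0.2 * d[sl_to], 0.2 * d[sl_from])  # keep depths >= 0
            delta[sl_from] -= vol
            delta[sl_to] += vol
        return d + delta

    z = np.zeros((50, 50))
    d = np.zeros((50, 50))
    d[25, 25] = 2.0                          # initial mound of water
    for _ in range(200):
        d = ca_step(z, d, dt=1.0)
    print(d.max(), d.sum())                  # the update conserves mass exactly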

  1. Approaches to developing alternative and predictive toxicology based on PBPK/PD and QSAR modeling.

    PubMed Central

    Yang, R S; Thomas, R S; Gustafson, D L; Campain, J; Benjamin, S A; Verhaar, H J; Mumtaz, M M

    1998-01-01

    Systematic toxicity testing, using conventional toxicology methodologies, of single chemicals and chemical mixtures is highly impractical because of the immense numbers of chemicals and chemical mixtures involved and the limited scientific resources. Therefore, the development of unconventional, efficient, and predictive toxicology methods is imperative. Using carcinogenicity as an end point, we present approaches for developing predictive tools for toxicologic evaluation of chemicals and chemical mixtures relevant to environmental contamination. Central to the approaches presented is the integration of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) and quantitative structure-activity relationship (QSAR) modeling with focused mechanistically based experimental toxicology. In this development, molecular and cellular biomarkers critical to the carcinogenesis process are evaluated quantitatively between different chemicals and/or chemical mixtures. Examples presented include the integration of PBPK/PD and QSAR modeling with a time-course medium-term liver foci assay, molecular biology and cell proliferation studies, Fourier transform infrared spectroscopic analyses of DNA changes, and cancer modeling to assess and attempt to predict the carcinogenicity of the series of 12 chlorobenzene isomers. Also presented is an ongoing effort to develop and apply a similar approach to chemical mixtures using in vitro cell culture (Syrian hamster embryo cell transformation assay and human keratinocytes) methodologies and in vivo studies. The promise and pitfalls of these developments are elaborated. When successfully applied, these approaches may greatly reduce animal usage, personnel, resources, and time required to evaluate the carcinogenicity of chemicals and chemical mixtures. PMID:9860897

  2. Non-parametric directionality analysis - Extension for removal of a single common predictor and application to time series.

    PubMed

    Halliday, David M; Senik, Mohd Harizal; Stevenson, Carl W; Mason, Rob

    2016-08-01

    The ability to infer network structure from multivariate neuronal signals is central to computational neuroscience. Directed network analyses typically use parametric approaches based on auto-regressive (AR) models, where networks are constructed from estimates of AR model parameters. However, the validity of using low order AR models for neurophysiological signals has been questioned. A recent article introduced a non-parametric approach to estimate directionality in bivariate data; non-parametric approaches are free from concerns over model validity. We extend the non-parametric framework to include measures of directed conditional independence, using scalar measures that decompose the overall partial correlation coefficient summatively by direction, and a set of functions that decompose the partial coherence summatively by direction. A time domain partial correlation function allows both time and frequency views of the data to be constructed. The conditional independence estimates are conditioned on a single predictor. The framework is applied to simulated cortical neuron networks and mixtures of Gaussian time series data with known interactions. It is applied to experimental data consisting of local field potential recordings from bilateral hippocampus in anaesthetised rats. The framework offers a novel non-parametric alternative for estimating directed interactions in multivariate neuronal recordings, with increased flexibility in dealing with both spike train and time series data. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Modeling the internal dynamics of energy and mass transfer in an imperfectly mixed ventilated airspace.

    PubMed

    Janssens, K; Van Brecht, A; Zerihun Desta, T; Boonen, C; Berckmans, D

    2004-06-01

    The present paper outlines a modeling approach, which has been developed to model the internal dynamics of heat and moisture transfer in an imperfectly mixed ventilated airspace. The modeling approach, which combines the classical heat and moisture balance differential equations with the use of experimental time-series data, provides a physically meaningful description of the process and is very useful for model-based control purposes. The paper illustrates how the modeling approach has been applied to a ventilated laboratory test room with internal heat and moisture production. The results are evaluated and some valuable suggestions for future research are forwarded. The modeling approach outlined in this study provides an ideal form for advanced model-based control system design. The relatively low number of parameters makes it well suited for model-based control purposes, as a limited number of identification experiments is sufficient to determine these parameters. The model concept provides information about the air quality and airflow pattern in an arbitrary building. By using this model as a simulation tool, the indoor air quality and airflow pattern can be optimized.

  4. Structure-Based Druggability Assessment of the Mammalian Structural Proteome with Inclusion of Light Protein Flexibility

    PubMed Central

    Loving, Kathryn A.; Lin, Andy; Cheng, Alan C.

    2014-01-01

    Advances reported over the last few years and the increasing availability of protein crystal structure data have greatly improved structure-based druggability approaches. However, in practice, nearly all druggability estimation methods are applied to protein crystal structures as rigid proteins, with protein flexibility often not directly addressed. The inclusion of protein flexibility is important in correctly identifying the druggability of pockets that would be missed by methods based solely on the rigid crystal structure. These include cryptic pockets and flexible pockets often found at protein-protein interaction interfaces. Here, we apply an approach that uses protein modeling in concert with druggability estimation to account for light protein backbone movement and protein side-chain flexibility in protein binding sites. We assess the advantages and limitations of this approach on widely-used protein druggability sets. Applying the approach to all mammalian protein crystal structures in the PDB results in identification of 69 proteins with potential druggable cryptic pockets. PMID:25079060

  5. Functional Enzyme-Based Approach for Linking Microbial Community Functions with Biogeochemical Process Kinetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Minjing; Qian, Wei-jun; Gao, Yuqian

    The kinetics of biogeochemical processes in natural and engineered environmental systems are typically described using Monod-type or modified Monod-type models. These models rely on biomass as a surrogate for the functional enzymes in a microbial community that catalyze biogeochemical reactions. A major challenge in applying such models is the difficulty of quantitatively measuring functional biomass to constrain and validate the models. On the other hand, omics-based approaches have been increasingly used to characterize microbial community structure, functions, and metabolites. Here we propose an enzyme-based model that can incorporate omics data to link microbial community functions with biogeochemical process kinetics. The model treats enzymes as time-variable catalysts for biogeochemical reactions and applies a biogeochemical reaction network to incorporate intermediate metabolites. The sequences of genes and proteins from metagenomes, as well as those from the UniProt database, were used for targeted enzyme quantification and to provide insights into the dynamic linkage among functional genes, enzymes, and metabolites that need to be incorporated in the model. The application of the model was demonstrated using denitrification as an example, by comparing model-simulated and measured functional enzymes, genes, and denitrification substrates and intermediates.
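
    To make the contrast concrete, here is a minimal sketch of substrate dynamics under an enzyme-based rate law, in which a time-variable enzyme pool E(t) rather than bulk biomass acts as the catalyst. The rate constants and the substrate-induced enzyme synthesis term are invented for illustration only, not taken from the paper.

    ```python
    # Substrate S consumed by an enzyme pool E that is itself dynamic.
    from scipy.integrate import solve_ivp

    k_cat, K_m = 10.0, 2.0     # enzyme turnover and half-saturation (assumed)
    k_syn, k_deg = 0.05, 0.02  # enzyme synthesis/degradation rates (assumed)
    K_s = 2.0                  # half-saturation for substrate-induced synthesis

    def enzyme_model(t, y):
        S, E = y                            # substrate and functional enzyme
        rate = k_cat * E * S / (K_m + S)    # reaction catalyzed by E, not biomass
        dE = k_syn * S / (K_s + S) - k_deg * E  # enzyme pool rises and decays
        return [-rate, dE]

    sol = solve_ivp(enzyme_model, (0.0, 50.0), [10.0, 0.01])
    print(sol.y[0, -1], sol.y[1, -1])       # final substrate and enzyme levels
    ```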

  6. Application of a hurdle negative binomial count data model to demand for bass fishing in the southeastern United States.

    PubMed

    Bilgic, Abdulbaki; Florkowski, Wojciech J

    2007-06-01

    This paper identifies factors that influence the demand for a bass fishing trip taken in the southeastern United States using a hurdle negative binomial count data model. The probability of fishing for bass is estimated in the first stage, and the fishing trip frequency is estimated in the second stage for individuals reporting bass fishing trips in the Southeast. The applied approach allows the decomposition of the effects of factors responsible for the decision to take a trip and for the number of trips. Calculated partial and total elasticities indicate a highly inelastic demand for the number of fishing trips as trip costs increase. However, the demand can be expected to increase if anglers experience success, measured by the number or size of caught fish. Benefit estimates based on alternative estimation methods differ substantially, suggesting the need for testing each modeling approach applied in empirical studies.
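
    A hurdle specification can be sketched in two stages, as below: a logit for whether any trip is taken, then a count model on the positive observations. For brevity the sketch omits the zero-truncation correction that a full hurdle model applies in the second stage, and all variables are synthetic stand-ins for the survey data.

    ```python
    # Two-stage hurdle sketch: participation logit + count model on positives.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    X = sm.add_constant(rng.normal(size=(n, 2)))   # e.g. trip cost, catch rate
    trips = rng.poisson(np.exp(X @ [0.5, -0.3, 0.4]))  # synthetic trip counts

    participates = (trips > 0).astype(int)
    hurdle = sm.Logit(participates, X).fit(disp=0)     # stage 1: any trip?

    pos = trips > 0                                     # stage 2: how many?
    counts = sm.NegativeBinomial(trips[pos], X[pos]).fit(disp=0)

    print(hurdle.params, counts.params)
    ```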

  7. Modelling Peri-Perceptual Brain Processes in a Deep Learning Spiking Neural Network Architecture.

    PubMed

    Gholami Doborjeh, Zohreh; Kasabov, Nikola; Gholami Doborjeh, Maryam; Sumich, Alexander

    2018-06-11

    Familiarity of marketing stimuli may affect consumer behaviour at a peri-perceptual processing level. The current study introduces a method for deep learning of electroencephalogram (EEG) data using a spiking neural network (SNN) approach that reveals the complexity of peri-perceptual processes of familiarity. The method is applied to data from 20 participants viewing familiar and unfamiliar logos. The results support the potential of SNN models as novel tools in the exploration of peri-perceptual mechanisms that respond differentially to familiar and unfamiliar stimuli. Specifically, the activation pattern of the time-locked response identified by the proposed SNN model at approximately 200 milliseconds post-stimulus suggests greater connectivity and more widespread dynamic spatio-temporal patterns for familiar than unfamiliar logos. The proposed SNN approach can be applied to study other peri-perceptual or perceptual brain processes in cognitive and computational neuroscience.

  8. Nuclear quadrupole resonance lineshape analysis for different motional models: Stochastic Liouville approach

    NASA Astrophysics Data System (ADS)

    Kruk, D.; Earle, K. A.; Mielczarek, A.; Kubica, A.; Milewska, A.; Moscicki, J.

    2011-12-01

    A general theory of lineshapes in nuclear quadrupole resonance (NQR), based on the stochastic Liouville equation, is presented. The description is valid for arbitrary motional conditions (particularly beyond the valid range of perturbation approaches) and interaction strengths. It can be applied to the computation of NQR spectra for any spin quantum number and for any applied magnetic field. The treatment presented here is an adaptation of the "Swedish slow motion theory," [T. Nilsson and J. Kowalewski, J. Magn. Reson. 146, 345 (2000), 10.1006/jmre.2000.2125] originally formulated for paramagnetic systems, to NQR spectral analysis. The description is formulated for simple (Brownian) diffusion, free diffusion, and jump diffusion models. The two latter models account for molecular cooperativity effects in dense systems (such as liquids of high viscosity or molecular glasses). The sensitivity of NQR slow motion spectra to the mechanism of the motional processes modulating the nuclear quadrupole interaction is discussed.

  9. A proof for loop-law constraints in stoichiometric metabolic networks

    PubMed Central

    2012-01-01

    Background Constraint-based modeling is increasingly employed for metabolic network analysis. Its underlying assumption is that natural metabolic phenotypes can be predicted by adding physicochemical constraints to remove unrealistic metabolic flux solutions. The loopless-COBRA approach provides an additional constraint that eliminates thermodynamically infeasible internal cycles (or loops) from the space of solutions. This allows the prediction of flux solutions that are more consistent with experimental data. However, it is not clear if this approach over-constrains the models by removing non-loop solutions as well. Results Here we apply Gordan’s theorem from linear algebra to prove for the first time that the constraints added in loopless-COBRA do not over-constrain the problem beyond the elimination of the loops themselves. Conclusions The loopless-COBRA constraints can be reliably applied. Furthermore, this proof may be adapted to evaluate the theoretical soundness for other methods in constraint-based modeling. PMID:23146116
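
    In practice these constraints are available off the shelf. The sketch below shows, assuming a recent COBRApy release, how they would be applied: the cobra.flux_analysis.loopless module exposes both a post-hoc flux correction and the direct addition of the ll-COBRA constraints discussed here, and "textbook" refers to the bundled E. coli core model.

    ```python
    # Applying the loopless-COBRA constraints with COBRApy (sketch).
    from cobra.io import load_model
    from cobra.flux_analysis.loopless import add_loopless, loopless_solution

    model = load_model("textbook")       # bundled E. coli core model

    # Post-hoc correction of a single flux distribution:
    solution = loopless_solution(model)

    # Or add the loop-law constraints to the problem itself; per this paper,
    # doing so removes only thermodynamically infeasible internal cycles:
    add_loopless(model)
    print(model.optimize().objective_value)
    ```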

  10. A semiparametric graphical modelling approach for large-scale equity selection

    PubMed Central

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
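
    The rank-based estimation pipeline can be sketched as follows: Kendall's tau is mapped to a latent Gaussian correlation via the nonparanormal sine transform, and a sparse precision matrix is then estimated, whose zero entries flag conditionally independent stocks. The data below are heavy-tailed random stand-ins, and the paper's stability-inference step is omitted.

    ```python
    # Rank-based latent correlation + sparse graph estimation (sketch).
    import numpy as np
    from scipy.stats import kendalltau
    from sklearn.covariance import graphical_lasso

    def rank_correlation(returns):
        """Latent Gaussian correlation via sin(pi/2 * Kendall's tau)."""
        p = returns.shape[1]
        R = np.eye(p)
        for i in range(p):
            for j in range(i + 1, p):
                tau, _ = kendalltau(returns[:, i], returns[:, j])
                R[i, j] = R[j, i] = np.sin(np.pi / 2.0 * tau)
        return R

    rng = np.random.default_rng(1)
    returns = rng.standard_t(df=4, size=(250, 8))  # heavy-tailed stand-in data
    R = rank_correlation(returns)
    cov, precision = graphical_lasso(R, alpha=0.1)
    independent = np.isclose(precision, 0.0)       # zero partials = candidates
    print(independent.sum())
    ```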

  11. Constraint-Based Local Search for Constrained Optimum Paths Problems

    NASA Astrophysics Data System (ADS)

    Pham, Quang Dung; Deville, Yves; van Hentenryck, Pascal

    Constrained Optimum Path (COP) problems arise in many real-life applications and are ubiquitous in communication networks. They have been traditionally approached by dedicated algorithms, which are often hard to extend with side constraints and to apply widely. This paper proposes a constraint-based local search (CBLS) framework for COP applications, bringing the compositionality, reuse, and extensibility at the core of CBLS and CP systems. The modeling contribution is the ability to express compositional models for various COP applications at a high level of abstraction, while cleanly separating the model and the search procedure. The main technical contribution is a connected neighborhood based on rooted spanning trees to find high-quality solutions to COP problems. The framework, implemented in COMET, is applied to Resource Constrained Shortest Path (RCSP) problems (with and without side constraints) and to the edge-disjoint paths problem (EDP). Computational results show the potential significance of the approach.

  12. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.
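
    The decoding step of such a model reduces to a Viterbi pass over a left-to-right transition structure, since the capsule can only move forward through the four segments. Below is a minimal sketch with random emission scores standing in for the SVM outputs; the stay probability crudely plays the role of the segment-length prior.

    ```python
    # Viterbi decoding with a left-to-right (forward-only) HMM constraint.
    import numpy as np

    n_states, n_frames = 4, 200
    rng = np.random.default_rng(0)
    log_emis = np.log(rng.dirichlet(np.ones(n_states), size=n_frames))

    stay = 0.98  # stand-in for the length prior: stay or advance only
    log_trans = np.full((n_states, n_states), -np.inf)
    for s in range(n_states):
        log_trans[s, s] = np.log(stay)
        if s + 1 < n_states:
            log_trans[s, s + 1] = np.log(1.0 - stay)

    delta = np.full((n_frames, n_states), -np.inf)
    psi = np.zeros((n_frames, n_states), dtype=int)
    delta[0, 0] = log_emis[0, 0]          # the video must start in segment 1
    for t in range(1, n_frames):
        for s in range(n_states):
            scores = delta[t - 1] + log_trans[:, s]
            psi[t, s] = np.argmax(scores)
            delta[t, s] = scores[psi[t, s]] + log_emis[t, s]

    path = [int(np.argmax(delta[-1]))]    # backtrace the best segment labels
    for t in range(n_frames - 1, 0, -1):
        path.append(psi[t, path[-1]])
    print(list(reversed(path))[:10])
    ```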

  13. Past, present and prospect of an Artificial Intelligence (AI) based model for sediment transport prediction

    NASA Astrophysics Data System (ADS)

    Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher

    2016-10-01

    An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers due to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving any non-linear model, as they can deal easily with large numbers of data and sophisticated models. This paper is a review of the AI approaches that have been applied in sediment modelling. The current research focuses on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, complementary models have proved superior to classical modelling approaches.

  14. Testing the multidimensionality of the inventory of school motivation in a Dutch student sample.

    PubMed

    Korpershoek, Hanke; Xu, Kun; Mok, Magdalena Mo Ching; McInerney, Dennis M; van der Werf, Greetje

    2015-01-01

    A factor analytic and a Rasch measurement approach were applied to evaluate the multidimensional nature of the school motivation construct among more than 7,000 Dutch secondary school students. The Inventory of School Motivation (McInerney and Ali, 2006) was used, which intends to measure four motivation dimensions (mastery, performance, social, and extrinsic motivation), each comprising two first-order factors. One unidimensional model and three multidimensional models (4-factor, 8-factor, higher order) were fit to the data. Results of both approaches showed that the multidimensional models validly represented school motivation among Dutch secondary school pupils, whereas the model fit of the unidimensional model was poor. The differences in model fit between the three multidimensional models were small, although a different model was favoured by each of the two approaches. The need for improvement of some of the items and the need to increase the measurement precision of several first-order factors are discussed.

  15. Probabilistic flood damage modelling at the meso-scale

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2014-05-01

    Decisions on flood risk management and adaptation are usually based on risk analyses. Such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments. Most damage models have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood damage models have been developed and validated on the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we show how the model BT-FLEMO (Bagging decision Tree based Flood Loss Estimation MOdel) can be applied on the meso-scale, namely on the basis of ATKIS land-use units. The model is applied in 19 municipalities which were affected during the 2002 flood by the River Mulde in Saxony, Germany. The application of BT-FLEMO provides a probability distribution of estimated damage to residential buildings per municipality. Validation is undertaken on the one hand via a comparison with eight other damage models, including stage-damage functions as well as multi-variate models, and on the other hand against official damage data provided by the Saxon Relief Bank (SAB). The results show that uncertainties in damage estimation remain high. The significant advantage of the probabilistic flood loss estimation model BT-FLEMO is thus that it inherently provides quantitative information about the uncertainty of the prediction. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64.
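
    The bagging mechanism that yields such a damage distribution can be sketched in a few lines: each bootstrapped tree produces its own loss estimate, and the spread across trees is the predictive distribution. Features and loss values below are synthetic placeholders, not the ATKIS predictors used by BT-FLEMO.

    ```python
    # Bagged decision trees whose per-tree spread gives a damage distribution.
    import numpy as np
    from sklearn.ensemble import BaggingRegressor
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(300, 4))   # e.g. water depth, duration, building type
    y = 50_000 * X[:, 0] + 10_000 * rng.normal(size=300)  # synthetic loss [EUR]

    model = BaggingRegressor(DecisionTreeRegressor(), n_estimators=100,
                             random_state=0).fit(X, y)

    x_new = rng.uniform(size=(1, 4))
    per_tree = np.array([t.predict(x_new)[0] for t in model.estimators_])
    print(per_tree.mean(), np.percentile(per_tree, [5, 95]))  # loss distribution
    ```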

  16. TU-G-210-02: TRANS-FUSIMO - An Integrative Approach to Model-Based Treatment Planning of Liver FUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preusser, T.

    Modeling can play a vital role in predicting, optimizing and analyzing the results of therapeutic ultrasound treatments. Simulating the propagating acoustic beam in various targeted regions of the body allows for the prediction of the resulting power deposition and temperature profiles. In this session we will apply various modeling approaches to breast, abdominal organ and brain treatments. Of particular interest is the effectiveness of procedures for correcting for phase aberrations caused by intervening irregular tissues, such as the skull in transcranial applications or inhomogeneous breast tissues. Also described are methods to compensate for motion in targeted abdominal organs such as the liver or kidney. Douglas Christensen – Modeling for Breast and Brain HIFU Treatment Planning; Tobias Preusser – TRANS-FUSIMO – An Integrative Approach to Model-Based Treatment Planning of Liver FUS. Learning Objectives: Understand the role of acoustic beam modeling for predicting the effectiveness of therapeutic ultrasound treatments. Apply acoustic modeling to specific breast, liver, kidney and transcranial anatomies. Determine how to obtain appropriate acoustic modeling parameters from clinical images. Understand the separate roles of absorption and scattering in energy delivery to tissues. See how organ motion can be compensated for in ultrasound therapies. Compare simulated data with clinical temperature measurements in transcranial applications. Supported by NIH R01 HL172787 and R01 EB013433 (DC); EU Seventh Framework Programme (FP7/2007-2013) under 270186 (FUSIMO) and 611889 (TRANS-FUSIMO) (TP); and P01 CA159992, GE, FUSF and InSightec (UV)

  17. Le management des projets scientifiques

    NASA Astrophysics Data System (ADS)

    Perrier, Françoise

    2000-12-01

    We describe in this paper a new approach to the management of scientific projects. This approach is the result of a long reflection carried out within the MQDP (Methodology and Quality in the Project Development) group of INSU-CNRS, and continued with Guy Serra. Our reflection began with the study of the so-called 'North-American Paradigm', which was initially considered the only relevant management model. Through our active participation in several astrophysical projects we realized that this model could not be applied to our laboratories without major modifications. Therefore, step by step, we have constructed our own methodology, making the fullest use of the human resources existing in our research field, with their habits and skills. We have also participated in various working groups in industrial and scientific organizations for the benefit of CNRS. The management model presented here is based on a systemic and complex approach. This approach lets us describe the multiple aspects of a scientific project, especially taking into account the human dimension. The project system model includes three major interconnected systems, immersed within an influencing and influenced environment: the 'System to be Realized', which defines the scientific and technical tasks leading to the scientific goals; the 'Realizing System', which describes procedures, processes and organization; and the 'Actors' System', which implements and boosts all the processes. Each one exists only through a series of successive models, elaborated at predefined dates of the project called 'key-points'. These systems evolve with time and under often-unpredictable circumstances, and the models have to take this into account. At each key-point, each model is compared to reality and the difference between the predicted and realized tasks is evaluated in order to define the data for the next model. This model can be applied to any kind of project.

  18. Turbulent Convection in an Anelastic Rotating Sphere: A Model for the Circulation on the Giant Planets

    DTIC Science & Technology

    2008-06-01

    [Fragmentary text] ...exterior weather layer, using a quasigeostrophic two-layer channel model on a beta plane, where the columnar interior is therefore represented by a... turbulence on a beta plane. These two approaches ('shallow' and 'deep' models) have been in debate ever since the 'shallow' approach was first applied to the giant planets.

  19. Space-time latent component modeling of geo-referenced health data.

    PubMed

    Lawson, Andrew B; Song, Hae-Ryoung; Cai, Bo; Hossain, Md Monir; Huang, Kun

    2010-08-30

    Latent structure models have been proposed in many applications. For space-time health data it is often important to be able to find the underlying trends in time, which are supported by subsets of small areas. Latent structure modeling is one such approach to this analysis. This paper presents a mixture-based approach that can be applied to component selection. The analysis of a Georgia ambulatory asthma county-level data set is presented and a simulation-based evaluation is made. Copyright (c) 2010 John Wiley & Sons, Ltd.

  20. Mass spectrometry-based protein identification by integrating de novo sequencing with database searching.

    PubMed

    Wang, Penghao; Wilson, Susan R

    2013-01-01

    Mass spectrometry-based protein identification is a very challenging task. The main identification approaches include de novo sequencing and database searching. Both approaches have shortcomings, so an integrative approach has been developed. The integrative approach firstly infers partial peptide sequences, known as tags, directly from tandem spectra through de novo sequencing, and then puts these sequences into a database search to see if a close peptide match can be found. However the current implementation of this integrative approach has several limitations. Firstly, simplistic de novo sequencing is applied and only very short sequence tags are used. Secondly, most integrative methods apply an algorithm similar to BLAST to search for exact sequence matches and do not accommodate sequence errors well. Thirdly, by applying these methods the integrated de novo sequencing makes a limited contribution to the scoring model which is still largely based on database searching. We have developed a new integrative protein identification method which can integrate de novo sequencing more efficiently into database searching. Evaluated on large real datasets, our method outperforms popular identification methods.
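
    The tag-based integration can be illustrated with a toy example: de novo tags anchor a database scan, and candidate peptides are kept when a tag matches with a small number of sequence errors rather than only on exact hits. All sequences and the mismatch tolerance below are invented; a real implementation would also score against spectra and peptide masses.

    ```python
    # Error-tolerant sequence-tag filtering of a peptide database (toy sketch).
    def tag_matches(tag, peptide, max_mismatch=1):
        """True if `tag` aligns somewhere in `peptide` with at most
        `max_mismatch` substitutions (a crude error-tolerant match)."""
        for start in range(len(peptide) - len(tag) + 1):
            window = peptide[start:start + len(tag)]
            mismatches = sum(a != b for a, b in zip(tag, window))
            if mismatches <= max_mismatch:
                return True
        return False

    database = ["MKWVTFISLLLLFSSAYS", "GLSDGEWQLVLNVWGK", "VEADIAGHGQEVLIR"]
    tags = ["WQLV", "GHGQ", "FISL"]   # short de novo sequence tags

    candidates = [pep for pep in database
                  if any(tag_matches(tag, pep) for tag in tags)]
    print(candidates)   # candidates passed on to full database scoring
    ```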

  1. Time-lapse joint inversion of geophysical data with automatic joint constraints and dynamic attributes

    NASA Astrophysics Data System (ADS)

    Rittgers, J. B.; Revil, A.; Mooney, M. A.; Karaoulis, M.; Wodajo, L.; Hickey, C. J.

    2016-12-01

    Joint inversion and time-lapse inversion techniques of geophysical data are often implemented in an attempt to improve imaging of complex subsurface structures and dynamic processes by minimizing negative effects of random and uncorrelated spatial and temporal noise in the data. We focus on the structural cross-gradient (SCG) approach (enforcing recovered models to exhibit similar spatial structures) in combination with time-lapse inversion constraints applied to surface-based electrical resistivity and seismic traveltime refraction data. The combination of both techniques is justified by the underlying petrophysical models. We investigate the benefits and trade-offs of SCG and time-lapse constraints. Using a synthetic case study, we show that a combined joint time-lapse inversion approach provides an overall improvement in final recovered models. Additionally, we introduce a new approach to reweighting SCG constraints based on an iteratively updated normalized ratio of model sensitivity distributions at each time-step. We refer to the new technique as the Automatic Joint Constraints (AJC) approach. The relevance of the new joint time-lapse inversion process is demonstrated on the synthetic example. Then, these approaches are applied to real time-lapse monitoring field data collected during a quarter-scale earthen embankment induced-piping failure test. The use of time-lapse joint inversion is justified by the fact that a change of porosity drives concomitant changes in seismic velocities (through its effect on the bulk and shear moduli) and resistivities (through its influence upon the formation factor). Combined with the definition of attributes (i.e. specific characteristics) of the evolving target associated with piping, our approach allows localizing the position of the preferential flow path associated with internal erosion. This is not the case using other approaches.
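
    The structural cross-gradient itself is compact: on a 2-D grid it is the out-of-plane component of grad(m1) x grad(m2), which vanishes wherever the two models share structure. Below is a minimal numpy sketch on synthetic models; the AJC reweighting described above would additionally scale this constraint cell-by-cell with normalized sensitivity ratios.

    ```python
    # Structural cross-gradient between two 2-D models (sketch).
    import numpy as np

    def cross_gradient(m1, m2, dx=1.0, dz=1.0):
        g1z, g1x = np.gradient(m1, dz, dx)   # gradients along z (rows), x (cols)
        g2z, g2x = np.gradient(m2, dz, dx)
        return g1x * g2z - g1z * g2x         # zero where structures align

    rng = np.random.default_rng(0)
    resistivity = rng.normal(size=(40, 60))
    velocity = 2.0 * resistivity + 0.01 * rng.normal(size=(40, 60))  # shared structure
    print(np.abs(cross_gradient(resistivity, velocity)).mean())      # near zero
    ```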

  2. Profile-Based LC-MS Data Alignment—A Bayesian Approach

    PubMed Central

    Tsai, Tsung-Heng; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2014-01-01

    A Bayesian alignment model (BAM) is proposed for alignment of liquid chromatography-mass spectrometry (LC-MS) data. BAM belongs to the category of profile-based approaches, which are composed of two major components: a prototype function and a set of mapping functions. Appropriate estimation of these functions is crucial for good alignment results. BAM uses Markov chain Monte Carlo (MCMC) methods to draw inference on the model parameters and improves on existing MCMC-based alignment methods through 1) the implementation of an efficient MCMC sampler and 2) an adaptive selection of knots. A block Metropolis-Hastings algorithm that mitigates the problem of the MCMC sampler getting stuck at local modes of the posterior distribution is used for the update of the mapping function coefficients. In addition, a stochastic search variable selection (SSVS) methodology is used to determine the number and positions of knots. We applied BAM to a simulated data set, an LC-MS proteomic data set, and two LC-MS metabolomic data sets, and compared its performance with the Bayesian hierarchical curve registration (BHCR) model, the dynamic time-warping (DTW) model, and the continuous profile model (CPM). The advantage of applying appropriate profile-based retention time correction prior to performing a feature-based approach is also demonstrated through the metabolomic data sets. PMID:23929872

  3. Analytical and computational modeling of early penetration of non-enveloped icosahedral viruses into cells.

    PubMed

    Katzengold, Rona; Zaharov, Evgeniya; Gefen, Amit

    2016-07-27

    As obligate intracellular parasites, all viruses penetrate target cells to initiate replication and infection. This study introduces two approaches for evaluating the contact loads applied to a cell during early penetration of non-enveloped icosahedral viruses. The first approach is analytical modeling based on Hertz's theory for the contact of two elastic bodies; here we model the virus capsid as a triangle and the cell as an order-of-magnitude larger sphere. The second approach is finite element modeling, where we simulate three types of viruses: adeno-, papilloma-, and polioviruses, each interacting with a cell section. We find that the peak contact pressures and forces generated at the initial virus-cell contact depend on the virus geometry, that is, both size and shape. With respect to shape, we show that the icosahedral virus shape induces greater peak pressures than a spherical virus shape. With respect to size, the larger the virus, the greater the contact loads in the attacked cell. Our modeling can be substantially useful not only in basic science studies but also in other, more applied fields, such as gene therapy or 'phage' virus studies.
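
    The Hertzian relation behind such an analytical model gives the contact force between two elastic bodies as a function of indentation depth. A minimal sketch follows; the material values are rough placeholders, not the capsid or membrane moduli used in the paper.

    ```python
    # Hertz contact force between two elastic bodies (illustrative values).
    import numpy as np

    def hertz_force(delta, R_eff, E1, nu1, E2, nu2):
        """F = (4/3) * E_star * sqrt(R_eff) * delta**1.5, with
        1/E_star = (1 - nu1**2)/E1 + (1 - nu2**2)/E2."""
        E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)
        return (4.0 / 3.0) * E_star * np.sqrt(R_eff) * delta**1.5

    # e.g. a ~40 nm virus pressing ~1 nm into a much softer, larger cell:
    F = hertz_force(delta=1e-9, R_eff=20e-9, E1=1e9, nu1=0.3, E2=1e3, nu2=0.5)
    print(F)   # contact force in newtons
    ```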

  4. A statistical approach to evaluate flood risk at the regional level: an application to Italy

    NASA Astrophysics Data System (ADS)

    Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea

    2016-04-01

    Floods are frequent and widespread in Italy, causing every year multiple fatalities and extensive damage to public and private structures. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs starting from the estimation of the underlying flood hazard. However, comprehensive and coherent information on flood prone areas, and estimates of the frequency and intensity of flood events, are not often available at scales appropriate for risk pooling and diversification. In Italy, River Basins Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for the flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the related impact. Model performance is evaluated by comparing the predicted flood prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess flood economic impacts. Furthermore, under the assumption of an appropriate statistical characterization of flood risk, the proposed procedure could be applied straightforwardly outside the national borders, particularly in areas with similar geo-environmental settings.

  5. Hidden Markov models of biological primary sequence information.

    PubMed Central

    Baldi, P; Chauvin, Y; Hunkapiller, T; McClure, M A

    1994-01-01

    Hidden Markov model (HMM) techniques are used to model families of biological sequences. A smooth and convergent algorithm is introduced to iteratively adapt the transition and emission parameters of the models from the examples in a given family. The HMM approach is applied to three protein families: globins, immunoglobulins, and kinases. In all cases, the models derived capture the important statistical characteristics of the family and can be used for a number of tasks, including multiple alignments, motif detection, and classification. For K sequences of average length N, this approach yields an effective multiple-alignment algorithm which requires O(KN²) operations, linear in the number of sequences. PMID:8302831
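
    At the heart of such HMM techniques is the forward recursion used to score a sequence against the model. The sketch below implements it for a generic discrete HMM with invented parameters; a real profile HMM would add the match/insert/delete state topology.

    ```python
    # Forward algorithm in log space for a discrete-emission HMM (sketch).
    import numpy as np

    def forward_loglik(obs, log_pi, log_A, log_B):
        """log P(obs | model) for a sequence of observation indices `obs`."""
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            # logsumexp over previous states, then emit the next symbol
            alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
        return np.logaddexp.reduce(alpha)

    rng = np.random.default_rng(0)
    A = rng.dirichlet(np.ones(3), size=3)    # 3 hidden states (toy model)
    B = rng.dirichlet(np.ones(20), size=3)   # 20 amino-acid symbols
    pi = np.full(3, 1.0 / 3.0)
    seq = rng.integers(0, 20, size=50)
    print(forward_loglik(seq, np.log(pi), np.log(A), np.log(B)))
    ```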

  6. Multiphase-field model of small strain elasto-plasticity according to the mechanical jump conditions

    NASA Astrophysics Data System (ADS)

    Herrmann, Christoph; Schoof, Ephraim; Schneider, Daniel; Schwab, Felix; Reiter, Andreas; Selzer, Michael; Nestler, Britta

    2018-04-01

    We introduce a small strain elasto-plastic multiphase-field model based on the mechanical jump conditions. A rate-independent J_2 plasticity model with linear isotropic hardening and without kinematic hardening is applied as an example; in general, any physically nonlinear mechanical model is compatible with the presented procedure. In contrast to models with interpolated material parameters, the proposed model is able to apply different nonlinear mechanical constitutive equations for each phase separately. The Hadamard compatibility condition and the static force balance are employed as homogenization approaches to calculate the phase-inherent stresses and strains. Several verification cases are discussed. The applicability of the proposed model is demonstrated by simulations of the martensitic transformation with quantitative parameters.
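
    For reference, the phase-wise constitutive update for such a J_2 model with linear isotropic hardening is the classical radial-return mapping, sketched below for a single deviatoric trial stress state. The moduli and the trial state are toy values, and the multiphase coupling via the jump conditions is not shown.

    ```python
    # Radial return for rate-independent J2 plasticity with linear hardening.
    import numpy as np

    def radial_return(sigma_trial_dev, eps_p_bar, G, H, sigma_y0):
        """Return-mapped deviatoric stress and updated plastic strain measure."""
        norm = np.linalg.norm(sigma_trial_dev)
        f = norm - np.sqrt(2.0 / 3.0) * (sigma_y0 + H * eps_p_bar)  # yield check
        if f <= 0.0:
            return sigma_trial_dev, eps_p_bar         # elastic step
        dgamma = f / (2.0 * G + (2.0 / 3.0) * H)      # plastic multiplier
        n = sigma_trial_dev / norm                    # return direction
        sigma_dev = sigma_trial_dev - 2.0 * G * dgamma * n
        return sigma_dev, eps_p_bar + np.sqrt(2.0 / 3.0) * dgamma

    s, ep = radial_return(np.array([300.0, -150.0, -150.0]), 0.0,
                          G=80e3, H=5e3, sigma_y0=250.0)   # MPa-scale toy values
    print(s, ep)
    ```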

  7. The road plan model: Information model for planning road building activities

    NASA Technical Reports Server (NTRS)

    Azinhal, Rafaela K.; Moura-Pires, Fernando

    1994-01-01

    The general building contractor is presented with an information model as an approach for deriving a high-level work plan of construction activities applied to road building. Road construction activities are represented in a Road Plan Model (RPM), which is modeled in the ISO standard STEP/EXPRESS and adopts various concepts from the GARM notation. The integration with the preceding road design stage and the succeeding phase of resource scheduling is discussed within the framework of a Road Construction Model. Construction knowledge is applied to the road design and the terrain model of the surrounding road infrastructure for the instantiation of the RPM. Issues regarding the implementation of a road planner application supporting the RPM are discussed.

  8. Using Decision Trees for Estimating Mode Choice of Trips in Buca-Izmir

    NASA Astrophysics Data System (ADS)

    Oral, L. O.; Tecim, V.

    2013-05-01

    Decision makers develop transportation plans and models to provide sustainable transport systems in urban areas. Mode choice is one of the stages in transportation modelling. Data mining techniques can discover the factors affecting mode choice, and these techniques can be applied within a knowledge process approach. In this study a data mining process model is applied to determine the factors affecting mode choice with decision tree techniques, considering individual trip behaviours from household survey data collected within the Izmir Transportation Master Plan. From this perspective, the transport mode choice problem is solved for a case in the district of Buca, Izmir, Turkey, using the CRISP-DM knowledge process model.
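
    A minimal sketch of the mode-choice step follows: fit a decision tree on trip and household attributes, then read the factors off the split rules. The feature names and the synthetic labelling rule are hypothetical stand-ins for the Izmir household survey variables.

    ```python
    # Decision tree for mode choice on synthetic survey-like data.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # columns: car ownership, trip distance [km], age, income bracket
    X = np.column_stack([rng.integers(0, 2, 1000), rng.exponential(5.0, 1000),
                         rng.integers(18, 80, 1000), rng.integers(1, 5, 1000)])
    # synthetic target: 0 = car, 1 = walk, 2 = bus
    mode = np.where(X[:, 0] == 1, 0, np.where(X[:, 1] < 2.0, 1, 2))

    tree = DecisionTreeClassifier(max_depth=3).fit(X, mode)
    print(export_text(tree, feature_names=["car_owner", "distance_km",
                                           "age", "income"]))
    ```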

  9. Some Simple Formulas for Posterior Convergence Rates

    PubMed Central

    2014-01-01

    We derive some simple relations that demonstrate how the posterior convergence rate is related to two driving factors: a “penalized divergence” of the prior, which measures the ability of the prior distribution to propose a nonnegligible set of working models to approximate the true model and a “norm complexity” of the prior, which measures the complexity of the prior support, weighted by the prior probability masses. These formulas are explicit and involve no essential assumptions and are easy to apply. We apply this approach to the case with model averaging and derive some useful oracle inequalities that can optimize the performance adaptively without knowing the true model. PMID:27379278

  10. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed, including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for estimating the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface method and direct Monte Carlo simulation with Latin Hypercube sampling were applied to estimate the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities, and the sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach, and both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability, and its potential usefulness for quantifying model uncertainty as a sensitivity analysis in the PRA model.
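
    The reliability-physics idea reduces to one probability: the chance that operator performance time exceeds the phenomenological time available. A Monte Carlo sketch follows; the two distributions used here (lognormal performance, Weibull phenomenological) and their parameters are assumptions for illustration, not the study's fitted distributions.

    ```python
    # HEP as P(performance time > phenomenological time), by Monte Carlo.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    performance = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=n)  # minutes
    phenomenological = 30.0 * rng.weibull(a=2.5, size=n)               # minutes

    hep = np.mean(performance > phenomenological)   # failure: too slow to act
    print(f"estimated HEP = {hep:.4f}")
    ```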

  11. Copula-based nonlinear modeling of the law of one price for lumber products

    Treesearch

    Barry K. Goodwin; Matthew T. Holt; Gülcan Önel; Jeffrey P. Prestemon

    2018-01-01

    This paper proposes an alternative and potentially novel approach to analyzing the law of one price in a nonlinear fashion. Copula-based models that consider the joint distribution of prices separated by space are developed and applied to weekly...

  12. A Systems Approach to Research in Vocational Education.

    ERIC Educational Resources Information Center

    Miller, Larry E.

    1991-01-01

    A methodology to address "soft system" problems (those that are unstructured or fuzzy) has these steps: (1) mapping the problem; (2) constructing a root definition; (3) applying conceptual models; (4) comparing models to the real world; and (5) finding and implementing feasible solutions. (SK)

  13. Model selection and Bayesian inference for high-resolution seabed reflection inversion.

    PubMed

    Dettmer, Jan; Dosso, Stan E; Holland, Charles W

    2009-02-01

    This paper applies Bayesian inference, including model selection and posterior parameter inference, to inversion of seabed reflection data to resolve sediment structure at a spatial scale below the pulse length of the acoustic source. A practical approach to model selection is used, employing the Bayesian information criterion to decide on the number of sediment layers needed to sufficiently fit the data while satisfying parsimony to avoid overparametrization. Posterior parameter inference is carried out using an efficient Metropolis-Hastings algorithm for high-dimensional models, and results are presented as marginal-probability depth distributions for sound velocity, density, and attenuation. The approach is applied to plane-wave reflection-coefficient inversion of single-bounce data collected on the Malta Plateau, Mediterranean Sea, which indicate complex fine structure close to the water-sediment interface. This fine structure is resolved in the geoacoustic inversion results in terms of four layers within the upper meter of sediments. The inversion results are in good agreement with parameter estimates from a gravity core taken at the experiment site.
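
    The parsimony rule used for the layer count is compact enough to state in code: compute BIC = k*ln(n) - 2*ln(Lhat) for each candidate layering and keep the minimum. The log-likelihood values below are invented solely to show the trade-off between fit quality and parameter count.

    ```python
    # Choosing the number of sediment layers by minimum BIC (toy numbers).
    import numpy as np

    n_data = 400   # hypothetical number of reflection-coefficient data
    # (log-likelihood, free parameters) for 1..6 layers, invented values:
    fits = [(-920.0, 4), (-845.0, 8), (-801.0, 12), (-780.0, 16),
            (-778.0, 20), (-777.5, 24)]

    bic = [k * np.log(n_data) - 2.0 * logL for logL, k in fits]
    best = int(np.argmin(bic)) + 1
    print(bic, "-> choose", best, "layers")
    ```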

  14. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

    Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) led to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.

  15. Strong Local-Nonlocal Coupling for Integrated Fracture Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Littlewood, David John; Silling, Stewart A.; Mitchell, John A.

    Peridynamics, a nonlocal extension of continuum mechanics, is unique in its ability to capture pervasive material failure. Its use in the majority of system-level analyses carried out at Sandia, however, is severely limited, due in large part to computational expense and the challenge posed by the imposition of nonlocal boundary conditions. Combined analyses in which peridynamics is employed only in regions susceptible to material failure are therefore highly desirable, yet available coupling strategies have remained severely limited. This report is a summary of the Laboratory Directed Research and Development (LDRD) project "Strong Local-Nonlocal Coupling for Integrated Fracture Modeling," completed within the Computing and Information Sciences (CIS) Investment Area at Sandia National Laboratories. A number of challenges inherent to coupling local and nonlocal models are addressed. A primary result is the extension of peridynamics to facilitate a variable nonlocal length scale. This approach, termed the peridynamic partial stress, can greatly reduce the mathematical incompatibility between local and nonlocal equations through reduction of the peridynamic horizon in the vicinity of a model interface. A second result is the formulation of a blending-based coupling approach that may be applied either as the primary coupling strategy, or in combination with the peridynamic partial stress. This blending-based approach is distinct from general blending methods, such as the Arlequin approach, in that it is specific to the coupling of peridynamics and classical continuum mechanics. Facilitating the coupling of peridynamics and classical continuum mechanics has also required innovations aimed directly at peridynamic models. Specifically, the properties of peridynamic constitutive models near domain boundaries and shortcomings in available discretization strategies have been addressed. The results are a class of position-aware peridynamic constitutive laws for dramatically improved consistency at domain boundaries, and an enhancement to the meshfree discretization applied to peridynamic models that removes irregularities at the limit of the nonlocal length scale and dramatically improves convergence behavior. Finally, a novel approach for modeling ductile failure has been developed, motivated by the desire to apply coupled local-nonlocal models to a wide variety of materials, including ductile metals, which have received minimal attention in the peridynamic literature. Software implementation of the partial-stress coupling strategy, the position-aware peridynamic constitutive models, and the strategies for improving the convergence behavior of peridynamic models was completed within the Peridigm and Albany codes, developed at Sandia National Laboratories and made publicly available under the open-source 3-clause BSD license.

  16. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  17. Molecular modeling: An open invitation for applied mathematics

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2013-10-01

    Molecular modeling methods provide a very wide range of challenges for innovative mathematical and computational techniques, where often high dimensionality, large sets of data, and complicated interrelations imply a multitude of iterative approximations. The physical and chemical basis of these methodologies involves quantum mechanics with several non-intuitive aspects, where classical interpretation and classical analogies are often misleading or outright wrong. Hence, instead of the everyday, common sense approaches which work so well in engineering, in molecular modeling one often needs to rely on rather abstract mathematical constraints and conditions, again emphasizing the high level of reliance on applied mathematics. Yet, the interdisciplinary aspects of the field of molecular modeling also generates some inertia and perhaps too conservative reliance on tried and tested methodologies, that is at least partially caused by the less than up-to-date involvement in the newest developments in applied mathematics. It is expected that as more applied mathematicians take up the challenge of employing the latest advances of their field in molecular modeling, important breakthroughs may follow. In this presentation some of the current challenges of molecular modeling are discussed.

  18. On the Scattering of the Electron off the Hydrogen Atom and the Helium Ion Below and Above the Ionization Threshold: Temkin-Poet Model

    NASA Astrophysics Data System (ADS)

    Yarevsky, E.; Yakovlev, S. L.; Elander, N.; Volkov, M. V.

    2014-08-01

    We generalize here the splitting approach to the long-range (Coulomb) interaction for the three-body scattering problem. With this approach, the exterior complex rotation technique can be applied to systems with asymptotic Coulomb interaction. We illustrate the method with calculations of electron scattering on the hydrogen atom and the positive helium ion within the framework of the Temkin-Poet model.

  19. Algebraic approach to small-world network models

    NASA Astrophysics Data System (ADS)

    Rudolph-Lilith, Michelle; Muller, Lyle E.

    2014-01-01

    We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid nonasymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
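
    A quick numerical cross-check of such analytic results is straightforward with networkx (undirected here, whereas the paper treats the directed case): build a small-world graph and compare the measured clustering coefficient and path length against the closed-form values.

    ```python
    # Numerical small-world measures for comparison with analytic results.
    import networkx as nx

    # Connected variant so the path-length computation is well defined.
    G = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05, seed=0)
    print(nx.average_clustering(G))            # high clustering ...
    print(nx.average_shortest_path_length(G))  # ... with short paths
    ```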

  20. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice

    PubMed Central

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henricakson, Kristian C.; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers’ route choice behavior. PMID:26761209

  1. Assessment of the viscoelastic mechanical properties of polycarbonate urethane for medical devices.

    PubMed

    Beckmann, Agnes; Heider, Yousef; Stoffel, Marcus; Markert, Bernd

    2018-06-01

    The underlying research work introduces a study of the mechanical properties of polycarbonate urethane (PCU), used in the construction of various medical devices. This comprises the discussion of a suitable material model, the application of elemental experiments to identify the related parameters, and the numerical simulation of the applied experiments in order to calibrate and validate the mathematical model. In particular, the model of choice for the simulation of PCU response is the non-linear viscoelastic Bergström-Boyce material model, applied in the finite-element (FE) package Abaqus®. For the parameter identification, uniaxial tension and unconfined compression tests under in-laboratory physiological conditions were carried out. The geometry of the samples together with the applied loadings were simulated in Abaqus®, to ensure the suitability of the modelling approach. The obtained parameters show a very good agreement between the numerical and the experimental results. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. An improved advertising CTR prediction approach based on the fuzzy deep neural network

    PubMed Central

    Gao, Shu; Li, Mingjiang

    2018-01-01

    Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to input raw data from advertising datasets. Next, fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) with the unsupervised method layer by layer. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness in advertising click log datasets with noise. PMID:29727443

  3. An improved advertising CTR prediction approach based on the fuzzy deep neural network.

    PubMed

    Jiang, Zilong; Gao, Shu; Li, Mingjiang

    2018-01-01

    Combining a deep neural network with fuzzy theory, this paper proposes an advertising click-through rate (CTR) prediction approach based on a fuzzy deep neural network (FDNN). In this approach, fuzzy Gaussian-Bernoulli restricted Boltzmann machine (FGBRBM) is first applied to input raw data from advertising datasets. Next, fuzzy restricted Boltzmann machine (FRBM) is used to construct the fuzzy deep belief network (FDBN) with the unsupervised method layer by layer. Finally, fuzzy logistic regression (FLR) is utilized for modeling the CTR. The experimental results show that the proposed FDNN model outperforms several baseline models in terms of both data representation capability and robustness in advertising click log datasets with noise.

  4. Modeling of polymer networks for application to solid propellant formulating

    NASA Technical Reports Server (NTRS)

    Marsh, H. E.

    1979-01-01

    Methods for predicting the network structural characteristics formed by the curing of pourable elastomers were presented, as well as the logic applied in the development of the mathematical models. A universal approach to modeling was developed and verified by comparison with other methods in application to a complex system. Several applications of network models to practical problems are described.

  5. Scattering of Acoustic Waves from Ocean Boundaries

    DTIC Science & Technology

    2015-09-30

    [Fragmentary text] ...of buried mines and improve SONAR performance in shallow water. OBJECTIVES: 1) determination of the correct physical model of acoustic propagation... acoustic parameters in the ocean. APPROACH: 1) finite element modeling for range-dependent waveguides: finite element modeling is applied to a... roughness measurements for reverberation modeling. GLISTEN data provide insight into the role of biology in acoustic propagation and scattering.

  6. Spatial Double Generalized Beta Regression Models: Extensions and Application to Study Quality of Education in Colombia

    ERIC Educational Resources Information Center

    Cepeda-Cuervo, Edilberto; Núñez-Antón, Vicente

    2013-01-01

    In this article, a proposed Bayesian extension of the generalized beta spatial regression models is applied to the analysis of the quality of education in Colombia. We briefly revise the beta distribution and describe the joint modeling approach for the mean and dispersion parameters in the spatial regression models' setting. Finally, we motivate…

  7. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim: Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location: Eastern North America (as an example). Methods: Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results: For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions: We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698

  8. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. The study location was eastern North America (as an example). Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.

  9. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows switching, smoothly or abruptly, between different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation-Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing condition measurements acquired during switch operations.
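
    The EM mechanics can be illustrated compactly. The sketch below fits a two-regime, logistic-gated mixture of polynomial regressions to synthetic data; it is a simplified stand-in for the paper's model (whose M step uses a dedicated multi-class IRLS), with the gate's M step approximated here by a weighted scikit-learn logistic regression. All data and settings are invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 300)
    y = np.where(t < 0.5, 1.0 + 2.0 * t, 4.0 - 3.0 * t) + 0.1 * rng.standard_normal(t.size)

    K, deg = 2, 1
    coefs = [np.polyfit(t, y, deg) + rng.standard_normal(deg + 1) for _ in range(K)]
    sigma2 = np.ones(K)
    gate = LogisticRegression()
    T = t.reshape(-1, 1)
    pi = np.full((t.size, K), 0.5)                 # initial gate probabilities

    for _ in range(50):
        # E step: responsibility of each regime for each point
        dens = np.stack([
            pi[:, k] * np.exp(-(y - np.polyval(coefs[k], t)) ** 2 / (2 * sigma2[k]))
            / np.sqrt(2 * np.pi * sigma2[k])
            for k in range(K)
        ], axis=1)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: weighted polynomial fit and noise variance per regime
        for k in range(K):
            coefs[k] = np.polyfit(t, y, deg, w=np.sqrt(resp[:, k] + 1e-12))
            res = y - np.polyval(coefs[k], t)
            sigma2[k] = (resp[:, k] * res ** 2).sum() / resp[:, k].sum()
        # M step for the gate: logistic fit to soft labels via row duplication
        gate.fit(np.vstack([T, T]), np.r_[np.zeros(t.size), np.ones(t.size)],
                 sample_weight=np.r_[resp[:, 0], resp[:, 1]])
        pi = gate.predict_proba(T)                 # columns ordered by class label 0, 1

    print("regime coefficients:", [c.round(2) for c in coefs])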

  10. Integrated water flow model and modflow-farm process: A comparison of theory, approaches, and features of two integrated hydrologic models

    USGS Publications Warehouse

    Dogrul, Emin C.; Schmid, Wolfgang; Hanson, Randall T.; Kadir, Tariq; Chung, Francis

    2016-01-01

    Effective modeling of the conjunctive use of surface and subsurface water resources requires simulation of land-use-based root zone and surface flow processes as well as groundwater flows, streamflows, and their interactions. Recently, two computer models developed for this purpose, the Integrated Water Flow Model (IWFM) from the California Department of Water Resources and MODFLOW with the Farm Process (MF-FMP) from the US Geological Survey, have been applied to complex basins such as the Central Valley of California. As both IWFM and MF-FMP are publicly available for download and can be applied to other basins, there is a need to objectively compare the main approaches and features used in both models. This paper compares the concepts as well as the method and simulation features of each hydrologic model pertaining to groundwater, surface water, and landscape processes. The comparison focuses on the integrated simulation of water demand and supply, water use, and the flow between coupled hydrologic processes. The differences in the capabilities and features of these two models could affect the outcome and the types of water resource problems that can be simulated.

  11. Estimating tuberculosis incidence from primary survey data: a mathematical modeling approach.

    PubMed

    Pandey, S; Chadha, V K; Laxminarayan, R; Arinaminpathy, N

    2017-04-01

    There is an urgent need for improved estimations of the burden of tuberculosis (TB). Our objective was to develop a new quantitative method based on mathematical modelling and to demonstrate its application to TB in India. We developed a simple model of TB transmission dynamics to estimate the annual incidence of TB disease from the annual risk of tuberculous infection and the prevalence of smear-positive TB. We first compared model estimates of annual infections per smear-positive TB case with previous empirical estimates from China, Korea and the Philippines. We then applied the model to estimate TB incidence in India, stratified by urban and rural settings. The model estimates agree with the previous empirical estimates. Applied to India, the model suggests an annual incidence of smear-positive TB of 89.8 per 100 000 population (95% CI 56.8-156.3). Results show differences between urban and rural TB: while an urban TB case infects more individuals per year, a rural TB case remains infectious for appreciably longer, suggesting the need for interventions tailored to these different settings. Simple models of TB transmission, in conjunction with the necessary data, can offer approaches to burden estimation that complement those currently being used.
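
    A hedged sketch of the bookkeeping such a transmission model rests on (the notation below is ours, not necessarily the authors'): writing ARTI for the annual risk of tuberculous infection, P for the prevalence of smear-positive TB, d for the mean duration of infectiousness, and c for the annual number of infections generated per smear-positive case, a population of size N accrues ARTI x N new infections per year from P x N prevalent cases, so that

    \[
      c \;=\; \frac{\mathrm{ARTI}\cdot N}{P\cdot N} \;=\; \frac{\mathrm{ARTI}}{P},
      \qquad
      I \;=\; \frac{P}{d},
    \]

    and the incidence I follows once c or d is pinned down empirically, for example from the China, Korea and Philippines comparisons mentioned above.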

  12. Multiscale Cloud System Modeling

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Moncrieff, Mitchell W.

    2009-01-01

    The central theme of this paper is to describe how cloud system resolving models (CRMs) of grid spacing approximately 1 km have been applied to various important problems in atmospheric science across a wide range of spatial and temporal scales, and how these applications relate to other modeling approaches. A long-standing problem concerns the representation of organized precipitating convective cloud systems in weather and climate models. Since CRMs resolve the mesoscale to large scales of motion (i.e., 10 km to global), they explicitly address the cloud system problem. By explicitly representing organized convection, CRMs bypass restrictive assumptions associated with convective parameterization, such as the scale gap between cumulus and large-scale motion. Dynamical models provide insight into the physical mechanisms involved in scale interaction and convective organization. Multiscale CRMs simulate convective cloud systems in computational domains up to global scale and have been applied in place of contemporary convective parameterizations in global models. Multiscale CRMs pose a new challenge for model validation, which is met by an integrated approach involving CRMs, operational prediction systems, observational measurements, and dynamical models in a new international project: the Year of Tropical Convection, with an emphasis on organized tropical convection and its global effects.

  13. Ionospheric effects in uncalibrated phase delay estimation and ambiguity-fixed PPP based on raw observable model

    NASA Astrophysics Data System (ADS)

    Gu, Shengfeng; Shi, Chuang; Lou, Yidong; Liu, Jingnan

    2015-05-01

    Zero-difference (ZD) ambiguity resolution (AR) reveals the potential to further improve the performance of precise point positioning (PPP). Traditionally, PPP AR is achieved through Melbourne-Wübbena and ionosphere-free combinations, in which ionospheric effects are removed. To exploit the characteristics of the ionosphere, PPP AR with L1 and L2 raw observables has also been developed recently. In this study, we apply this new approach to uncalibrated phase delay (UPD) generation and ZD AR and compare it with the traditional model. The raw observable processing strategy treats each ionospheric delay as an unknown parameter. In this manner, both an a priori ionosphere correction model and its spatio-temporal correlation can be employed as constraints to improve the ambiguity resolution. However, theoretical analysis indicates that for the wide-lane (WL) UPD retrieved from L1/L2 ambiguities to benefit from this raw observable approach, high-precision ionosphere corrections of better than 0.7 total electron content units (TECU) are essential. This conclusion is then confirmed with over one year of data collected at about 360 stations. First, both global and regional ionosphere models were generated and evaluated; the results demonstrated that, for large-scale ionosphere modeling, only an accuracy of 3.9 TECU can be achieved on average for the vertical delays, and this accuracy improves to about 0.64 TECU when a dense network is involved. Based on these ionosphere products, WL/narrow-lane (NL) UPDs are then extracted with the raw observable model. The NL ambiguity shows better stability and consistency compared to the traditional approach. Nonetheless, the WL ambiguity can hardly be improved, even when constrained with high spatio-temporal resolution ionospheric corrections. Applying both approaches in PPP-RTK, it is interesting to find that the traditional model is more efficient in AR, as evidenced by the shorter time to first fix, while the three-dimensional positioning accuracy of the RAW model outperforms the combination model by about . This reveals that, with the current ionosphere models, there is no single optimal strategy for dual-frequency ZD ambiguity resolution; the combination approach and the raw approach each have merits and demerits.

  14. Ethical analysis in HTA of complex health interventions.

    PubMed

    Lysdahl, Kristin Bakke; Oortwijn, Wija; van der Wilt, Gert Jan; Refolo, Pietro; Sacchini, Dario; Mozygemba, Kati; Gerhardus, Ansgar; Brereton, Louise; Hofmann, Bjørn

    2016-03-22

    In the field of health technology assessment (HTA), several approaches can be used for ethical analysis. However, there is a scarcity of literature that critically evaluates and compares the strengths and weaknesses of these approaches when they are applied in practice. In this paper, we analyse the applicability of selected approaches for addressing ethical issues in HTA in the field of complex health interventions. Complex health interventions have been the focus of methodological attention in HTA. However, the potential methodological challenges for ethical analysis are as yet unknown. Six of the most frequently described and applied ethical approaches in HTA were critically assessed against a set of five characteristics of complex health interventions: multiple and changing perspectives, indeterminate phenomena, uncertain causality, unpredictable outcomes, and ethical complexity. The assessments are based on the literature and the authors' experiences of developing, applying and assessing the approaches. The interactive, participatory HTA approach is, by its nature and flexibility, applicable across most complexity characteristics. Wide Reflective Equilibrium is also flexible, and its openness to different perspectives makes it better suited to complex health interventions than more rigid conventional approaches, such as Principlism and Casuistry. Approaches developed for HTA purposes, such as the HTA Core Model® and the Socratic approach, are fairly applicable to complex health interventions, which one could expect because they include various ethical perspectives. This study shows how the selected ethical approaches differ in their applicability for addressing ethical issues in HTA of complex health interventions. Knowledge of these differences may be helpful when choosing and applying an approach for ethical analysis in HTA. We believe that the study contributes to increasing awareness of and interest in the ethical aspects of complex health interventions in general.

  15. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    Like every software artifact, software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches that also detects composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is that the specifications available for executing composite operations are reused for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366
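
    A minimal sketch of the post-processing idea: given a flat set of detected atomic operations, scan for occurrences that a composite operation's specification explains, and remove them from the difference report. The operation kinds and the "rename class" composite below are hypothetical illustrations, not the paper's actual specification language.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Atomic:
        kind: str      # e.g. "set-attribute", "add", "delete"
        element: str   # model element the operation touched
        feature: str   # affected feature, e.g. "name"

    def detect_rename_class(atomics):
        """A composite 'rename class' occurs when a class's 'name' attribute
        was set; consume the matching atomic so the diff report stays clean."""
        composites, remaining = [], []
        for op in atomics:
            if op.kind == "set-attribute" and op.feature == "name":
                composites.append(("rename-class", op.element))
            else:
                remaining.append(op)
        return composites, remaining

    log = [Atomic("set-attribute", "Customer", "name"),
           Atomic("add", "Order", "ownedAttribute")]
    found, rest = detect_rename_class(log)
    print(found)   # [('rename-class', 'Customer')]
    print(rest)    # atomic operations not explained by any composite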

  16. On the Formulation of Anisotropic-Polyaxial Failure Criteria: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Parisio, Francesco; Laloui, Lyesse

    2018-02-01

    The correct representation of the failure of geomaterials that feature strength anisotropy and polyaxiality is crucial for many applications. In this contribution, we propose and evaluate through a comparative study a generalized framework that covers both features. Polyaxiality of strength is modeled with a modified Van Eekelen approach, while the anisotropy is modeled using a fabric tensor approach of the Pietruszczak and Mroz type. Both approaches share the same philosophy as they can be applied to simpler failure surfaces, allowing great flexibility in model formulation. The new failure surface is tested against experimental data and its performance compared against classical failure criteria commonly used in geomechanics. Our study finds that the global error between predictions and data is generally smaller for the proposed framework compared to other classical approaches.

  17. Design of a Magnetostrictive-Hydraulic Actuator Considering Nonlinear System Dynamics and Fluid-Structure Coupling

    NASA Astrophysics Data System (ADS)

    Larson, John Philip

    Smart material electro-hydraulic actuators (EHAs) utilize fluid rectification via one-way check valves to amplify the small, high-frequency vibrations of certain smart materials into large motions of a hydraulic cylinder. Although the concept has been demonstrated previously, the operating frequency of smart material EHA systems has been limited to a small fraction of the available bandwidth of the driver materials. The focus of this work is to characterize and model the mechanical performance of a magnetostrictive EHA considering key system components: rectification valves, the smart material driver, and fluid-system components, leading to an improved actuator design relative to prior work. The one-way valves were modeled using 3-D finite element analysis, and their behavior was characterized experimentally by static and dynamic measurements. Taking into account the fluid and mechanical conditions applied to the valves within the pump, the dynamic response of the valve was quantified and applied to determine the rectification bandwidth of different valve configurations. A novel miniature reed valve, designed for a frequency response above 10 kHz, was fabricated and tested within a magnetostrictive EHA. The nonlinear response of the magnetostrictive driver, including saturation and hysteresis effects, was modeled using the Jiles-Atherton approach to calculate the magnetization and the resulting magnetostriction from the applied field calculated within the rod from Maxwell's equations. The dynamic pressure response of the fluid system components (pumping chamber, hydraulic cylinder, and connecting passages) was measured over a range of input frequencies. For the magnetostrictive EHA tested, the peak performance frequency was found to be limited by the fluid resonances within the system. A lumped-parameter modeling approach was applied to model the overall behavior of a magnetostrictive EHA, incorporating models for the reed valve response, the nonlinear magnetostrictive behavior, and the fluid behavior (including inertia and compliance). This model was validated by an experimental study of a magnetostrictive EHA with a reduced-volume manifold. The model was subsequently applied to design a compact magnetostrictive EHA for aircraft applications. Testing of the system shows that the output performance increases with frequency up to a peak unloaded flow rate of 100 cm3/s (6.4 cu in/s) at 1200 Hz, which is a 100% to 500% increase over previous state-of-the-art systems. A blocked differential pressure of 12.1 MPa (1750 psi) was measured, resulting in a power capacity of 310 W, more than 100 W higher than previously reported values. The design and modeling approach used to scale up the performance to create a compact aircraft EHA can also be applied to reduce the size and weight of smart material EHAs for lower-power applications.
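
    For reference, the Jiles-Atherton constitutive equations invoked above are conventionally written as follows (standard textbook notation; the dissertation's exact formulation and parameter identification may differ):

    \[
      H_e = H + \alpha M, \qquad
      M_{an}(H_e) = M_s\left( \coth\frac{H_e}{a} - \frac{a}{H_e} \right),
    \]
    \[
      \frac{dM_{irr}}{dH} = \frac{M_{an} - M_{irr}}{k\delta - \alpha\,(M_{an} - M_{irr})}, \qquad
      M = c\,M_{an} + (1 - c)\,M_{irr},
    \]

    where M_s is the saturation magnetization, a shapes the anhysteretic curve, k is the pinning (loss) coefficient, α the inter-domain coupling, c the reversibility coefficient, and δ = ±1 follows the sign of dH/dt; the magnetostriction is then commonly approximated as quadratic in M.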

  18. Microarray-based cancer prediction using soft computing approach.

    PubMed

    Wang, Xiaosheng; Gotoh, Osamu

    2009-05-26

    One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes with which to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs, on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable, as they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular cancer prediction, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.

  19. Tennis: Applied Examples of a Game-Based Teaching Approach

    ERIC Educational Resources Information Center

    Crespo, Miguel; Reid, Machar M.; Miley, Dave

    2004-01-01

    In this article, the authors reveal that tennis has been increasingly taught with a tactical model or game-based approach, which emphasizes learning through practice in match-like drills and actual play, rather than in practicing strokes for exact technical execution. Its goal is to facilitate the player's understanding of the tactical, physical…

  20. Managing a Modern University: Is It Time for a Rethink?

    ERIC Educational Resources Information Center

    Kenny, John Daniel

    2009-01-01

    The corporate approaches introduced in the late 1980s and now prevalent in universities in Australia have led to irrevocable changes in the way universities are managed and academics work. The management approaches widely applied in Australian universities are largely based on a top-down corporate management model, with central control over policy…

  1. Student Learning and Engagement in the Context of Curriculum Integration

    ERIC Educational Resources Information Center

    Brinegar, Kathleen; Bishop, Penny A.

    2011-01-01

    Although curriculum integration has a long history of myriad models, rarely have those stakeholders most connected to the practice--the students--been consulted about the efficacy of the approach. This study applied a longitudinal, intrinsic case study approach (Stake, 2000) to examine middle school students' perceptions of learning and engagement…

  2. An Approach Based on Social Network Analysis Applied to a Collaborative Learning Experience

    ERIC Educational Resources Information Center

    Claros, Iván; Cobos, Ruth; Collazos, César A.

    2016-01-01

    The Social Network Analysis (SNA) techniques allow modelling and analysing the interaction among individuals based on their attributes and relationships. This approach has been used by several researchers in order to measure the social processes in collaborative learning experiences. But oftentimes such measures were calculated at the final state…

  3. An Applied Local Sustainable Energy Model: The Case of Austin, Texas

    ERIC Educational Resources Information Center

    Hughes, Kristen

    2009-01-01

    Climate change is only one factor driving growing numbers of cities throughout the globe to reconsider conventional approaches to electricity generation and use. In the U.S., this momentum is incorporating a shift away from centralized, supply-side approaches reliant on fossil fuels and nuclear power, toward more distributed, flexible, and cleaner…

  4. A Dual Approach to Fostering Under-Prepared Student Success: Focusing on Doing and Becoming

    ERIC Educational Resources Information Center

    Shaffer, Suzanne C.; Eshbach, Barbara E.; Santiago-Blay, Jorge A.

    2015-01-01

    A paired course model for under-prepared college students incorporates a dual instructional approach, academic skill building and lifelong learning development, to help students do more academically and become stronger lifelong learners. In a reading support course, students improved their reading skills and applied them directly to the paired…

  5. PBL-SEE: An Authentic Assessment Model for PBL-Based Software Engineering Education

    ERIC Educational Resources Information Center

    dos Santos, Simone C.

    2017-01-01

    The problem-based learning (PBL) approach has been successfully applied to teaching software engineering thanks to its principles of group work, learning by solving real problems, and learning environments that match the market realities. However, the lack of well-defined methodologies and processes for implementing the PBL approach represents a…

  6. Using the Integration of Discrete Event and Agent-Based Simulation to Enhance Outpatient Service Quality in an Orthopedic Department.

    PubMed

    Kittipittayakorn, Cholada; Ying, Kuo-Ching

    2016-01-01

    Many hospitals are currently paying more attention to patient satisfaction, since it is an important service quality index. Many Asian countries' healthcare systems have mixed-type registration, accepting both walk-in and scheduled patients. This complex registration system causes long patient waiting times in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to reduce patient waiting time and is the first attempt to apply this approach to this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows far more detail to be considered, and provides more reliable results. After applying the proposed approach, the total waiting time in the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department.
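
    A minimal discrete-event sketch of the mixed registration queue described above, using the SimPy library. The agent-based layer and the real clinic parameters are omitted; arrival rates, service times, and staffing are invented for illustration.

    import random
    import simpy

    WAITS = []

    def patient(env, doctor):
        arrived = env.now
        with doctor.request() as req:
            yield req
            WAITS.append(env.now - arrived)                 # record waiting time
            yield env.timeout(random.expovariate(1 / 8.0))  # ~8 min consultation

    def walk_in_source(env, doctor):
        while True:
            yield env.timeout(random.expovariate(1 / 6.0))  # walk-in every ~6 min
            env.process(patient(env, doctor))

    def scheduled_source(env, doctor):
        for _ in range(40):                                 # scheduled slots every 10 min
            env.process(patient(env, doctor))
            yield env.timeout(10)

    random.seed(0)
    env = simpy.Environment()
    doctor = simpy.Resource(env, capacity=2)                # two physicians on duty
    env.process(walk_in_source(env, doctor))
    env.process(scheduled_source(env, doctor))
    env.run(until=8 * 60)                                   # one 8-hour clinic day
    print(f"mean wait: {sum(WAITS) / len(WAITS):.1f} min over {len(WAITS)} patients")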

  7. Using the Integration of Discrete Event and Agent-Based Simulation to Enhance Outpatient Service Quality in an Orthopedic Department

    PubMed Central

    Kittipittayakorn, Cholada

    2016-01-01

    Many hospitals are currently paying more attention to patient satisfaction, since it is an important service quality index. Many Asian countries' healthcare systems have mixed-type registration, accepting both walk-in and scheduled patients. This complex registration system causes long patient waiting times in outpatient clinics. Different approaches have been proposed to reduce the waiting time. This study uses the integration of discrete event simulation (DES) and agent-based simulation (ABS) to reduce patient waiting time and is the first attempt to apply this approach to this key problem faced by orthopedic departments. From the data collected, patient behaviors are modeled and incorporated into a massive agent-based simulation. The proposed approach is an aid for analyzing and modifying orthopedic department processes, allows far more detail to be considered, and provides more reliable results. After applying the proposed approach, the total waiting time in the orthopedic department fell from 1246.39 minutes to 847.21 minutes. Thus, using the correct simulation model significantly reduces patient waiting time in an orthopedic department. PMID:27195606

  8. Computationally efficient approach for solving time dependent diffusion equation with discrete temporal convolution applied to granular particles of battery electrodes

    NASA Astrophysics Data System (ADS)

    Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž

    2015-03-01

    The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution (DTC) method, is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in the granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy of results. It is also demonstrated that all three of these methods feature higher accuracy than the quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.
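
    A sketch of the discrete-temporal-convolution idea: approximate the response to an arbitrary time-dependent surface flux as a superposition of analytic step responses (a Duhamel-type construction). The exponential kernel below is a hypothetical stand-in for the paper's analytical solution of the step-function boundary value problem.

    import numpy as np

    def step_response(t, tau=50.0):
        """Hypothetical analytic response to a unit step in flux at t = 0
        (zero for t < 0); a placeholder for the paper's exact solution."""
        return np.where(t >= 0, 1.0 - np.exp(-t / tau), 0.0)

    def dtc_response(t_grid, flux):
        """Discrete temporal convolution: superpose step responses weighted
        by the increments of the flux history."""
        dflux = np.diff(flux, prepend=0.0)      # flux increments at each step
        out = np.zeros_like(t_grid)
        for k, df in enumerate(dflux):
            if df != 0.0:
                out += df * step_response(t_grid - t_grid[k])
        return out

    t = np.arange(0.0, 600.0, 1.0)
    flux = np.where(t < 200, 1.0, np.where(t < 400, -0.5, 0.8))  # arbitrary drive
    c_surface = dtc_response(t, flux)
    print(c_surface[::100].round(3))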

  9. Applying Digital Sensor Technology: A Problem-Solving Approach

    ERIC Educational Resources Information Center

    Seedhouse, Paul; Knight, Dawn

    2016-01-01

    There is currently an explosion in the number and range of new devices coming onto the technology market that use digital sensor technology to track aspects of human behaviour. In this article, we present and exemplify a three-stage model for the application of digital sensor technology in applied linguistics that we have developed, namely,…

  10. Analysis of tribological behaviour of zirconia reinforced Al-SiC hybrid composites using statistical and artificial neural network technique

    NASA Astrophysics Data System (ADS)

    Arif, Sajjad; Tanwir Alam, Md; Ansari, Akhter H.; Bilal Naim Shaikh, Mohd; Arif Siddiqui, M.

    2018-05-01

    The tribological performance of aluminium hybrid composites reinforced with micro SiC (5 wt%) and nano zirconia (0, 3, 6 and 9 wt%), fabricated through a powder metallurgy technique, was investigated using statistical and artificial neural network (ANN) approaches. The influence of zirconia reinforcement, sliding distance and applied load was analyzed with tests based on a full factorial design of experiments. Analysis of variance (ANOVA) was used to evaluate the percentage contribution of each process parameter to wear loss. The ANOVA results suggested that wear loss is mainly influenced by sliding distance, followed by zirconia reinforcement and applied load. Further, a feed-forward back-propagation neural network was applied to the input/output data for predicting and analyzing the wear behaviour of the fabricated composite. A very close correlation between experimental and ANN outputs was achieved by the model. Finally, the ANN model was effectively used to find the influence of the various control factors on the wear behaviour of the hybrid composites.
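
    A small sketch of the ANN step: a feed-forward network trained on (zirconia wt%, sliding distance, load) to predict wear loss. The grid mirrors the shape of a full factorial design, but the response values below are synthetic placeholders, not the study's measurements.

    import numpy as np
    from itertools import product
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    zirconia = [0, 3, 6, 9]          # wt% nano-zirconia
    distance = [500, 1000, 1500]     # sliding distance, m (invented levels)
    load = [10, 20, 30]              # applied load, N (invented levels)

    X = np.array(list(product(zirconia, distance, load)), dtype=float)
    rng = np.random.default_rng(0)
    # invented response: wear grows with distance and load, drops with zirconia
    y = 0.002 * X[:, 1] + 0.05 * X[:, 2] - 0.08 * X[:, 0] + rng.normal(0, 0.1, len(X))

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                       random_state=0))
    model.fit(X, y)
    print("R^2 on training grid:", round(model.score(X, y), 3))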

  11. An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS

    PubMed Central

    Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu

    2015-01-01

    With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. Commonly, GPS and INS sensors are applied to measure vehicle stability parameters by fusing the data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although its prior model parameters must be known. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, simulations and a real experiment are performed to verify the advantages of this approach. The experimental results show the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller. PMID:26690154
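
    A one-dimensional illustration of the fusion mechanics: a linear Kalman filter that integrates an INS-style rate measurement in the predict step and corrects with GPS-style fixes in the update step. The paper's filter is a two-stage design over a four-wheel vehicle model; this sketch only shows the basic fusion, with invented noise levels.

    import numpy as np

    rng = np.random.default_rng(0)
    dt, n = 0.1, 200
    true_rate = 0.5 * np.sin(0.05 * np.arange(n))          # e.g. yaw rate, rad/s
    true_angle = np.cumsum(true_rate) * dt
    ins_rate = true_rate + rng.normal(0, 0.05, n)          # gyro: noisy rate
    gps_angle = true_angle + rng.normal(0, 0.1, n)         # GPS-derived: noisy angle

    x, P = 0.0, 1.0              # state: angle estimate and its variance
    Q, R = (0.05 * dt) ** 2, 0.1 ** 2
    est = np.empty(n)
    for k in range(n):
        # predict: propagate the angle with the INS rate measurement
        x = x + ins_rate[k] * dt
        P = P + Q
        # update: correct with the GPS measurement
        K = P / (P + R)
        x = x + K * (gps_angle[k] - x)
        P = (1 - K) * P
        est[k] = x

    print("RMSE fused:", np.sqrt(np.mean((est - true_angle) ** 2)).round(4))
    print("RMSE GPS  :", np.sqrt(np.mean((gps_angle - true_angle) ** 2)).round(4))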

  12. Are your covariates under control? How normalization can re-introduce covariate effects.

    PubMed

    Pain, Oliver; Dudbridge, Frank; Ronald, Angelica

    2018-04-30

    Many statistical tests rely on the assumption that the residuals of a model are normally distributed. Rank-based inverse normal transformation (INT) of the dependent variable is one of the most popular approaches to satisfying the normality assumption. When covariates are included in the analysis, a common approach is to first adjust for the covariates and then normalize the residuals. This study investigated the effect of regressing covariates against the dependent variable and then applying rank-based INT to the residuals. The correlation between the dependent variable and covariates at each stage of processing was assessed. An alternative approach was tested in which rank-based INT was applied to the dependent variable before regressing covariates. Analyses based on both simulated and real data examples demonstrated that applying rank-based INT to the dependent variable residuals after regressing out covariates re-introduces a linear correlation between the dependent variable and covariates, increasing type-I errors and reducing power. On the other hand, when rank-based INT was applied prior to controlling for covariate effects, residuals were normally distributed and linearly uncorrelated with covariates. This latter approach is therefore recommended in situations where normality of the dependent variable is required.
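
    A sketch of the two processing orders compared in the study, using a Blom-type rank-based inverse normal transformation (the offset constants are a common convention, not necessarily the authors' exact choice); the skewed synthetic data make the re-introduced correlation visible.

    import numpy as np
    from scipy.stats import norm, rankdata, pearsonr

    def rank_int(x, c=3.0 / 8.0):
        """Rank-based INT with Blom offsets."""
        r = rankdata(x)
        return norm.ppf((r - c) / (len(x) - 2 * c + 1))

    rng = np.random.default_rng(0)
    cov = rng.normal(size=5000)                      # covariate
    y = np.exp(0.5 * cov + rng.normal(size=5000))    # skewed dependent variable

    # Order A (problematic): regress out the covariate, then INT the residuals
    resid = y - np.polyval(np.polyfit(cov, y, 1), cov)
    int_after = rank_int(resid)

    # Order B (recommended): INT the dependent variable, then adjust for covariates
    int_first = rank_int(y)
    resid_b = int_first - np.polyval(np.polyfit(cov, int_first, 1), cov)

    print("corr with covariate, INT after residualizing:", round(pearsonr(int_after, cov)[0], 3))
    print("corr with covariate, INT first              :", round(pearsonr(resid_b, cov)[0], 3))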

  13. The 4D Nucleome Project

    PubMed Central

    Dekker, Job; Belmont, Andrew S.; Guttman, Mitchell; Leshyk, Victor O.; Lis, John T.; Lomvardas, Stavros; Mirny, Leonid A.; O’Shea, Clodagh C.; Park, Peter J.; Ren, Bing; Ritland Politz, Joan C.; Shendure, Jay; Zhong, Sheng

    2017-01-01

    The 4D Nucleome Network aims to develop and apply approaches to map the structure and dynamics of the human and mouse genomes in space and time with the goal of gaining deeper mechanistic understanding of how the nucleus is organized and functions. The project will develop and benchmark experimental and computational approaches for measuring genome conformation and nuclear organization, and investigate how these contribute to gene regulation and other genome functions. Validated experimental approaches will be combined with biophysical modeling to generate quantitative models of spatial genome organization in different biological states, both in cell populations and in single cells. PMID:28905911

  14. On Complexities of Impact Simulation of Fiber Reinforced Polymer Composites: A Simplified Modeling Framework

    PubMed Central

    Alemi-Ardakani, M.; Milani, A. S.; Yannacopoulos, S.

    2014-01-01

    Impact modeling of fiber reinforced polymer composites is a complex and challenging task, in particular for practitioners with less experience in advanced coding and user-defined subroutines. Different numerical algorithms have been developed over the past decades for impact modeling of composites, yet a considerable gap often exists between predictions and experimental observations. In this paper, after a review of reported sources of complexity in impact modeling of fiber reinforced polymer composites, two simplified approaches are presented for fast simulation of the out-of-plane impact response of these materials, considering four main effects: (a) strain-rate dependency of the mechanical properties, (b) the difference between tensile and flexural bending responses, (c) delamination, and (d) the geometry of the fixture (clamping conditions). In the first approach, it is shown that by applying correction factors to the quasistatic material properties, which are often readily available from material datasheets, the role of these four sources in modeling the impact response of a given composite may be accounted for. As a result, a rough estimate of the dynamic force response of the composite can be attained. To demonstrate the approach, a twill-woven polypropylene/glass reinforced thermoplastic composite laminate was tested under 200 J of impact energy and modeled in Abaqus/Explicit via the built-in Hashin damage criteria. X-ray microtomography was used to investigate the presence of delamination inside the impacted sample. Finally, as a second and much simpler modeling approach, it is shown that applying only a single correction factor to all material properties at once can still yield a reasonable prediction. Both the advantages and the limitations of the simplified modeling framework are addressed in the case study performed. PMID:25431787

  15. Fine-Scale Mapping by Spatial Risk Distribution Modeling for Regional Malaria Endemicity and Its Implications under the Low-to-Moderate Transmission Setting in Western Cambodia

    PubMed Central

    Okami, Suguru; Kohtake, Naohiko

    2016-01-01

    The disease burden of malaria has decreased as malaria elimination efforts progress. The mapping approach that uses spatial risk distribution modeling needs some adjustment and reinvestigation in accordance with situational changes. Here we applied a mathematical modeling approach for the standardized morbidity ratio (SMR), calculated from annual parasite incidence, using routinely aggregated surveillance reports, environmental data such as remote sensing data, and non-environmental anthropogenic data to create fine-scale spatial risk distribution maps of western Cambodia. Furthermore, we incorporated a combination of containment status indicators into the model to demonstrate spatial heterogeneities in the relationship between containment status and risk. The explanatory model was fitted to estimate the SMR of each area (adjusted Pearson correlation coefficient R2 = 0.774; Akaike information criterion AIC = 149.423). A Bayesian modeling framework was applied to estimate the uncertainty of the model and cross-scale predictions. Fine-scale maps were created by spatial interpolation of the estimated SMRs at each village. Compared with geocoded case data, the corresponding predicted values showed conformity [Spearman's rank correlation r = 0.662 for inverse distance weighted interpolation and 0.645 for ordinary kriging (95% confidence intervals of 0.414-0.827 and 0.368-0.813, respectively); Welch's t-test: not significant]. The proposed approach successfully explained regional malaria risks, and fine-scale risk maps were created under low-to-moderate malaria transmission settings where reinvestigation of existing risk modeling approaches was needed. Moreover, the different representations of the simulated outcomes of containment status indicators for the respective areas provided useful insights for tailored intervention planning that considers regional malaria endemicity. PMID:27415623
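
    A minimal sketch of the interpolation step: inverse-distance-weighted (IDW) smoothing of village-level SMRs (SMR = observed / expected cases) onto a fine grid. Coordinates and SMR values below are invented, and the study's ordinary-kriging alternative is not shown.

    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
        """Inverse-distance-weighted interpolation of scattered values."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)
        return (w * z_known).sum(axis=1) / w.sum(axis=1)

    villages = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    smr = np.array([0.8, 1.4, 1.1, 0.6])              # observed / expected
    gx, gy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    print(idw(villages, smr, grid).reshape(5, 5).round(2))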

  16. A combined triggering-propagation modeling approach for the assessment of rainfall induced debris flow susceptibility

    NASA Astrophysics Data System (ADS)

    Stancanelli, Laura Maria; Peres, David Johnny; Cancelliere, Antonino; Foti, Enrico

    2017-07-01

    Rainfall-induced shallow slides can evolve into debris flows that move rapidly downstream with devastating consequences. Mapping the susceptibility to debris flow is an important aid for risk mitigation. We propose a novel practical approach to derive debris flow inundation maps useful for susceptibility assessment, based on the integrated use of DEM-based, spatially distributed hydrological and slope stability models with debris flow propagation models. More specifically, the TRIGRS infiltration and infinite-slope stability model is combined with the FLO-2D model for the simulation of the related debris flow propagation and deposition. An empirical instability-to-debris-flow triggering threshold, calibrated on the basis of observed events, is applied to link the two models and to determine the amount of unstable mass that develops into a debris flow. Calibration of the proposed methodology is carried out using real data from the debris flow event that occurred on 1 October 2009 in the Peloritani mountains area (Italy). Model performance, assessed by receiver-operating-characteristic (ROC) indexes, evidences a fairly good reproduction of the observed event. A comparison with the performance of the traditional debris flow modeling procedure, in which sediment and water hydrographs are input as lumped quantities at selected points on top of the streams, is also performed in order to quantitatively assess the limitations of such a commonly applied approach. Results show that the proposed method, besides being more process-consistent than the traditional hydrograph-based approach, can potentially provide a more accurate simulation of debris flow phenomena, in terms of spatial patterns of erosion and deposition as well as of the quantification of mobilized volumes and depths, avoiding overestimation of the debris flow triggering volume and, thus, of maximum inundation flow depths.

  17. In search of best fitted composite model to the ALAE data set with transformed Gamma and inversed transformed Gamma families

    NASA Astrophysics Data System (ADS)

    Maghsoudi, Mastoureh; Bakar, Shaiful Anuar Abu

    2017-05-01

    In this paper, a recent novel approach is applied to estimate the threshold parameter of a composite model. Several composite models from the Transformed Gamma and Inverse Transformed Gamma families are constructed based on this approach, and their parameters are estimated by the maximum likelihood method. These composite models are fitted to allocated loss adjustment expenses (ALAE) data. Among all the composite models studied, the composite Weibull-Inverse Transformed Gamma model proves to be the strongest candidate, as it fits the loss data best. The final part applies the backtesting method to validate the VaR and CTE risk measures.

  18. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Unlike traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential merit for GSH fermentation process modeling.

  19. Modeling material interfaces with hybrid adhesion method

    DOE PAGES

    Brown, Nicholas Taylor; Qu, Jianmin; Martinez, Enrique

    2017-01-27

    A molecular dynamics simulation approach is presented to approximate layered material structures using discrete interatomic potentials through classical mechanics and the underlying principles of quantum mechanics. This method isolates the energetic contributions of the system into two pure material layers and an interfacial region used to simulate the adhesive properties of the diffused interface. The strength relationship of the adhesion contribution is calculated through small-scale separation calculations and applied to the molecular surfaces through an inter-layer bond criterion. By segregating the contributions into three regions and accounting for the interfacial excess energies through the adhesive surface bonds, it is possible to model each material with an independent potential while maintaining an acceptable level of accuracy in the calculation of mechanical properties. This method is intended for the atomistic study of delamination mechanics, typically observed in thin-film applications. Therefore, the work presented in this paper focuses on mechanical tensile behaviors, with observations of the elastic modulus and the delamination failure mode. To introduce the hybrid adhesion method, we apply the approach to an ideal bulk copper sample, where an interface is created by dissociating the force potential in the middle of the structure. Various mechanical behaviors are compared to a standard EAM control model to demonstrate the adequacy of this approach in a simple setting. In addition, we demonstrate the robustness of this approach by applying it to (1) a Cu-Cu2O interface with interactions between two atom types, and (2) an Al-Cu interface with two dissimilar FCC lattices. These additional examples are verified against EAM and COMB control models to demonstrate the accurate simulation of failure through delamination, and the formation and propagation of dislocations under loads. Finally, the results show that by modeling the energy contributions of an interface using hybrid adhesion bonds, we can provide an accurate approximation method for studies of large-scale mechanical properties, as well as the representation of various delamination phenomena at the atomic scale.

  20. Effect of bandage thickness on interface pressure applied by compression bandages.

    PubMed

    Al Khaburi, Jawad; Dehghani-Sanij, Abbas A; Nelson, E Andrea; Hutchinson, Jerry

    2012-04-01

    Medical compression bandages are widely used in the treatment of chronic venous disorders. In order to design effective compression bandages, researchers have attempted to describe the interface pressure applied by these bandages using mathematical models. This paper reports on work carried out to derive mathematical models describing the interface pressure applied by a single-layer bandage using two different approaches. The first assumes that the bandage thickness is negligible, whereas the second includes the bandage thickness. The pressures estimated using the two formulae are then compared, simulated over a 3D representation of a real leg, and validated experimentally. Both theoretical and experimental results show that taking bandage thickness into consideration when estimating the pressure applied by a medical compression bandage results in a more accurate estimation. However, the additional accuracy is clinically insignificant.
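
    The negligible-thickness case is typically the thin-shell (Laplace) relation; below is a hedged sketch of both formulas, with T the bandage tension, w its width, n the number of layers, t the bandage thickness, and r the limb radius (our notation, and the thickness-aware variant is an illustration of the idea rather than the paper's exact expression):

    \[
      P_{\text{thin}} \;=\; \frac{n\,T}{r\,w},
      \qquad
      P_{\text{thick}} \;=\; \sum_{i=1}^{n} \frac{T}{\left(r + (2i-1)\,t/2\right) w},
    \]

    which coincide as t tends to 0 and differ most on small-radius limbs wrapped in many layers, which is where the thickness correction matters.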

  1. From basic physics to mechanisms of toxicity: the "liquid drop" approach applied to develop predictive classification models for toxicity of metal oxide nanoparticles.

    PubMed

    Sizochenko, Natalia; Rasulev, Bakhtiyor; Gajewicz, Agnieszka; Kuz'min, Victor; Puzyn, Tomasz; Leszczynski, Jerzy

    2014-11-21

    Many metal oxide nanoparticles can cause persistent stress to living organisms, including humans, when discharged to the environment. To understand the mechanism of metal oxide nanoparticles' toxicity and reduce the number of experiments, the development of predictive toxicity models is important. In this study, performed on a series of nanoparticles, comparative quantitative structure-activity relationship (nano-QSAR) analyses of their toxicity towards E. coli and HaCaT cells were carried out. A new approach for representing nanoparticle structure is presented. To describe the supramolecular structure of the nanoparticles, the "liquid drop" model was applied. It is expected that the proposed approach could be of general use for predictions related to nanomaterials. In addition, fragmental simplex descriptors and several ligand-metal binding characteristics were calculated. The developed nano-QSAR models were validated and reliably predict the toxicity of all studied metal oxide nanoparticles. Based on a comparative analysis of the contributing properties in both models, the LDM-based descriptors were found to contribute to toxicity at an almost identical level in both cases, while the other parameters (van der Waals interactions, electronegativity and metal-ligand binding characteristics) contribute unequally. In addition, the models developed here suggest different mechanisms of nanotoxicity for these two types of cells.

  2. Mental models: an alternative evaluation of a sensemaking approach to ethics instruction.

    PubMed

    Brock, Meagan E; Vert, Andrew; Kligyte, Vykinta; Waples, Ethan P; Sevier, Sydney T; Mumford, Michael D

    2008-09-01

    In spite of the wide variety of approaches to ethics training, it is still debatable which approach has the highest potential to enhance professionals' integrity. The current effort assesses a novel curriculum that focuses on the metacognitive reasoning strategies researchers use when making sense of day-to-day professional practices that have ethical implications. The training's effectiveness was assessed by examining five key sensemaking processes (framing, emotion regulation, forecasting, self-reflection, and information integration) that experts and novices apply in ethical decision-making. Mental models of trained and untrained graduate students, as well as faculty, working in the field of the physical sciences were compared using a think-aloud protocol 6 months following the ethics training. Evaluation and comparison of the participants' mental models provided further validation evidence for sensemaking training. Specifically, trained students applied the metacognitive reasoning strategies learned during training in their ethical decision-making, resulting in complex mental models focused on objective assessment of the situation. The mental models of faculty and untrained students were externally driven, with a heavy focus on autobiographical processes. The study shows that sensemaking training has the potential to induce shifts in researchers' mental models by making them more cognitively complex via the use of metacognitive reasoning strategies. Furthermore, field experts may benefit from sensemaking training to improve their ethical decision-making framework in highly complex, novel, and ambiguous situations.

  3. A comparative modeling study of a dual tracer experiment in a large lysimeter under atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Stumpp, C.; Nützmann, G.; Maciejewski, S.; Maloszewski, P.

    2009-09-01

    In this paper, five model approaches with different physical and mathematical concepts, varying in their complexity and requirements, were applied to identify the transport processes in the unsaturated zone. The applicability of these model approaches was compared and evaluated by investigating two tracer breakthrough curves (bromide, deuterium) in a cropped, free-draining lysimeter experiment under natural atmospheric boundary conditions. The data set consisted of time series of water balance, depth-resolved water contents, pressure heads and resident concentrations measured during 800 days. The tracer transport parameters were determined using a simple stochastic (stream tube) model, three lumped-parameter models (constant water content model, multi-flow dispersion model, variable flow dispersion model) and a transient model approach. All of them were able to fit the tracer breakthrough curves. The identified transport parameters of each model approach were compared. Despite the differing physical and mathematical concepts, the resulting parameters (mean water contents, mean water flux, dispersivities) of the five model approaches were all in the same range. The results indicate that the flow processes can also be described assuming steady-state conditions. Homogeneous matrix flow is dominant, and a small pore volume with enhanced flow velocities near saturation was identified with the variably saturated flow and transport approach. The multi-flow dispersion model also identified preferential flow and additionally suggested a third, less mobile flow component. Due to the high fitting accuracy and parameter similarity, all model approaches gave reliable results.

  4. A TRAINING MODEL FOR THE JOBLESS ADULT.

    ERIC Educational Resources Information Center

    ULRICH, BERNARD

    THE TRAINING SYSTEMS DESIGN, AN INTERDISCIPLINARY APPROACH UTILIZING KNOWLEDGE OF BEHAVIORAL SCIENCES, NEW INSTRUCTIONAL TECHNOLOGY, AND SYSTEMS DESIGN, HAS BEEN APPLIED TO DEVELOP A MODEL FOR RE-EDUCATING AND TRAINING THE AGING UNEMPLOYED. RESEARCH INTO EXISTING MDTA DEMONSTRATION PROGRAMS BY THE COOPERATIVE EFFORTS OF MCGRAW-HILL AND THE…

  5. A Model of Internal Communication in Adaptive Communication Systems.

    ERIC Educational Resources Information Center

    Williams, M. Lee

    A study identified and categorized different types of internal communication systems and developed an applied model of internal communication in adaptive organizational systems. Twenty-one large organizations were selected for their varied missions and diverse approaches to managing internal communication. Individual face-to-face or telephone…

  6. The Bayesian Revolution Approaches Psychological Development

    ERIC Educational Resources Information Center

    Shultz, Thomas R.

    2007-01-01

    This commentary reviews five articles that apply Bayesian ideas to psychological development, some with psychology experiments, some with computational modeling, and some with both experiments and modeling. The reviewed work extends the current Bayesian revolution into tasks often studied in children, such as causal learning and word learning, and…

  7. Geometric Electron Models.

    ERIC Educational Resources Information Center

    Nika, G. Gerald; Parameswaran, R.

    1997-01-01

    Describes a visual approach for explaining the filling of electrons in the shells, subshells, and orbitals of the chemical elements. Enables students to apply the principles of atomic electron configuration while using manipulatives to model the building up of electron configurations as the atomic numbers of elements increase on the periodic…

  8. Self-calibrating models for dynamic monitoring and diagnosis

    NASA Technical Reports Server (NTRS)

    Kuipers, Benjamin

    1996-01-01

    A method for automatically building qualitative and semi-quantitative models of dynamic systems, and using them for monitoring and fault diagnosis, is developed and demonstrated. The qualitative approach and semi-quantitative method are applied to monitoring observation streams and to the design of non-linear control systems.

  9. OOMM--Object-Oriented Matrix Modelling: an instrument for the integration of the Brasilia Regional Health Information System.

    PubMed

    Cammarota, M; Huppes, V; Gaia, S; Degoulet, P

    1998-01-01

    The development of Health Information Systems is largely determined by the establishment of the underlying information models. An Object-Oriented Matrix Model (OOMM) is described whose target is to facilitate the integration of the overall health system. The model is based on information modules named micro-databases, which are structured in a three-dimensional network: planning, health structures and information systems. The modelling tool has been developed as a layer on top of a relational database system. A visual browser facilitates the development and maintenance of the information model. The modelling approach has been applied to the Brasilia University Hospital since 1991. The extension of the modelling approach to the Brasilia regional health system is under consideration.

  10. Different approaches in Partial Least Squares and Artificial Neural Network models applied for the analysis of a ternary mixture of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2014-03-01

    Different chemometric models were applied for the quantitative analysis of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in a ternary mixture, namely Partial Least Squares (PLS) as a traditional chemometric model and Artificial Neural Networks (ANN) as an advanced model. PLS and ANN were applied with and without a variable selection procedure (Genetic Algorithm, GA) and a data compression procedure (Principal Component Analysis, PCA). The chemometric methods applied are PLS-1, GA-PLS, ANN, GA-ANN and PCA-ANN. The methods were used for the quantitative analysis of the drugs in raw materials and in a pharmaceutical dosage form via handling of the UV spectral data. A 3-factor, 5-level experimental design was established, resulting in 25 mixtures containing different ratios of the drugs. Fifteen mixtures were used as a calibration set and the other ten mixtures were used as a validation set to validate the prediction ability of the suggested methods. The validity of the proposed methods was assessed using the standard addition technique.
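    For readers unfamiliar with the PLS-1 step, the sketch below shows the calibration/validation split on 25 design-point mixtures using scikit-learn. It is a minimal illustration under assumed data: the spectra matrix and concentration vector are random stand-ins, and the number of latent variables (3) is arbitrary rather than the optimum reported by the authors.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        spectra = rng.random((25, 200))   # 25 mixtures x 200 wavelengths (stand-in)
        conc = rng.random(25) * 10        # analyte concentration per mixture

        calib, valid = slice(0, 15), slice(15, 25)   # 15 calibration / 10 validation
        pls = PLSRegression(n_components=3).fit(spectra[calib], conc[calib])
        pred = pls.predict(spectra[valid]).ravel()
        rmsep = np.sqrt(np.mean((pred - conc[valid]) ** 2))
        print(f"RMSEP on the validation set: {rmsep:.3f}")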

  11. Rapid acquisition and model-based analysis of cell-free transcription–translation reactions from nonmodel bacteria

    PubMed Central

    Wienecke, Sarah; Ishwarbhai, Alka; Tsipa, Argyro; Aw, Rochelle; Kylilis, Nicolas; Bell, David J.; McClymont, David W.; Jensen, Kirsten; Biedendieck, Rebekka

    2018-01-01

    Native cell-free transcription–translation systems offer a rapid route to characterize the regulatory elements (promoters, transcription factors) for gene expression from nonmodel microbial hosts, which can be difficult to assess through traditional in vivo approaches. One such host, Bacillus megaterium, is a giant Gram-positive bacterium with potential biotechnology applications, although many of its regulatory elements remain uncharacterized. Here, we have developed a rapid automated platform for measuring and modeling in vitro cell-free reactions and have applied this to B. megaterium to quantify a range of ribosome binding site variants and previously uncharacterized endogenous constitutive and inducible promoters. To provide quantitative models for cell-free systems, we have also applied a Bayesian approach to infer ordinary differential equation model parameters by simultaneously using time-course data from multiple experimental conditions. Using this modeling framework, we were able to infer previously unknown transcription factor binding affinities and quantify the sharing of cell-free transcription–translation resources (energy, ribosomes, RNA polymerases, nucleotides, and amino acids) using a promoter competition experiment. This provides insights into the resource-limiting factors of batch-mode cell-free synthesis. Our combined automation and modeling platform allows for the rapid acquisition and model-based analysis of cell-free transcription–translation data from uncharacterized microbial cell hosts, as well as of resource competition within cell-free systems, which can potentially be applied to a range of cell-free synthetic biology and biotechnology applications. PMID:29666238
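    A heavily simplified sketch of the Bayesian parameter-inference step is given below: a two-state mRNA/protein ODE model of cell-free expression and a basic Metropolis random walk over a single transcription rate. All rate constants, the noise level, and the data are assumed/synthetic; the authors' actual model, priors, and sampler are not reproduced here.

        import numpy as np
        from scipy.integrate import solve_ivp

        D_M, K_TL, D_P = 0.1, 2.0, 0.02        # fixed rate constants (assumed)

        def protein_timecourse(k_tx, t):
            # mRNA/protein ODEs: dm/dt = k_tx - D_M*m, dp/dt = K_TL*m - D_P*p
            rhs = lambda _, y: [k_tx - D_M * y[0], K_TL * y[0] - D_P * y[1]]
            return solve_ivp(rhs, (0, t[-1]), [0.0, 0.0], t_eval=t).y[1]

        t = np.linspace(0, 240, 25)            # minutes
        data = protein_timecourse(0.5, t) + np.random.normal(0, 5, t.size)

        def log_post(k_tx, sigma=5.0):
            if k_tx <= 0:
                return -np.inf                 # flat prior on k_tx > 0
            resid = data - protein_timecourse(k_tx, t)
            return -0.5 * np.sum((resid / sigma) ** 2)

        # Metropolis random walk over the single parameter k_tx.
        samples, k, lp = [], 1.0, log_post(1.0)
        for _ in range(2000):
            prop = k + np.random.normal(0, 0.05)
            lp_prop = log_post(prop)
            if np.log(np.random.rand()) < lp_prop - lp:
                k, lp = prop, lp_prop
            samples.append(k)
        print(f"posterior mean k_tx = {np.mean(samples[500:]):.3f}")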

  12. Forecasting daily lake levels using artificial intelligence approaches

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Shiri, Jalal; Nikoofar, Bagher

    2012-04-01

    Accurate prediction of lake-level variations is important for the planning, design, construction, and operation of lakeshore structures and also in the management of freshwater lakes for water supply purposes. In the present paper, three artificial intelligence approaches, namely artificial neural networks (ANNs), adaptive neuro-fuzzy inference systems (ANFIS), and gene expression programming (GEP), were applied to forecast daily lake-level variations up to 3 days ahead. The measurements at Lake Iznik in Western Turkey, for the period January 1961-December 1982, were used for training, testing, and validating the employed models. The results obtained by the GEP approach indicated that it performs better than ANFIS and ANNs in predicting lake-level variations. A comparison was also made between these artificial intelligence approaches and conventional autoregressive moving average (ARMA) models, which demonstrated the superiority of the GEP, ANFIS, and ANN models over the ARMA models.
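    As a minimal sketch of the lag-based setup (not the paper's ANN, ANFIS, or GEP implementations), the snippet below trains a one-day-ahead neural-network forecaster on three lagged levels. The level series is synthetic; the study used daily Lake Iznik records.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        t = np.arange(4000)                    # ~11 years of synthetic daily levels
        level = (85 + 0.5 * np.sin(2 * np.pi * t / 365)
                 + 0.05 * rng.normal(0, 1, t.size).cumsum())

        lags = 3                               # inputs: L(t-1), L(t-2), L(t-3)
        X = np.column_stack([level[i:-(lags - i)] for i in range(lags)])
        y = level[lags:]
        split = int(0.8 * len(y))              # chronological train/test split

        ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                           random_state=0).fit(X[:split], y[:split])
        rmse = np.sqrt(np.mean((ann.predict(X[split:]) - y[split:]) ** 2))
        print(f"one-day-ahead test RMSE: {rmse:.4f} m")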

  13. A model predictive speed tracking control approach for autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Zhu, Min; Chen, Huiyan; Xiong, Guangming

    2017-03-01

    This paper presents a novel speed tracking control approach based on a model predictive control (MPC) framework for autonomous ground vehicles. A switching algorithm without calibration is proposed to determine drive or brake control. Combined with a simple inverse longitudinal vehicle model and adaptive regulation of the MPC, this algorithm can make use of engine brake torque under various driving conditions and automatically avoid high-frequency oscillations. A simplified quadratic program (QP) solving algorithm is used to reduce the computational time, and the approach has been implemented on a 16-bit microcontroller. The performance of the proposed approach is evaluated via simulations and vehicle tests, which were carried out over a range of speed-profile tracking tasks. With a well-designed system structure, high-precision speed control is achieved. The system is robust to model uncertainty and external disturbances, and yields a faster response with less overshoot than a PI controller.
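    The receding-horizon idea behind MPC speed tracking can be illustrated with a toy example: a first-order longitudinal model, an unconstrained QP solved in closed form, and a drive/brake switch on the sign of the commanded input. All constants below are assumed, and the sketch omits the constraints, inverse vehicle model, and simplified QP solver of the actual controller.

        import numpy as np

        a, b, N, r = 0.98, 0.05, 10, 0.1       # model, horizon, input weight

        def mpc_step(v0, v_ref):
            # Stack the horizon: v = F*v0 + G*u with lower-triangular G.
            F = np.array([a ** (i + 1) for i in range(N)])
            G = np.array([[a ** (i - j) * b if j <= i else 0.0
                           for j in range(N)] for i in range(N)])
            # Minimize ||G u - (v_ref - F v0)||^2 + r ||u||^2 in closed form.
            H = G.T @ G + r * np.eye(N)
            u = np.linalg.solve(H, G.T @ (v_ref - F * v0))
            return u[0]                        # receding horizon: first move only

        v, v_ref = 0.0, 15.0                   # m/s
        for _ in range(100):
            u = mpc_step(v, v_ref)
            mode = "drive" if u >= 0 else "brake"
            v = a * v + b * u                  # plant update (model-matched here)
        print(f"speed after 100 steps: {v:.2f} m/s, last mode: {mode}")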

  14. Conformal Regression for Quantitative Structure-Activity Relationship Modeling-Quantifying Prediction Uncertainty.

    PubMed

    Svensson, Fredrik; Aniceto, Natalia; Norinder, Ulf; Cortes-Ciriano, Isidro; Spjuth, Ola; Carlsson, Lars; Bender, Andreas

    2018-05-29

    Making predictions with an associated confidence is highly desirable as it facilitates decision making and resource prioritization. Conformal regression is a machine learning framework that allows the user to define the required confidence and delivers predictions that are guaranteed to be correct to the selected extent. In this study, we apply conformal regression to model molecular properties and bioactivity values and investigate different ways to scale the resultant prediction intervals to create as efficient (i.e., narrow) regressors as possible. Different algorithms to estimate the prediction uncertainty were used to normalize the prediction ranges, and the different approaches were evaluated on 29 publicly available data sets. Our results show that the most efficient conformal regressors are obtained when using the natural exponential of the ensemble standard deviation from the underlying random forest to scale the prediction intervals, but other approaches were almost as efficient. This approach afforded an average prediction range of 1.65 pIC50 units at the 80% confidence level when applied to bioactivity modeling. The choice of nonconformity function has a pronounced impact on the average prediction range with a difference of close to one log unit in bioactivity between the tightest and widest prediction range. Overall, conformal regression is a robust approach to generate bioactivity predictions with associated confidence.
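    The normalized conformal step described above can be sketched directly: calibrate |error| / sigma nonconformity scores on a held-out set, with sigma taken as the exponential of the ensemble standard deviation of a random forest. The data below are synthetic and the 80% level matches the one quoted in the abstract; this is an illustration, not the paper's code.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((600, 5))
        y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.0]) + rng.normal(0, 0.3, 600)

        X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5,
                                                      random_state=0)
        X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest,
                                                    test_size=0.5, random_state=0)

        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

        def sigma(model, X):
            # exp of the std-dev across the individual trees' predictions
            tree_preds = np.stack([t.predict(X) for t in model.estimators_])
            return np.exp(tree_preds.std(axis=0))

        # Calibration: normalized nonconformity scores |error| / sigma.
        alpha = np.abs(y_cal - rf.predict(X_cal)) / sigma(rf, X_cal)
        q = np.quantile(alpha, 0.80)           # 80% confidence level

        half = q * sigma(rf, X_te)             # per-sample interval half-widths
        coverage = np.mean(np.abs(y_te - rf.predict(X_te)) <= half)
        print(f"empirical coverage: {coverage:.2f}, "
              f"mean interval width: {np.mean(2 * half):.2f}")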

  15. Relative Performance of Rescaling and Resampling Approaches to Model Chi Square and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Johnathan; Hancock, Gregory R.

    Though common structural equation modeling (SEM) methods are predicated upon the assumption of multivariate normality, applied researchers often find themselves with data clearly violating this assumption and without sufficient sample size to use distribution-free estimation methods. Fortunately, promising alternatives are being integrated into…

  16. A Dual-Process Approach to Health Risk Decision Making: The Prototype Willingness Model

    ERIC Educational Resources Information Center

    Gerrard, Meg; Gibbons, Frederick X.; Houlihan, Amy E.; Stock, Michelle L.; Pomery, Elizabeth A.

    2008-01-01

    Although dual-process models in cognitive, personality, and social psychology have stimulated a large body of research about analytic and heuristic modes of decision making, these models have seldom been applied to the study of adolescent risk behaviors. In addition, the developmental course of these two kinds of information processing, and their…

  17. Addressing HIV in the School Setting: Application of a School Change Model

    ERIC Educational Resources Information Center

    Walsh, Audra St. John; Chenneville, Tiffany

    2013-01-01

    This paper describes best practices for responding to youth with human immunodeficiency virus (HIV) in the school setting through the application of a school change model designed by the World Health Organization. This model applies a whole-school approach and includes four levels that span the continuum from universal prevention to direct…

  18. Formulating "Principles of Procedure" for the Foreign Language Classroom: A Framework for Process Model Language Curricula

    ERIC Educational Resources Information Center

    Villacañas de Castro, Luis S.

    2016-01-01

    This article aims to apply Stenhouse's process model of curriculum to foreign language (FL) education, a model characterized by enacting "principles of procedure" specific to the discipline to which the school subject belongs. Rather than replace or dissolve current approaches to FL teaching and curriculum…

  19. Emerging from Depression: Treatment of Adolescent Depression Using the Major Treatment Models of Adult Depression.

    ERIC Educational Resources Information Center

    Long, Kathleen M.

    Noting that adolescents who commit suicide are often clinically depressed, this paper examines various approaches in the treatment of depression. Major treatment models of adult depression, which can be directly applied to the treatment of the depressed adolescent, are described. Major treatment models and selected research studies are reviewed in…

  20. Identifying and Applying the Communicative and the Constructivist Approaches To Facilitate Transfer of Knowledge in the Bilingual Classroom.

    ERIC Educational Resources Information Center

    Olivares, Rafael A.; Lemberger, Nancy

    2002-01-01

    Provides recommendations for the implementation of the communication, constructivism, and transference of knowledge (CCT) model in the education of English language learners (ELLs). Describes how the CCT model is identified in research studies and suggests specific recommendations to facilitate the implementation of the model in the education of…
