Sample records for model analysis shows

  1. A quantitative analysis of the F18 flight control system

    NASA Technical Reports Server (NTRS)

    Doyle, Stacy A.; Dugan, Joanne B.; Patterson-Hine, Ann

    1993-01-01

    This paper presents an informal quantitative analysis of the F18 flight control system (FCS). The analysis technique combines a coverage model with a fault tree model. To demonstrate the method's extensive capabilities, we replace the fault tree with a digraph model of the F18 FCS, the only model available to us. The substitution shows that while digraphs have primarily been used for qualitative analysis, they can also be used for quantitative analysis. Based on our assumptions and the particular failure rates assigned to the F18 FCS components, we show that coverage does have a significant effect on the system's reliability and thus it is important to include coverage in the reliability analysis.
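
    The coverage effect described above is easy to reproduce on a toy duplex (1-out-of-2) subsystem. The sketch below is not the authors' digraph method and uses hypothetical failure rates; it integrates a three-state Markov coverage model, in which a unit failure is handled successfully with probability c, and shows how unreliability grows as coverage drops.

      # Three states: both units up, one unit up, system failed.
      import numpy as np
      from scipy.integrate import solve_ivp

      lam = 1e-4      # per-hour failure rate of one unit (hypothetical)
      t_end = 5000.0  # mission time in hours (hypothetical)

      def markov(t, p, c):
          p2, p1, pf = p
          return [-2*lam*p2,                      # leave "2 up" on any failure
                  2*lam*c*p2 - lam*p1,            # covered failure -> "1 up"
                  2*lam*(1-c)*p2 + lam*p1]        # uncovered failure or last unit

      for c in (1.0, 0.99, 0.9):
          sol = solve_ivp(markov, (0, t_end), [1.0, 0.0, 0.0], args=(c,), rtol=1e-9)
          print(f"c={c}: unreliability at {t_end:.0f} h = {sol.y[2, -1]:.3e}")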

  2. Symmetric functions and wavefunctions of XXZ-type six-vertex models and elliptic Felderhof models by Izergin-Korepin analysis

    NASA Astrophysics Data System (ADS)

    Motegi, Kohei

    2018-05-01

    We present a method to analyze the wavefunctions of six-vertex models by extending the Izergin-Korepin analysis originally developed for domain wall boundary partition functions. First, we apply the method to the case of the basic wavefunctions of the XXZ-type six-vertex model. By giving the Izergin-Korepin characterization of the wavefunctions, we show that these wavefunctions can be expressed as multiparameter deformations of the quantum group deformed Grothendieck polynomials. As a second example, we show that the Izergin-Korepin analysis is effective for analysis of the wavefunctions for a triangular boundary and present the explicit forms of the symmetric functions representing these wavefunctions. As a third example, we apply the method to the elliptic Felderhof model which is a face-type version and an elliptic extension of the trigonometric Felderhof model. We show that the wavefunctions can be expressed as one-parameter deformations of an elliptic analog of the Vandermonde determinant and elliptic symmetric functions.

  3. Categorical Data Analysis Using a Skewed Weibull Regression Model

    NASA Astrophysics Data System (ADS)

    Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano

    2018-03-01

In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. We analyze two data sets to show the efficiency of the proposed model.

  4. Analysis on the crime model using dynamical approach

    NASA Astrophysics Data System (ADS)

    Mohammad, Fazliza; Roslan, Ummu'Atiqah Mohd

    2017-08-01

Research was carried out to analyze a dynamical model of the spread of crime. A simplified two-dimensional model is used. The objectives of this research are to investigate the stability of the crime-spread model, to summarize that stability using bifurcation analysis, and to study the relationship between the basic reproduction number R0 and a parameter of the model. Our results for the stability of the equilibrium points show two types of behavior: asymptotically stable points and saddle points. The bifurcation analysis shows that the numbers of criminally active and incarcerated individuals increase as we increase the value of a parameter in the model. The analysis of R0 shows that as the parameter increases, R0 increases, and so does the crime rate.
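
    The stability workflow sketched in this abstract (find the equilibria, then classify them by the eigenvalues of the Jacobian) can be illustrated with a generic two-compartment stand-in; the system below is a hypothetical logistic-recruitment/incarceration toy, not the paper's model.

      import sympy as sp

      c, j = sp.symbols('c j')                       # criminally active, incarcerated
      beta, delta, r = sp.symbols('beta delta r', positive=True)

      f = sp.Matrix([beta*c*(1 - c - j) - delta*c,   # recruitment minus incarceration
                     delta*c - r*j])                 # incarceration minus release

      J = f.jacobian([c, j])
      for eq in sp.solve(list(f), [c, j], dict=True):
          print(eq, '->', list(J.subs(eq).eigenvals()))   # signs decide stability
      # In this toy system a threshold such as R0 = beta/delta separates the
      # crime-free equilibrium from the endemic one.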

  5. Dynamical analysis of cigarette smoking model with a saturated incidence rate

    NASA Astrophysics Data System (ADS)

    Zeb, Anwar; Bano, Ayesha; Alzahrani, Ebraheem; Zaman, Gul

    2018-04-01

In this paper, we consider a delayed smoking model in which the potential smokers are assumed to satisfy the logistic equation. We discuss the dynamical behavior of the proposed model, formulated as a system of delay differential equations (DDEs), and give conditions for asymptotic stability of the model in steady state. We also discuss the Hopf bifurcation analysis of the considered model. Finally, we use a nonstandard finite difference (NSFD) scheme to show the results graphically with the help of MATLAB.
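
    A minimal Mickens-type NSFD sketch for the logistic growth law assumed for the potential smokers (the paper's full delayed system is not reproduced; rate, step size, and initial value are hypothetical). The derivative is discretized as (x_{n+1} - x_n)/phi(h) = r*x_n*(1 - x_{n+1}), which yields an explicit, positivity-preserving update.

      import numpy as np

      r, h, steps = 0.8, 0.5, 40           # hypothetical rate, step, horizon
      phi = (np.exp(r*h) - 1.0) / r        # NSFD denominator function phi(h)
      x = 0.1
      for _ in range(steps):
          x = x * (1 + phi*r) / (1 + phi*r*x)   # monotone approach to x* = 1
      print(round(x, 6))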

  6. [Analysis of the stability and adaptability of near infrared spectra qualitative analysis model].

    PubMed

    Cao, Wu; Li, Wei-jun; Wang, Ping; Zhang, Li-ping

    2014-06-01

The stability and adaptability of near-infrared spectral qualitative analysis models were studied. Separate modeling can significantly improve model stability and adaptability, but its ability to improve adaptability is limited. Joint modeling improves both the adaptability and the stability of the model; at the same time, compared with separate modeling, it shortens modeling time, reduces the modeling workload, extends the model's term of validity, and improves modeling efficiency. The adaptability experiment shows that the correct recognition rate of the separate modeling method is relatively low and cannot meet application requirements, whereas the joint modeling method reaches a correct recognition rate of 90% and significantly enhances recognition. The stability experiment shows that the identification results of the jointly built model are better than those of the separately built model, giving the method good application value.

  7. INNOVATIVE INSTRUMENTATION AND ANALYSIS OF THE TEMPERATURE MEASUREMENT FOR HIGH TEMPERATURE GASIFICATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seong W. Lee

During this reporting period, the literature survey, covering the gasifier temperature measurement literature, ultrasonic cleaning applications and their background, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error). The regression analysis for the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than the linear regression: the nonlinear model's accuracy is 88.7% (11.3% error) for the normalized room temperature case. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. System tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and the analysis of their results, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
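
    A four-factor ANOVA of the kind described can be sketched with statsmodels; the column names, factor levels, and synthetic responses below are hypothetical stand-ins for the cold-model test data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      rng = np.random.default_rng(0)
      n = 64
      df = pd.DataFrame({
          'voltage':    rng.choice([60, 80], n),    # blower voltage
          'ultrasonic': rng.choice([0, 1], n),      # ultrasonic on/off
          'interval':   rng.choice([5, 10], n),     # injection time interval
          'weight':     rng.choice([1.0, 2.0], n),  # particle weight
      })
      df['temp'] = (25 + 0.1*df.voltage + 2*df.ultrasonic
                    + 0.3*df.interval + 1.5*df.weight + rng.normal(0, 1, n))

      fit = smf.ols('temp ~ C(voltage) + C(ultrasonic) + C(interval) + C(weight)',
                    data=df).fit()
      print(anova_lm(fit))                 # F test for each factor
      print(f"R^2 = {fit.rsquared:.3f}")   # cf. the 82% linear-fit figure above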

  8. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and of future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing the variability among multi-model outputs of terrestrial biosphere models in Japan. Using nine terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan, each with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models run with default settings deviated substantially from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. The spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrates that careful validation and calibration of models with available eddy flux data reduce model-by-model differences. Nevertheless, site history, analysis of changes in model structure, and a more objective model calibration procedure should be included in further analyses.

  9. Dynamical systems analysis of phantom dark energy models

    NASA Astrophysics Data System (ADS)

    Roy, Nandan; Bhadra, Nivedita

    2018-06-01

In this work, we perform a dynamical systems analysis of phantom dark energy models for five different potentials. From the analysis of these five potentials we find a general parametrization of the scalar field potential which is obeyed by many other potentials. Our investigation shows that there is only one fixed point which could correspond to the beginning of the universe, whereas the future evolution admits several possible outcomes. A detailed numerical analysis of the system is presented. The late-time behaviour found in this analysis shows very good agreement with recent observations.

  10. Flocking of the Motsch-Tadmor Model with a Cut-Off Interaction Function

    NASA Astrophysics Data System (ADS)

    Jin, Chunyin

    2018-04-01

    In this paper, we study the flocking behavior of the Motsch-Tadmor model with a cut-off interaction function. Our analysis shows that connectedness is important for flocking of this kind of model. Fortunately, we get a sufficient condition imposed only on the model parameters and initial data to guarantee the connectedness of the neighbor graph associated with the system. Then we present a theoretical analysis for flocking, and show that the system achieves consensus at an exponential rate.
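
    The Motsch-Tadmor dynamics with a hard cut-off can be simulated directly; the sketch below (hypothetical parameters and random initial data) uses the model's defining feature, alignment weights normalized by each agent's own total influence, with phi(d) = 1 for d < R and 0 otherwise.

      import numpy as np

      rng = np.random.default_rng(1)
      N, R, dt, steps = 20, 1.5, 0.05, 400
      x = rng.uniform(0, 3, (N, 2))               # positions
      v = rng.normal(0, 1, (N, 2))                # velocities

      for _ in range(steps):
          d = np.linalg.norm(x[:, None] - x[None, :], axis=2)
          phi = (d < R).astype(float)             # cut-off interaction function
          w = phi / phi.sum(axis=1, keepdims=True)    # row-stochastic MT weights
          v += dt * (w @ v - v)                   # dv_i = sum_j w_ij (v_j - v_i)
          x += dt * v

      print(np.std(v, axis=0))   # velocity spread shrinks toward 0 if flocking occurs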

  11. Applying Model Analysis to a Resource-Based Analysis of the Force and Motion Conceptual Evaluation

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Wittmann, Michael C.; Carter, Tom

    2014-01-01

    Previously, we analyzed the Force and Motion Conceptual Evaluation in terms of a resources-based model that allows for clustering of questions so as to provide useful information on how students correctly or incorrectly reason about physics. In this paper, we apply model analysis to show that the associated model plots provide more information…

  12. Multicenter external validation of two malignancy risk prediction models in patients undergoing 18F-FDG-PET for solitary pulmonary nodule evaluation.

    PubMed

    Perandini, Simone; Soardi, G A; Larici, A R; Del Ciello, A; Rizzardi, G; Solazzo, A; Mancino, L; Zeraj, F; Bernhart, M; Signorini, M; Motton, M; Montemezzi, S

    2017-05-01

To achieve multicentre external validation of the Herder and Bayesian Inference Malignancy Calculator (BIMC) models. Two hundred and fifty-nine solitary pulmonary nodules (SPNs) that underwent 18F-FDG-PET characterization, collected from four major hospitals, were included in this multicentre retrospective study. The Herder model was tested on all available lesions (group A). A subgroup of 180 SPNs (group B) was used to provide an unbiased comparison between the Herder and BIMC models. Receiver operating characteristic (ROC) area under the curve (AUC) analysis was performed to assess diagnostic accuracy. Decision analysis was performed by adopting the risk threshold stated in the British Thoracic Society (BTS) guidelines. The unbiased comparison performed in group B showed a ROC AUC for the Herder model of 0.807 (95 % CI 0.742-0.862) and for the BIMC model of 0.822 (95 % CI 0.758-0.875). Both the Herder and the BIMC models were proven to accurately predict the risk of malignancy when tested on a large multicentre external case series. The BIMC model seems advantageous on the basis of a more favourable decision analysis. • The Herder model showed a ROC AUC of 0.807 on 180 SPNs. • The BIMC model showed a ROC AUC of 0.822 on 180 SPNs. • Decision analysis is more favourable to the BIMC model.
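
    ROC AUC comparisons of this kind are a one-liner once per-nodule risk scores are available; the labels and scores below are invented, not the study data.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      y = np.array([0, 0, 1, 1, 0, 1, 0, 1])             # 1 = malignant
      p_herder = np.array([.1, .3, .7, .6, .2, .9, .4, .5])
      p_bimc   = np.array([.1, .2, .8, .7, .2, .9, .3, .6])
      print("Herder AUC:", roc_auc_score(y, p_herder))
      print("BIMC AUC:  ", roc_auc_score(y, p_bimc))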

  13. Immortal time bias in observational studies of time-to-event outcomes.

    PubMed

    Jones, Mark; Fowler, Robert

    2016-12-01

    The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare 4 methods of analysis for observational studies of time-to-event outcomes: logistic regression, standard Cox model, landmark analysis, and time-dependent Cox model using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir provides protection from mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken account of using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
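
    The recommended analysis corresponds to coding exposure over (start, stop] intervals so that pre-treatment person-time is never credited to the treated group, the source of immortal time bias. A minimal sketch with the lifelines package (toy data, hypothetical column names):

      import pandas as pd
      from lifelines import CoxTimeVaryingFitter

      long_df = pd.DataFrame({
          'id':      [1, 1, 2, 3, 3],
          'start':   [0, 2, 0, 0, 4],
          'stop':    [2, 9, 5, 4, 7],
          'treated': [0, 1, 0, 0, 1],   # switches on when treatment starts
          'event':   [0, 1, 1, 0, 0],   # death at the end of the interval
      })
      ctv = CoxTimeVaryingFitter()
      ctv.fit(long_df, id_col='id', event_col='event',
              start_col='start', stop_col='stop')
      ctv.print_summary()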

  14. Future costs in cost effectiveness analysis.

    PubMed

    Lee, Robert H

    2008-07-01

This paper resolves several controversies in cost-effectiveness analysis (CEA). Generalizing [Garber, A.M., Phelps, C.E., 1997. Economic foundations of cost-effectiveness analysis. Journal of Health Economics 16 (1), 1-31], the paper shows that accounting for unrelated future costs distorts decision making. After replicating the quite different conclusion of [Meltzer, D., 1997. Accounting for future costs in medical cost-effectiveness analysis. Journal of Health Economics 16 (1), 33-64] that unrelated future costs should be included in CEA, the paper shows that Meltzer's findings result from modeling the budget constraint as an annuity, which is problematic. The paper also shows that related costs should be included in CEA. This holds for a variety of models, including a health maximization model. CEA should treat costs in the manner recommended by Garber and Phelps.

  15. Analytics For Distracted Driver Behavior Modeling in Dilemma Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jan-Mou; Malikopoulos, Andreas; Thakur, Gautam

    2014-01-01

In this paper, we present the results obtained and insights gained through the analysis of TRB contest data. We used exploratory analysis, regression, and clustering models to gain insights into driver behavior in a dilemma zone while driving under distraction. Simple exploratory analysis showed distinguishing driver behavior patterns among different population groups in the dilemma zone, while regression analysis showed statistically significant relationships between groups of variables. In addition to analyzing the contest data, we have also looked into the possible impact of distracted driving on fuel economy.

  16. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
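
    The pipeline (windowed features, dimension reduction, RBF network readout) can be sketched as follows; plain PCA stands in for (2D)2PCA, the RBF net is built from k-means centres with a least-squares output layer, and the price series is synthetic.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA

      rng = np.random.default_rng(0)
      prices = np.cumsum(rng.normal(0, 1, 400)) + 100        # synthetic index
      win = 20
      X = np.lib.stride_tricks.sliding_window_view(prices[:-1], win)
      y = prices[win:]                                       # next-day target

      Z = PCA(n_components=5).fit_transform(X)               # dimension reduction
      centers = KMeans(n_clusters=10, n_init=10).fit(Z).cluster_centers_
      width = Z.std()                                        # crude RBF bandwidth
      Phi = np.exp(-np.linalg.norm(Z[:, None] - centers[None], axis=2)**2
                   / (2 * width**2))                         # hidden-layer outputs
      w, *_ = np.linalg.lstsq(Phi, y, rcond=None)            # linear readout
      print("in-sample RMSE:", np.sqrt(np.mean((Phi @ w - y)**2)))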

  17. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483

  18. Estimating animal resource selection from telemetry data using point process models

    USGS Publications Warehouse

    Johnson, Devin S.; Hooten, Mevin B.; Kuhn, Carey E.

    2013-01-01

    To demonstrate the analysis of telemetry data with the point process approach, we analysed a data set of telemetry locations from northern fur seals (Callorhinus ursinus) in the Pribilof Islands, Alaska. Both a space–time and an aggregated space-only model were fitted. At the individual level, the space–time analysis showed little selection relative to the habitat covariates. However, at the study area level, the space-only model showed strong selection relative to the covariates.

  19. Review and developments of dissemination models for airborne carbon fibers

    NASA Technical Reports Server (NTRS)

    Elber, W.

    1980-01-01

    Dissemination prediction models were reviewed to determine their applicability to a risk assessment for airborne carbon fibers. The review showed that the Gaussian prediction models using partial reflection at the ground agreed very closely with a more elaborate diffusion analysis developed for the study. For distances beyond 10,000 m the Gaussian models predicted a slower fall-off in exposure levels than the diffusion models. This resulting level of conservatism was preferred for the carbon fiber risk assessment. The results also showed that the perfect vertical-mixing models developed herein agreed very closely with the diffusion analysis for all except the most stable atmospheric conditions.
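
    The "partial reflection at the ground" in a Gaussian model enters as a scaled image-source term. A minimal sketch (the dispersion curves and all parameter values are illustrative, not the study's carbon-fiber data):

      import numpy as np

      def plume(x, y, z, Q=1.0, u=5.0, H=10.0, alpha=0.8):
          """Concentration with ground reflection coefficient alpha in [0, 1]."""
          sig_y = 0.08 * x**0.9          # hypothetical dispersion curves (m)
          sig_z = 0.06 * x**0.85
          lateral = np.exp(-y**2 / (2*sig_y**2))
          direct  = np.exp(-(z - H)**2 / (2*sig_z**2))
          image   = alpha * np.exp(-(z + H)**2 / (2*sig_z**2))   # ground image
          return Q / (2*np.pi*u*sig_y*sig_z) * lateral * (direct + image)

      for x in (100.0, 1000.0, 10000.0):   # exposure fall-off with distance
          print(f"{x:>8.0f} m: {plume(x, 0.0, 1.5):.3e}")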

  20. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
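
    Pareto optimality over multiple outputs reduces to a dominance filter: a parameter set is kept unless some other set is at least as good on every criterion and strictly better on one. A minimal sketch with random stand-ins for simulation errors (to be minimized):

      import numpy as np

      rng = np.random.default_rng(2)
      errors = rng.uniform(size=(200, 3))   # 200 parameter sets x 3 criteria

      def pareto_front(E):
          keep = []
          for i, e in enumerate(E):
              dominated = np.any(np.all(E <= e, axis=1) & np.any(E < e, axis=1))
              if not dominated:
                  keep.append(i)
          return np.array(keep)

      front = pareto_front(errors)
      print(len(front), "non-dominated parameter sets out of", len(errors))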

  1. Advancing cloud lifecycle representation in numerical models using innovative analysis methods that bridge arm observations over a breadth of scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tselioudis, George

    2016-03-04

From its location on the subtropics-midlatitude boundary, the Azores is influenced by both the subtropical high pressure and the midlatitude baroclinic storm regimes, and therefore experiences a wide range of cloud structures, from fair-weather scenes to stratocumulus sheets to deep convective systems. This project combined three types of data sets to study cloud variability in the Azores: a satellite analysis of cloud regimes, a reanalysis characterization of storminess, and a 19-month field campaign that occurred on Graciosa Island. Combined analysis of the three data sets provides a detailed picture of cloud variability and the respective dynamic influences, with emphasis on low clouds that constitute a major uncertainty source in climate model simulations. The satellite cloud regime analysis shows that the Azores cloud distribution is similar to the mean global distribution and can therefore be used to evaluate cloud simulation in global models. Regime analysis of low clouds shows that stratocumulus decks occur under the influence of the Azores high-pressure system, while shallow cumulus clouds are sustained by cold-air outbreaks, as revealed by their preference for post-frontal environments and northwesterly flows. An evaluation of CMIP5 climate model cloud regimes over the Azores shows that all models severely underpredict shallow cumulus clouds, while most models also underpredict the occurrence of stratocumulus cloud decks. It is demonstrated that carefully selected case studies can be related through regime analysis to climatological cloud distributions, and a methodology is suggested utilizing process-resolving model simulations of individual cases to better understand cloud-dynamics interactions and attempt to explain and correct climate model cloud deficiencies.

  2. Applying STAMP in Accident Analysis

    NASA Technical Reports Server (NTRS)

    Leveson, Nancy; Daouk, Mirna; Dulac, Nicolas; Marais, Karen

    2003-01-01

    Accident models play a critical role in accident investigation and analysis. Most traditional models are based on an underlying chain of events. These models, however, have serious limitations when used for complex, socio-technical systems. Previously, Leveson proposed a new accident model (STAMP) based on system theory. In STAMP, the basic concept is not an event but a constraint. This paper shows how STAMP can be applied to accident analysis using three different views or models of the accident process and proposes a notation for describing this process.

  3. Comparative analysis of used car price evaluation models

    NASA Astrophysics Data System (ADS)

    Chen, Chuancan; Hao, Lulu; Xu, Cong

    2017-05-01

An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to used car price prediction in several articles, but little has been studied on comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for a thorough empirical comparison of two algorithms: linear regression and random forest. The two algorithms are used to predict used car prices with three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
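
    The linear-regression-versus-random-forest comparison is easy to replicate on synthetic data; the features, depreciation curve, and noise below are invented (the paper's dealing records are not public), but the qualitative outcome, with the forest winning once the price surface is nonlinear, is the same.

      import numpy as np
      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n = 5000
      df = pd.DataFrame({
          'age_years': rng.uniform(0, 12, n),
          'mileage':   rng.uniform(0, 200_000, n),
          'series':    rng.integers(0, 30, n),           # car-series code
      })
      df['price'] = (200_000 * 0.85**df.age_years        # nonlinear depreciation
                     - 0.2*df.mileage + 3_000*np.sin(df.series)
                     + rng.normal(0, 5_000, n))

      X, y = df[['age_years', 'mileage', 'series']], df['price']
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      for m in (LinearRegression(), RandomForestRegressor(random_state=0)):
          print(type(m).__name__, round(m.fit(Xtr, ytr).score(Xte, yte), 3))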

  4. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2001-10-25

Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique [18,5,17,6]. We have proposed and evaluated a multiresolutional method with an explicit ventilation model based on pyramid images for ventilation analysis. We have further extended the method for ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for

  5. Brazilian Soybean Production: Emergy Analysis with an Expanded Scope

    ERIC Educational Resources Information Center

    Ortega, Enrique; Cavalett, Otavio; Bonifacio, Robert; Watanabe, Marcos

    2005-01-01

    This article offers the results of emergy analysis used to evaluate four different soybean production systems in Brazil that were divided into two main categories: biological models (organic and ecological farms) and industrial models (green-revolution chemical farms and herbicide with no-tillage farms). The biological models show better…

  6. Experimental testing and constitutive modeling of the mechanical properties of the swine skin tissue.

    PubMed

    Łagan, Sylwia D; Liber-Kneć, Aneta

    2017-01-01

The aim of the study was to assess the suitability of hyperelastic material models for fitting experimental data obtained in tensile tests of swine skin tissue. Uniaxial tensile tests of samples taken from the abdomen and back of a pig were carried out. Mechanical properties of the skin such as the mean Young's modulus, the mean maximum stress and the mean maximum elongation were calculated. The experimental data were used to identify the parameters in the specific strain-energy functions of seven constitutive models of hyperelastic materials: neo-Hookean, Mooney-Rivlin, Ogden, Yeoh, Martins, Humphrey and Veronda-Westmann. An analysis of errors in fitting the theoretical to the experimental data was performed. Comparison of load-displacement curves for skin taken from the back and abdomen regions showed a different range of both the mean maximum loading force and the mean maximum elongation. Samples prepared from the abdominal area had lower values of the mean maximum load than samples from the spine area; the reverse trend was observed for elongation. The analysis of model fitting accuracy showed that the least accurate were the neo-Hookean model, the Mooney-Rivlin model for the abdominal region, and the Veronda-Westmann model for the spine region. The analysis of the seven hyperelastic material models showed good correlation between the experimental and theoretical data for five models.
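
    Parameter identification for the simplest of the listed models can be sketched with scipy: for an incompressible neo-Hookean solid under uniaxial stretch lambda, the nominal stress is P = 2*C1*(lambda - lambda**-2). The data points below are invented and only mimic the J-shaped response of skin.

      import numpy as np
      from scipy.optimize import curve_fit

      stretch = np.array([1.00, 1.05, 1.10, 1.15, 1.20, 1.25, 1.30])
      stress  = np.array([0.00, 0.08, 0.19, 0.34, 0.55, 0.86, 1.30])  # MPa

      def neo_hookean(lam, c1):
          return 2.0 * c1 * (lam - lam**-2)

      (c1_hat,), _ = curve_fit(neo_hookean, stretch, stress, p0=[0.5])
      resid = stress - neo_hookean(stretch, c1_hat)
      print(f"C1 = {c1_hat:.3f} MPa, RMS error = {np.sqrt(np.mean(resid**2)):.3f} MPa")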

  7. Development and analysis of a twelfth degree and order gravity model for Mars

    NASA Technical Reports Server (NTRS)

    Christensen, E. J.; Balmino, G.

    1979-01-01

    Satellite geodesy techniques previously applied to artificial earth satellites have been extended to obtain a high-resolution gravity field for Mars. Two-way Doppler data collected by 10 Deep Space Network (DSN) stations during Mariner 9 and Viking 1 and 2 missions have been processed to obtain a twelfth degree and order spherical harmonic model for the martian gravitational potential. The quality of this model was evaluated by examining the rms residuals within the fit and the ability of the model to predict the spacecraft state beyond the fit. Both indicators show that more data and higher degree and order harmonics will be required to further refine our knowledge of the martian gravity field. The model presented shows much promise, since it resolves local gravity features which correlate highly with the martian topography. An isostatic analysis based on this model, as well as an error analysis, shows rather complete compensation on a global (long wavelength) scale. Though further model refinements are necessary to be certain, local (short wavelength) features such as the shield volcanos in Tharsis appear to be uncompensated. These are interpreted to place some bounds on the internal structure of Mars.
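
    For reference, a "twelfth degree and order" field refers to truncating the standard spherical harmonic expansion of the gravitational potential at n = m = 12 (generic textbook form; normalization conventions vary):

      U(r, \phi, \lambda) = \frac{GM}{r}\left[1 + \sum_{n=2}^{12}\sum_{m=0}^{n}
          \left(\frac{R}{r}\right)^{n} \bar{P}_{nm}(\sin\phi)
          \left(\bar{C}_{nm}\cos m\lambda + \bar{S}_{nm}\sin m\lambda\right)\right]

    where R is the reference (Mars) radius, the P̄_nm are normalized associated Legendre functions, and C̄_nm, S̄_nm are the estimated harmonic coefficients.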

  8. Failure analysis and modeling of a multicomputer system. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Subramani, Sujatha Srinivasan

    1990-01-01

This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
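
    The k-out-of-n comparison has a simple closed form when machines are treated as independent with a common availability a; the sketch below uses a hypothetical a rather than the measured VaxCluster rates, so the numbers are purely illustrative.

      from math import comb

      def k_of_n(k, n, a):
          """P(at least k of n i.i.d. machines are up)."""
          return sum(comb(n, i) * a**i * (1-a)**(n-i) for i in range(k, n+1))

      a = 0.95   # hypothetical single-machine availability
      for k in range(7, 2, -1):          # 7-of-7 down to 3-of-7
          print(f"{k}-out-of-7: {k_of_n(k, 7, a):.6f}")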

  9. Comparing the Fit of Item Response Theory and Factor Analysis Models

    ERIC Educational Resources Information Center

    Maydeu-Olivares, Alberto; Cai, Li; Hernandez, Adolfo

    2011-01-01

    Linear factor analysis (FA) models can be reliably tested using test statistics based on residual covariances. We show that the same statistics can be used to reliably test the fit of item response theory (IRT) models for ordinal data (under some conditions). Hence, the fit of an FA model and of an IRT model to the same data set can now be…

  10. A Comprehensive Availability Modeling and Analysis of a Virtualized Servers System Using Stochastic Reward Nets

    PubMed Central

    Kim, Dong Seong; Park, Jong Sou

    2014-01-01

It is important to assess availability of virtualized systems in IT business infrastructures. Previous work on availability modeling and analysis of virtualized systems used a simplified configuration and assumption in which only one virtual machine (VM) runs on a virtual machine monitor (VMM) hosted on a physical server. In this paper, we show a comprehensive availability model using stochastic reward nets (SRN). The model takes into account (i) the detailed failure and recovery behaviors of multiple VMs, (ii) various other failure modes and corresponding recovery behaviors (e.g., hardware faults, failure and recovery due to Mandelbugs and aging-related bugs), and (iii) dependency between different subcomponents (e.g., between physical host failure and VMM) in a virtualized servers system. We also show numerical analyses of steady-state availability, downtime in hours per year, transaction loss, and sensitivity. This model provides a new finding on how to increase system availability by wisely combining software rejuvenation at both the VM and VMM levels. PMID:25165732

  11. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.

  12. Continuum modeling of three-dimensional truss-like space structures

    NASA Technical Reports Server (NTRS)

    Nayfeh, A. H.; Hefzy, M. S.

    1978-01-01

    A mathematical and computational analysis capability has been developed for calculating the effective mechanical properties of three-dimensional periodic truss-like structures. Two models are studied in detail. The first, called the octetruss model, is a three-dimensional extension of a two-dimensional model, and the second is a cubic model. Symmetry considerations are employed as a first step to show that the specific octetruss model has four independent constants and that the cubic model has two. The actual values of these constants are determined by averaging the contributions of each rod element to the overall structure stiffness. The individual rod member contribution to the overall stiffness is obtained by a three-dimensional coordinate transformation. The analysis shows that the effective three-dimensional elastic properties of both models are relatively close to each other.

  13. Plant interspecies competition for sunlight: a mathematical model of canopy partitioning.

    PubMed

    Nevai, Andrew L; Vance, Richard R

    2007-07-01

    We examine the influence of canopy partitioning on the outcome of competition between two plant species that interact only by mutually shading each other. This analysis is based on a Kolmogorov-type canopy partitioning model for plant species with clonal growth form and fixed vertical leaf profiles (Vance and Nevai in J. Theor. Biol., 2007, to appear). We show that canopy partitioning is necessary for the stable coexistence of the two competing plant species. We also use implicit methods to show that, under certain conditions, the species' nullclines can intersect at most once. We use nullcline endpoint analysis to show that when the nullclines do intersect, and in such a way that they cross, then the resulting equilibrium point is always stable. We also construct surfaces that divide parameter space into regions within which the various outcomes of competition occur, and then study parameter dependence in the locations of these surfaces. The analysis presented here and in a companion paper (Nevai and Vance, The role of leaf height in plant competition for sunlight: analysis of a canopy partitioning model, in review) together shows that canopy partitioning is both necessary and, under appropriate parameter values, sufficient for the stable coexistence of two hypothetical plant species whose structure and growth are described by our model.

  14. Stability Analysis of the Slowed-Rotor Compound Helicopter Configuration

    NASA Technical Reports Server (NTRS)

    Floros, Matthew W.; Johnson, Wayne

    2007-01-01

    The stability and control of rotors at high advance ratio are considered. Teetering, articulated, gimbaled, and rigid hub types are considered for a compound helicopter (rotor and fixed wing). Stability predictions obtained using an analytical rigid flapping blade analysis, a rigid blade CAMRAD II model, and an elastic blade CAMRAD II model are compared. For the flapping blade analysis, the teetering rotor is the most stable, showing no instabilities up to an advance ratio of 3 and a Lock number of 18. A notional elastic blade model of a teetering rotor is unstable at an advance ratio of 1.5, independent of pitch frequency. Analysis of the trim controls and blade flapping shows that for small positive collective pitch, trim can be maintained without excessive control input or flapping angles.

  15. Social comparison and perceived breach of psychological contract: their effects on burnout in a multigroup analysis.

    PubMed

    Cantisano, Gabriela Topa; Domínguez, J Francisco Morales; García, J Luis Caeiro

    2007-05-01

    This study focuses on the mediator role of social comparison in the relationship between perceived breach of psychological contract and burnout. A previous model showing the hypothesized effects of perceived breach on burnout, both direct and mediated, is proposed. The final model reached an optimal fit to the data and was confirmed through multigroup analysis using a sample of Spanish teachers (N = 401) belonging to preprimary, primary, and secondary schools. Multigroup analyses showed that the model fit all groups adequately.

  16. Modeling Opponents in Adversarial Risk Analysis.

    PubMed

    Rios Insua, David; Banks, David; Rios, Jesus

    2016-04-01

    Adversarial risk analysis has been introduced as a framework to deal with risks derived from intentional actions of adversaries. The analysis supports one of the decisionmakers, who must forecast the actions of the other agents. Typically, this forecast must take account of random consequences resulting from the set of selected actions. The solution requires one to model the behavior of the opponents, which entails strategic thinking. The supported agent may face different kinds of opponents, who may use different rationality paradigms, for example, the opponent may behave randomly, or seek a Nash equilibrium, or perform level-k thinking, or use mirroring, or employ prospect theory, among many other possibilities. We describe the appropriate analysis for these situations, and also show how to model the uncertainty about the rationality paradigm used by the opponent through a Bayesian model averaging approach, enabling a fully decision-theoretic solution. We also show how as we observe an opponent's decision behavior, this approach allows learning about the validity of each of the rationality models used to predict his decision by computing the models' (posterior) probabilities, which can be understood as a measure of their validity. We focus on simultaneous decision making by two agents. © 2015 Society for Risk Analysis.
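
    The Bayesian model averaging step has a compact form: each candidate rationality model M supplies a predictive distribution over the opponent's actions, and observed actions update p(M | data) by Bayes' rule. The two candidate models below are placeholders (a uniform "random" opponent versus a sharper, best-response-like one), not the paper's specifications.

      import numpy as np

      models = {
          'random':    np.array([1/3, 1/3, 1/3]),
          'strategic': np.array([0.1, 0.2, 0.7]),
      }
      post = {m: 0.5 for m in models}          # prior over rationality models

      for a in [2, 2, 1, 2]:                   # observed opponent action indices
          for m, p in models.items():
              post[m] *= p[a]                  # likelihood of the observation
          z = sum(post.values())
          post = {m: v / z for m, v in post.items()}

      predictive = sum(post[m] * models[m] for m in models)   # BMA forecast
      print(post, predictive.round(3))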

  17. AHP-based spatial analysis of water quality impact assessment due to change in vehicular traffic caused by highway broadening in Sikkim Himalaya

    NASA Astrophysics Data System (ADS)

    Banerjee, Polash; Ghose, Mrinal Kanti; Pradhan, Ratika

    2018-05-01

Spatial analysis of the water quality impact of highway projects in mountainous areas remains largely unexplored. A methodology is presented here for Spatial Water Quality Impact Assessment (SWQIA) due to highway-broadening-induced vehicular traffic change in the East district of Sikkim. The pollution load of the highway runoff was estimated using an Average Annual Daily Traffic-based empirical model in combination with a mass balance model to predict pollution in the rivers within the study area. Spatial interpolation and overlay analysis were used for impact mapping. An Analytic Hierarchy Process-based Water Quality Status Index was used to prepare a composite impact map. Model validation criteria, cross-validation criteria, and a spatially explicit sensitivity analysis show that the SWQIA model is robust. The study shows that vehicular traffic is a significant contributor to water pollution in the study area. The model caters specifically to the impact analysis of the project concerned and can serve as a decision-support aid for the project's stakeholders. The applicability of the SWQIA model needs to be explored and validated in the context of a larger set of water quality parameters and project scenarios at a greater spatial scale.
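
    The AHP step referred to above boils down to extracting priority weights from a pairwise-comparison matrix via its principal eigenvector and checking consistency; the 3x3 matrix below (say, three water-quality parameters) is invented.

      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])           # Saaty-scale pairwise judgments

      vals, vecs = np.linalg.eig(A)
      k = np.argmax(vals.real)
      w = np.abs(vecs[:, k].real)
      w /= w.sum()                              # priority weights

      n = A.shape[0]
      ci = (vals.real[k] - n) / (n - 1)         # consistency index
      cr = ci / 0.58                            # random index RI = 0.58 for n = 3
      print(w.round(3), f"CR = {cr:.3f}")       # CR < 0.1 => acceptably consistent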

  18. Using Toulmin analysis to analyse an instructor's proof presentation in abstract algebra

    NASA Astrophysics Data System (ADS)

Fukawa-Connelly, Timothy

    2014-01-01

    This paper provides a method for analysing undergraduate teaching of proof-based courses using Toulmin's model (1969) of argumentation. It presents a case study of one instructor's presentation of proofs. The analysis shows that the instructor presents different levels of detail in different proofs; thus, the students have an inconsistent set of written models for their work. Similarly, the analysis shows that the details the instructor says aloud differ from what she writes down. Although her verbal commentary provides additional detail and appears to have pedagogical value, for instance, by modelling thinking that supports proof writing, this value might be better realized if she were to change her teaching practices.

  19. Association between glutathione S-transferase P1 Ile (105) Val gene polymorphism and chronic obstructive pulmonary disease: A meta-analysis based on seventeen case-control studies.

    PubMed

    Yang, Lingjing; Li, Xixia; Tong, Xiang; Fan, Hong

    2015-12-01

Previous studies have shown that glutathione S-transferase P1 (GSTP1) is associated with chronic obstructive pulmonary disease (COPD). However, the association between the GSTP1 Ile (105) Val gene polymorphism and COPD remains controversial. To derive a more precise estimate, we performed a meta-analysis based on published case-control studies. An electronic search of PubMed, EMBASE, the Cochrane library, Web of Science and the China Knowledge Resource Integrated (CNKI) Database for papers on the GSTP1 Ile (105) Val gene polymorphism and COPD risk was performed. Pooled odds ratios (ORs) with 95% confidence intervals (CIs) were used to assess the strength of association in the homozygote model, heterozygote model, dominant model, recessive model and an additive model. Tests of statistical heterogeneity and publication bias and a sensitivity analysis were performed. The software STATA (Version 13.0) was used for data analysis. Overall, seventeen studies with 1892 cases and 2012 controls were included in this meta-analysis. The GSTP1 Ile (105) Val polymorphism showed pooled odds ratios for the homozygote comparison (OR = 1.501, 95%CI [0.862, 2.614]), heterozygote comparison (OR = 0.924, 95%CI [0.733, 1.165]), dominant model (OR = 1.003, 95%CI [0.756, 1.331]), recessive model (OR = 1.510, 95%CI [0.934, 2.439]), and an additive model (OR = 1.072, 95%CI [0.822, 1.398]). In conclusion, the current meta-analysis, based on the most updated information, showed no significant association between the GSTP1 Ile (105) Val gene polymorphism and COPD risk in any genetic model. The results of subgroup analysis also showed no significant association between the GSTP1 Ile (105) Val gene polymorphism and COPD risk in the Asian population or the Caucasian population. Further studies involving large populations and careful control for age, sex, ethnicity, and cigarette smoking are greatly needed.
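
    Inverse-variance pooling of odds ratios, the core computation behind the figures above, fits in a few lines; the three (OR, CI) triples below are invented study results, not the seventeen GSTP1 studies.

      import numpy as np

      or_ci = [(1.4, 0.9, 2.2), (0.8, 0.5, 1.3), (1.1, 0.7, 1.7)]  # OR, lo, hi
      log_or = np.log([r[0] for r in or_ci])
      se = (np.log([r[2] for r in or_ci])
            - np.log([r[1] for r in or_ci])) / (2 * 1.96)
      w = 1 / se**2                                 # fixed-effect weights

      pooled = np.sum(w * log_or) / np.sum(w)
      se_p = np.sqrt(1 / np.sum(w))
      lo, hi = pooled - 1.96*se_p, pooled + 1.96*se_p
      print(f"pooled OR = {np.exp(pooled):.3f} "
            f"(95% CI {np.exp(lo):.3f}-{np.exp(hi):.3f})")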

  20. Association between glutathione S-transferase P1 Ile (105) Val gene polymorphism and chronic obstructive pulmonary disease: A meta-analysis based on seventeen case–control studies

    PubMed Central

    Yang, Lingjing; Li, Xixia; Tong, Xiang; Fan, Hong

    2015-01-01

Introduction Previous studies have shown that glutathione S-transferase P1 (GSTP1) is associated with chronic obstructive pulmonary disease (COPD). However, the association between the GSTP1 Ile (105) Val gene polymorphism and COPD remains controversial. To derive a more precise estimate, we performed a meta-analysis based on published case–control studies. Methods An electronic search of PubMed, EMBASE, the Cochrane library, Web of Science and the China Knowledge Resource Integrated (CNKI) Database for papers on the GSTP1 Ile (105) Val gene polymorphism and COPD risk was performed. Pooled odds ratios (ORs) with 95% confidence intervals (CIs) were used to assess the strength of association in the homozygote model, heterozygote model, dominant model, recessive model and an additive model. Tests of statistical heterogeneity and publication bias and a sensitivity analysis were performed. The software STATA (Version 13.0) was used for data analysis. Results Overall, seventeen studies with 1892 cases and 2012 controls were included in this meta-analysis. The GSTP1 Ile (105) Val polymorphism showed pooled odds ratios for the homozygote comparison (OR = 1.501, 95%CI [0.862, 2.614]), heterozygote comparison (OR = 0.924, 95%CI [0.733, 1.165]), dominant model (OR = 1.003, 95%CI [0.756, 1.331]), recessive model (OR = 1.510, 95%CI [0.934, 2.439]), and an additive model (OR = 1.072, 95%CI [0.822, 1.398]). Conclusions In conclusion, the current meta-analysis, based on the most updated information, showed no significant association between the GSTP1 Ile (105) Val gene polymorphism and COPD risk in any genetic model. The results of subgroup analysis also showed no significant association between the GSTP1 Ile (105) Val gene polymorphism and COPD risk in the Asian population or the Caucasian population. Further studies involving large populations and careful control for age, sex, ethnicity, and cigarette smoking are greatly needed. PMID:26504746

  1. Integrating Household Risk Mitigation Behavior in Flood Risk Analysis: An Agent-Based Model Approach.

    PubMed

    Haer, Toon; Botzen, W J Wouter; de Moel, Hans; Aerts, Jeroen C J H

    2017-10-01

    Recent studies showed that climate change and socioeconomic trends are expected to increase flood risks in many regions. However, in these studies, human behavior is commonly assumed to be constant, which neglects interaction and feedback loops between human and environmental systems. This neglect of human adaptation leads to a misrepresentation of flood risk. This article presents an agent-based model that incorporates human decision making in flood risk analysis. In particular, household investments in loss-reducing measures are examined under three economic decision models: (1) expected utility theory, which is the traditional economic model of rational agents; (2) prospect theory, which takes account of bounded rationality; and (3) a prospect theory model, which accounts for changing risk perceptions and social interactions through a process of Bayesian updating. We show that neglecting human behavior in flood risk assessment studies can result in a considerable misestimation of future flood risk, which is in our case study an overestimation of a factor two. Furthermore, we show how behavior models can support flood risk analysis under different behavioral assumptions, illustrating the need to include the dynamic adaptive human behavior of, for instance, households, insurers, and governments. The method presented here provides a solid basis for exploring human behavior and the resulting flood risk with respect to low-probability/high-impact risks. © 2016 The Authors Risk Analysis published by Wiley Periodicals, Inc. on behalf of Society for Risk Analysis.
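
    The contrast between the expected-utility and prospect-theory agents can be sketched with the standard Tversky-Kahneman value and weighting functions; the flood probability, damages, measure cost, and parameter values below are illustrative, not the article's calibration.

      alpha, lam, gamma = 0.88, 2.25, 0.69   # curvature, loss aversion, weighting

      def value(x):                          # losses loom larger than gains
          return x**alpha if x >= 0 else -lam * (-x)**alpha

      def w(p):                              # small probabilities are overweighted
          return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

      p_flood, damage, cost, reduction = 0.01, -50_000, -2_000, 0.6

      do_nothing = w(p_flood) * value(damage)
      mitigate   = value(cost) + w(p_flood) * value(damage * (1 - reduction))
      print(f"do nothing: {do_nothing:,.0f}   mitigate: {mitigate:,.0f}")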

  2. A study of the extended-range forecasting problem blocking

    NASA Technical Reports Server (NTRS)

    Chen, T. C.; Marshall, H. G.; Shukla, J.

    1981-01-01

Wavenumber-frequency spectral analysis of a 90-day winter (Jan. 15 - April 14) wind field simulated in a climate experiment with the GLAS atmospheric circulation model is made using space-time Fourier analysis, modified with Tukey's numerical spectral analysis. Computations are also made to examine how the model's wave disturbances in the wavenumber-frequency domain are maintained by nonlinear interactions. Results are compared with observation. It is found that equatorial easterlies do not show up in this climate experiment at 200 mb. The zonal kinetic energy and momentum transport of stationary waves are too small in the model's Northern Hemisphere. The wavenumber and frequency spectra of the model are generally in good agreement with observation. However, some distinct features of the model's spectra are revealed. The wavenumber spectra of kinetic energy show that the eastward-moving waves of low wavenumbers have stronger zonal motion, while the eastward-moving waves of intermediate wavenumbers have larger meridional motion compared with observation. Furthermore, the eastward-moving waves show a band of large spectral values in the medium-frequency regime.

  3. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial order reduction. At each step of this iterative process, the static analysis computes optimistic information which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial order information is safe and the whole state space is explored.

  4. Spatial Dependence and Heterogeneity in Bayesian Factor Analysis: A Cross-National Investigation of Schwartz Values

    ERIC Educational Resources Information Center

    Stakhovych, Stanislav; Bijmolt, Tammo H. A.; Wedel, Michel

    2012-01-01

    In this article, we present a Bayesian spatial factor analysis model. We extend previous work on confirmatory factor analysis by including geographically distributed latent variables and accounting for heterogeneity and spatial autocorrelation. The simulation study shows excellent recovery of the model parameters and demonstrates the consequences…

  5. Using structural equation modeling for network meta-analysis.

    PubMed

    Tu, Yu-Kang; Wu, Yun-Chun

    2017-07-14

Network meta-analysis overcomes the limitations of traditional pairwise meta-analysis by incorporating all available evidence into a general statistical framework for simultaneous comparisons of several treatments. Currently, network meta-analyses are undertaken either within Bayesian hierarchical linear models or within frequentist generalized linear mixed models. Structural equation modeling (SEM) is a statistical method originally developed for modeling causal relations among observed and latent variables. As the random effect is explicitly modeled as a latent variable in SEM, it is very flexible for analysts to specify complex random effect structures and to place linear and nonlinear constraints on parameters. The aim of this article is to show how to undertake a network meta-analysis within the statistical framework of SEM. We used an example dataset to demonstrate that the standard fixed and random effect network meta-analysis models can be easily implemented in SEM. It contains results of 26 studies that directly compared three treatment groups A, B and C for prevention of first bleeding in patients with liver cirrhosis. We also showed that a new approach to network meta-analysis based on the unrestricted weighted least squares (UWLS) technique can be undertaken using SEM. For both the fixed and random effect network meta-analysis, SEM yielded similar coefficients and confidence intervals to those reported in the previous literature. The point estimates of the two UWLS models were identical to those in the fixed effect model but the confidence intervals were greater. This is consistent with results from the traditional pairwise meta-analyses. Compared with the UWLS model with a common variance adjustment factor, the UWLS model with a unique variance adjustment factor has greater confidence intervals when the heterogeneity is larger in the pairwise comparison; that is, it reflects the difference in heterogeneity within each comparison. SEM provides a very flexible framework for univariate and multivariate meta-analysis, and its potential as a powerful tool for advanced meta-analysis is still to be explored.

  6. Rich analysis and rational models: Inferring individual behavior from infant looking data

    PubMed Central

    Piantadosi, Steven T.; Kidd, Celeste; Aslin, Richard

    2013-01-01

    Studies of infant looking times over the past 50 years have provided profound insights about cognitive development, but their dependent measures and analytic techniques are quite limited. In the context of infants' attention to discrete sequential events, we show how a Bayesian data analysis approach can be combined with a rational cognitive model to create a rich data analysis framework for infant looking times. We formalize (i) a statistical learning model (ii) a parametric linking between the learning model's beliefs and infants' looking behavior, and (iii) a data analysis model that infers parameters of the cognitive model and linking function for groups and individuals. Using this approach, we show that recent findings from Kidd, Piantadosi, and Aslin (2012) of a U-shaped relationship between look-away probability and stimulus complexity even holds within infants and is not due to averaging subjects with different types of behavior. Our results indicate that individual infants prefer stimuli of intermediate complexity, reserving attention for events that are moderately predictable given their probabilistic expectations about the world. PMID:24750256

  7. Rich analysis and rational models: inferring individual behavior from infant looking data.

    PubMed

    Piantadosi, Steven T; Kidd, Celeste; Aslin, Richard

    2014-05-01

Studies of infant looking times over the past 50 years have provided profound insights about cognitive development, but their dependent measures and analytic techniques are quite limited. In the context of infants' attention to discrete sequential events, we show how a Bayesian data analysis approach can be combined with a rational cognitive model to create a rich data analysis framework for infant looking times. We formalize (i) a statistical learning model, (ii) a parametric linking between the learning model's beliefs and infants' looking behavior, and (iii) a data analysis approach and model that infers parameters of the cognitive model and linking function for groups and individuals. Using this approach, we show that recent findings from Kidd, Piantadosi and Aslin (2012) of a U-shaped relationship between look-away probability and stimulus complexity even holds within infants and is not due to averaging subjects with different types of behavior. Our results indicate that individual infants prefer stimuli of intermediate complexity, reserving attention for events that are moderately predictable given their probabilistic expectations about the world. © 2014 John Wiley & Sons Ltd.

  8. Tropospheric ozone in the western Pacific Rim: Analysis of satellite and surface-based observations along with comprehensive 3-D model simulations

    NASA Technical Reports Server (NTRS)

    Young, Sun-Woo; Carmichael, Gregory R.

    1994-01-01

    Tropospheric ozone production and transport in mid-latitude eastern Asia is studied. Data analysis of surface-based ozone measurements in Japan and satellite-based tropospheric column measurements of the entire western Pacific Rim are combined with results from three-dimensional model simulations to investigate the diurnal, seasonal and long-term variations of ozone in this region. Surface ozone measurements from Japan show distinct seasonal variation with a spring peak and summer minimum. Satellite studies of the entire tropospheric column of ozone show high concentrations in both the spring and summer seasons. Finally, preliminary model simulation studies show good agreement with observed values.

  9. [Establishment of a 3D finite element model of human skull using MSCT images and mimics software].

    PubMed

    Huang, Ping; Li, Zheng-dong; Shao, Yu; Zou, Dong-hua; Liu, Ning-guo; Li, Li; Chen, Yuan-yuan; Wan, Lei; Chen, Yi-jiu

    2011-02-01

    To establish a 3D finite element model of the human skull and to explore its value in biomechanical analysis. A cadaveric head was scanned and a 3D skull model was created in Mimics software from the 2D axial CT images. The model was then optimized in a preprocessor, with creation of surface and volume meshes. The stress changes after the head was struck by an object, or after the head hit the ground directly, were analyzed using ANSYS software. The original 3D skull model contained a large number of poor-quality triangles but closely resembled the real head, while the optimized model had high-quality surface and volume meshes with comparatively few triangles. The model could show local and global stress changes effectively. A human 3D skull model can thus be established using MSCT and Mimics software and provides a good finite element model for biomechanical analysis. It may also provide a basis for studying head stress changes under different forces.

  10. Effect of High-Frequency Transcranial Magnetic Stimulation on Craving in Substance Use Disorder: A Meta-Analysis.

    PubMed

    Maiti, Rituparna; Mishra, Biswa Ranjan; Hota, Debasish

    2017-01-01

    Repetitive transcranial magnetic stimulation (rTMS), a noninvasive neuromodulatory tool, has been used to reduce craving in different substance use disorders. Because studies have reported conflicting and inconclusive results, this meta-analysis was conducted to evaluate the effect of high-frequency rTMS on craving in substance use disorder and to investigate the reasons behind the inconsistency across studies. The authors searched for clinical trials in MEDLINE, the Cochrane databases, and the International Clinical Trials Registry Platform. The PRISMA guidelines, as well as recommended meta-analysis practices, were followed in the selection process, analysis, and reporting of the findings. The effect estimate used was the standardized mean difference (Hedges' g), and heterogeneity across the considered studies was explored using subgroup analyses. Quality assessment was done using the Cochrane risk of bias tool, and sensitivity analysis was performed to check the influence of the statistical models on the effect size. After screening and assessment of eligibility, 10 studies were finally included in the meta-analysis: six on alcohol and four on nicotine use disorder. The random-effects analysis revealed a pooled effect size of 0.75 (95% CI=0.29 to 1.21, p=0.001), whereas the fixed-effect analysis showed a large effect size of 0.87 (95% CI=0.63 to 1.12, p<0.00001). Subgroup analysis for alcohol use disorder showed an effect size of -0.06 (95% CI=-0.89 to 0.77, p=0.88). For nicotine use disorder, the random-effects analysis revealed an effect size of 1.00 (95% CI=0.48 to 1.55, p=0.0001), whereas the fixed-effect analysis also showed a large effect size of 0.96 (95% CI=0.71 to 1.22). The present meta-analysis identified a beneficial effect of high-frequency rTMS on craving associated with nicotine use disorder but not alcohol use disorder.
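
    The effect estimate used here, Hedges' g, is Cohen's d with a small-sample correction and can be computed directly from group summary statistics. A minimal sketch with made-up craving scores (not data from the included trials):

    ```python
    import math

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Standardized mean difference with Hedges' small-sample correction."""
        df = n1 + n2 - 2
        s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
        d = (m1 - m2) / s_pooled            # Cohen's d
        j = 1 - 3 / (4 * df - 1)            # Hedges' correction factor
        return j * d

    # hypothetical post-treatment craving scores: sham vs active rTMS
    print(hedges_g(m1=5.0, s1=1.4, n1=20, m2=4.1, s2=1.2, n2=20))
    ```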

  11. Reliability of four models for clinical gait analysis.

    PubMed

    Kainz, Hans; Graham, David; Edwards, Julie; Walsh, Henry P J; Maine, Sheanna; Boyd, Roslyn N; Lloyd, David G; Modenese, Luca; Carty, Christopher P

    2017-05-01

    Three-dimensional gait analysis (3DGA) has become a common clinical tool for treatment planning in children with cerebral palsy (CP). Many clinical gait laboratories use a conventional gait analysis model (e.g. the Plug-in-Gait model), which uses Direct Kinematics (DK) for joint kinematic calculations, whereas musculoskeletal models, mainly used for research, use Inverse Kinematics (IK). Musculoskeletal IK models have the advantage of enabling additional analyses which might improve clinical decision-making in children with CP. Before any new model can be used in a clinical setting, its reliability has to be evaluated and compared to a commonly used clinical gait model (e.g. the Plug-in-Gait model), which was the purpose of this study. Two testers performed 3DGA in eleven CP and seven typically developing participants on two occasions. Intra- and inter-tester standard deviations (SD) and standard error of measurement (SEM) were used to compare the reliability of two DK models (Plug-in-Gait and a six degrees-of-freedom model solved using Vicon software) and two IK models (two modifications of 'gait2392' solved using OpenSim). All models showed good reliability (mean SEM of 3.0° over all analysed models and joint angles). Variations in joint kinetics were smaller in typically developing participants than in participants with CP. The modified 'gait2392' model, which included all the joint rotations commonly reported in clinical 3DGA, showed reasonably reliable joint kinematic and kinetic estimates, and allows additional musculoskeletal analysis of surgically adjustable parameters, e.g. muscle-tendon lengths; it is therefore a suitable model for clinical gait analysis. Copyright © 2017. Published by Elsevier B.V.

  12. A Graphical Analysis of the Cournot-Nash and Stackelberg Models.

    ERIC Educational Resources Information Center

    Fulton, Murray

    1997-01-01

    Shows how the Cournot-Nash and Stackelberg equilibria can be represented in the familiar supply-demand graphical framework, allowing a direct comparison with the monopoly, competitive, and industrial organization models. This graphical analysis is used throughout the article. (MJP)
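
    For a linear inverse demand P = a - b(q1 + q2) with constant marginal cost c, the equilibria being compared have familiar closed forms. The sketch below is the generic textbook case (the parameter values are invented, not taken from the article):

    ```python
    def duopoly_outputs(a, b, c):
        """Closed-form outputs under linear demand P = a - b*Q and marginal cost c."""
        m = (a - c) / b                      # competitive industry output
        cournot = (m / 3, m / 3)             # symmetric Cournot-Nash outputs
        stackelberg = (m / 2, m / 4)         # (leader, follower) outputs
        monopoly = m / 2                     # monopoly industry output
        return cournot, stackelberg, monopoly, m

    cournot, stackelberg, monopoly, competitive = duopoly_outputs(a=100, b=1, c=10)
    print("Cournot:", cournot, "Stackelberg:", stackelberg,
          "Monopoly:", monopoly, "Competitive:", competitive)
    ```

    Total output rises from monopoly through Cournot and Stackelberg toward the competitive benchmark, which is exactly the ordering the supply-demand diagram makes visible.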

  13. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    NASA Astrophysics Data System (ADS)

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    Increasing population and population activity to meet the needs of daily life greatly affect the utilization of land resources. Land demand for these activities continues to grow while the availability of land is limited; land use therefore changes, leading to land degradation and the conversion of agricultural land to non-agricultural uses. The objectives of this research are: (1) to determine the parameters for spatial numerical classification of sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance of Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050; (3) to specify the function of spatial numerical classification in building a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine an appropriate model for protecting sustainable agricultural land in space and time in Badung Regency and Denpasar City. The quantitative methods used include a survey, soil analysis, spatial database development, geoprocessing analysis (spatial overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography); qualitative methods consisted of literature studies and interviews. Eleven parameters were observed for Badung Regency and nine for Denpasar City; the urban area (Denpasar) required fewer parameters than the rural area (Badung). The numerical classification analysis, based on parameter weighting and scoring, used the standard deviation and mean of the population data, and the relationship between rice fields and the projected food balance was examined by modelling. Numerical classification produced five models, divided into three zones, sustainable, buffer, and convertible, in Denpasar and Badung. The population curves of the parameters were normal in Denpasar but abnormal in Badung Regency; modelling was therefore carried out for the whole region in Denpasar and district by district in Badung. Modelling in Badung (rural) thus differs from that in Denpasar (urban): in Badung, the role of sustainable land in the food balance was modelled in terms of land area, whereas in Denpasar it was assessed through its linkage to green open space in the Denpasar City spatial plan for 2011-2031.

  14. Impact of model-based risk analysis for liver surgery planning.

    PubMed

    Hansen, C; Zidowitz, S; Preim, B; Stavrou, G; Oldhafer, K J; Hahn, H K

    2014-05-01

    A model-based risk analysis for oncologic liver surgery was described in previous work (Preim et al. in Proceedings of the international symposium on computer assisted radiology and surgery (CARS), Elsevier, Amsterdam, pp. 353–358, 2002; Hansen et al. Int J Comput Assist Radiol Surg 4(5):469–474, 2009). In this paper, we present an evaluation of this method. To examine whether and how the risk analysis facilitates the process of liver surgery planning, an explorative user study with 10 liver experts was conducted, with the purpose of comparing and analyzing their decision-making. The results of the study show that model-based risk analysis enhances the awareness of surgical risk in the planning stage. Participants preferred smaller resection volumes and agreed more on the width of safety margins when the risk analysis was available. In addition, neither the time to complete the planning task nor the confidence of participants increased when the risk analysis was used. This work shows that the applied model-based risk analysis may influence important planning decisions in liver surgery. It lays a basis for further clinical evaluations and points out important fields for future research.

  15. Predicting Air Permeability of Handloom Fabrics: A Comparative Analysis of Regression and Artificial Neural Network Models

    NASA Astrophysics Data System (ADS)

    Mitra, Ashis; Majumdar, Prabal Kumar; Bannerjee, Debamalya

    2013-03-01

    This paper presents a comparative analysis of two modeling methodologies for the prediction of air permeability of plain woven handloom cotton fabrics. Four basic fabric constructional parameters, namely ends per inch, picks per inch, warp count and weft count, were used as inputs for artificial neural network (ANN) and regression models. Of the four regression models tried, the interaction model showed very good prediction performance with a meager mean absolute error of 2.017 %. However, the ANN models demonstrated superiority over the regression models both in terms of correlation coefficient and mean absolute error. The ANN model with 10 nodes in the single hidden layer showed very good correlation coefficients of 0.982 and 0.929 and mean absolute errors of only 0.923 and 2.043 % for the training and testing data, respectively.
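
    The two modeling routes being compared can be sketched with standard tooling. The sketch below uses synthetic data (the study used measured handloom fabric data), so it only illustrates the structure: a regression with interaction terms versus a single-hidden-layer ANN with 10 nodes:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler

    rng = np.random.default_rng(1)
    # four constructional inputs: ends/inch, picks/inch, warp count, weft count
    X = rng.uniform([40, 30, 20, 20], [80, 70, 60, 60], size=(200, 4))
    y = 500 - 2*X[:, 0] - 1.5*X[:, 1] + 0.02*X[:, 0]*X[:, 1] + rng.normal(0, 5, 200)

    # regression with interaction terms (cf. the "interaction model")
    interaction = make_pipeline(PolynomialFeatures(interaction_only=True),
                                LinearRegression())
    # ANN with a single hidden layer of 10 nodes (cf. the reported architecture)
    ann = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                     random_state=0))

    for name, model in [("interaction", interaction), ("ANN", ann)]:
        model.fit(X[:150], y[:150])
        print(name, "R^2 on held-out data:", round(model.score(X[150:], y[150:]), 3))
    ```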

  16. Student Background, School Climate, School Disorder, and Student Achievement: An Empirical Study of New York City's Middle Schools

    ERIC Educational Resources Information Center

    Chen, Greg; Weikart, Lynne A.

    2008-01-01

    This study develops and tests a school disorder and student achievement model based upon the school climate framework. The model was fitted to 212 New York City middle schools using structural equation modeling. The analysis shows that the model fits the data well based upon test statistics and goodness-of-fit indices. The…

  17. Stability and Bifurcation Analysis of a Three-Species Food Chain Model with Delay

    NASA Astrophysics Data System (ADS)

    Pal, Nikhil; Samanta, Sudip; Biswas, Santanu; Alquran, Marwan; Al-Khaled, Kamel; Chattopadhyay, Joydev

    In the present paper, we study the effect of gestation delay on a tri-trophic food chain model with Holling type-II functional response. The essential mathematical features of the proposed model are analyzed with the help of equilibrium analysis, stability analysis, and bifurcation theory. Considering time-delay as the bifurcation parameter, the Hopf-bifurcation analysis is carried out around the coexisting equilibrium. The direction of Hopf-bifurcation and the stability of the bifurcating periodic solutions are determined by applying the normal form theory and center manifold theorem. We observe that if the magnitude of the delay is increased, the system loses stability and shows limit cycle oscillations through Hopf-bifurcation. The system also shows the chaotic dynamics via period-doubling bifurcation for further enhancement of time-delay. Our analytical findings are illustrated through numerical simulations.
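
    For concreteness, a representative tri-trophic food chain with Holling type-II functional responses and a gestation delay τ in the predator growth terms takes the form below. This is a generic form consistent with the description; the symbols, scaling, and placement of the delay are assumptions, not necessarily the paper's exact system:

    $$
    \begin{aligned}
    \dot{x}(t) &= x(1-x) - \frac{a_1 x y}{1+b_1 x},\\
    \dot{y}(t) &= \frac{a_1 x(t-\tau)\, y(t-\tau)}{1+b_1 x(t-\tau)} - \frac{a_2 y z}{1+b_2 y} - d_1 y,\\
    \dot{z}(t) &= \frac{a_2 y(t-\tau)\, z(t-\tau)}{1+b_2 y(t-\tau)} - d_2 z,
    \end{aligned}
    $$

    where x, y, z are the prey, intermediate predator, and top predator densities; the Hopf-bifurcation analysis treats τ as the bifurcation parameter around the coexisting equilibrium of such a system.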

  18. Water quality modeling for urban reach of Yamuna river, India (1999-2009), using QUAL2Kw

    NASA Astrophysics Data System (ADS)

    Sharma, Deepshikha; Kansal, Arun; Pelletier, Greg

    2017-06-01

    The study aimed to characterize and understand the water quality of the river Yamuna in Delhi (India) prior to an efficient restoration plan. Collection of monitoring data, mathematical modeling, and sensitivity and uncertainty analyses were combined using QUAL2Kw, a river water quality model. The model was applied to simulate DO, BOD, total coliform, and total nitrogen at four monitoring stations, namely Palla, Old Delhi Railway Bridge, Nizamuddin, and Okhla, for 10 years (October 1999-June 2009), excluding the monsoon seasons (July-September). The study period was divided into two parts: monthly average data from October 1999-June 2004 (45 months) were used to calibrate the model, and monthly average data from October 2005-June 2009 (45 months) were used to validate it. The R2 for CBODf and TN lie within the ranges of 0.53-0.75 and 0.68-0.83, respectively, showing that the model gives satisfactory results in terms of R2 for CBODf, TN, and TC. Sensitivity analysis showed that the DO, CBODf, TN, and TC predictions are highly sensitive to headwater flow and to point source flow and quality. Uncertainty analysis using Monte Carlo simulation showed that the input data have been simulated in accordance with the prevailing river conditions.

  19. Assessing School Work Culture: A Higher-Order Analysis and Strategy.

    ERIC Educational Resources Information Center

    Johnson, William L.; Johnson, Annabel M.; Zimmerman, Kurt J.

    This paper reviews a work culture productivity model and reports the development of a work culture instrument based on that model. Higher-order principal components analysis was used to assess work culture, and a third-order factor analysis shows how the first-order factors group into higher-order factors. The school work…

  20. A brain-region-based meta-analysis method utilizing the Apriori algorithm.

    PubMed

    Niu, Zhendong; Nie, Yaoxin; Zhou, Qian; Zhu, Linlin; Wei, Jieyao

    2016-05-18

    Brain network connectivity modeling is a crucial method for studying the brain's cognitive functions. Meta-analyses can unearth reliable results from individual studies. Meta-analytic connectivity modeling is a connectivity analysis method based on regions of interest (ROIs), which has shown that meta-analyses can be used to discover brain network connectivity. In this paper, we propose a new meta-analysis method for finding network connectivity models based on the Apriori algorithm, which has the potential to derive brain network connectivity models from activation information in the literature without requiring ROIs. This method first extracts activation information from experimental studies that use cognitive tasks of the same category, and then maps the activation information to the corresponding brain areas using the automated anatomical labeling (AAL) atlas, after which the activation rate of these brain areas is calculated. Finally, using these brain areas, a potential brain network connectivity model is calculated based on the Apriori algorithm. The present study used this method to conduct a mining analysis of the citations in a language review article by Price (Neuroimage 62(2):816-847, 2012). The results showed that the obtained network connectivity model was consistent with that reported by Price. The proposed method is helpful for finding brain network connectivity by mining the co-activation relationships among brain regions. Furthermore, the results of the co-activation relationship analysis can be used as a priori knowledge for a corresponding dynamic causal modeling analysis, possibly achieving a significant dimension-reducing effect and thus increasing the efficiency of the dynamic causal modeling analysis.
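
    The level-wise Apriori search over co-activated regions can be sketched directly. Below is a simplified implementation without candidate pruning; the region names and support threshold are invented for illustration:

    ```python
    from itertools import chain

    def apriori(transactions, min_support):
        """Level-wise search for frequent itemsets (simplified Apriori)."""
        tsets = [frozenset(t) for t in transactions]
        n = len(tsets)
        support = lambda s: sum(s <= t for t in tsets) / n
        # frequent 1-itemsets
        current = {frozenset([i]) for i in chain.from_iterable(tsets)}
        current = {s for s in current if support(s) >= min_support}
        frequent, k = {}, 1
        while current:
            frequent.update({s: support(s) for s in current})
            k += 1
            # candidates of size k from unions of frequent (k-1)-itemsets
            candidates = {a | b for a in current for b in current if len(a | b) == k}
            current = {c for c in candidates if support(c) >= min_support}
        return frequent

    # each "transaction" = regions reported active in one study (hypothetical)
    studies = [{"IFG", "STG", "MTG"}, {"IFG", "STG"}, {"IFG", "MTG"},
               {"STG", "MTG", "SMA"}]
    for itemset, s in sorted(apriori(studies, 0.5).items(), key=lambda kv: -kv[1]):
        print(set(itemset), round(s, 2))
    ```

    Frequent itemsets of size two or more are the candidate co-activation links from which a connectivity model can be assembled.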

  1. [Simulation and data analysis of stereological modeling based on virtual slices].

    PubMed

    Wang, Hao; Shen, Hong; Bai, Xiao-yan

    2008-05-01

    To establish a computer-assisted stereological model simulating the process of slice sectioning and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, with Microsoft Visual Studio as the IDE, to simulate repeated sectioning and to analyze the data derived from the model. The linearity of the fit of the model was evaluated by comparison with the traditional formula. The Win32 software based on this algorithm allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters passed homogeneity and independence tests at very high rates (>94.5% and 92%, respectively). The density, shape and size data of the sections were tested to conform to a normal distribution. The output of the model and that of the image analysis system showed statistical correlation and consistency. The algorithm described here can be used for evaluating the stereological parameters of the structure of tissue slices.

  2. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: Boeing Helicopters airframe finite element modeling

    NASA Technical Reports Server (NTRS)

    Gabel, R.; Lang, P.; Reed, D.

    1993-01-01

    Mathematical models based on the finite element method of structural analysis, as embodied in the NASTRAN computer code, are routinely used by the helicopter industry to calculate airframe static internal loads for sizing structural members. Historically, less reliance has been placed on the vibration predictions based on these models. Beginning in the early 1980s, NASA's Langley Research Center initiated an industry-wide program with the objective of engendering the needed trust in vibration predictions using these models and establishing a body of modeling guides which would enable confident future prediction of airframe vibration as part of the regular design process. Emphasis in this paper is placed on the successful modeling of the Army/Boeing CH-47D, which showed reasonable correlation with test data. A principal finding indicates that improved dynamic analysis requires greater attention to detail, and perhaps a finer mesh, especially in the mass distribution, than the usual stress model. Post-program modeling efforts show improved correlation, placing key modal frequencies in the b/rev range within 4 percent of the test frequencies.

  3. Evaluation of two models for predicting elemental accumulation by arthropods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webster, J.R.; Crossley, D.A. Jr.

    1978-06-15

    Two different models have been proposed for predicting elemental accumulation by arthropods. Parameters of both models can be quantified from radioisotope elimination experiments. Our analysis of the two models shows that both predict identical elemental accumulation for a whole organism, though they differ in the accumulation in body and gut. We quantified both models with experimental data from ¹³⁴Cs and ⁸⁵Sr elimination by crickets. Computer simulations of radioisotope accumulation were then compared with actual accumulation experiments. Neither model showed an exact fit to the experimental data, though both showed the general pattern of elemental accumulation.

  4. A regional scale modeling framework combining biogeochemical model with life cycle and economic analysis for integrated assessment of cropping systems.

    PubMed

    Tabatabaie, Seyed Mohammad Hossein; Bolte, John P; Murthy, Ganti S

    2018-06-01

    The goal of this study was to integrate a crop model, DNDC (DeNitrification-DeComposition), with life cycle assessment (LCA) and economic analysis models using a GIS-based integrated platform, ENVISION. The integrated model enables LCA practitioners to conduct integrated economic analysis and LCA on a regional scale while capturing the variability of soil emissions due to variation in regional factors during the production of crops and biofuel feedstocks. To evaluate the integrated model, the corn-soybean cropping system in Eagle Creek Watershed, Indiana was studied, and the integrated model was used first to model the soil emissions and then to conduct the LCA and economic analysis. The results showed that the variation in soil emissions due to variation in weather is high, causing some locations to be a carbon sink in some years and a source of CO2 in other years. To test the model under different scenarios, two tillage scenarios were defined, (1) conventional tillage (CT) and (2) no tillage (NT), and analyzed with the model. The overall GHG emissions of the corn-soybean cropping system were simulated, and the results showed that the NT scenario produced lower soil GHG emissions than the CT scenario. Moreover, the global warming potential (GWP) of corn ethanol from well to pump varied between 57 and 92 g CO2-eq./MJ, with the GWP under the NT system lower than that of the CT system. The cost break-even point was calculated as $3612.5/ha in a two-year corn-soybean cropping system, and the results showed that under low and medium prices for corn and soybean most of the farms did not meet the break-even point. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Dynamical system analysis of interacting models

    NASA Astrophysics Data System (ADS)

    Carneiro, S.; Borges, H. A.

    2018-01-01

    We perform a dynamical system analysis of a cosmological model with a linear dependence between the vacuum density and the Hubble parameter, with constant-rate creation of dark matter. We show that the de Sitter spacetime is an asymptotically stable critical point, the future limit of any expanding solution. Our analysis also shows that the Minkowski spacetime is an unstable critical point, from which the universe eventually collapses to a singularity. In this way, such a prescription for the vacuum decay not only predicts the correct future de Sitter limit, but also forbids the existence of a stable Minkowski universe. We also study the effect of matter creation on the growth of structures and their peculiar velocities, showing that it lies within the current errors of redshift-space distortion observations.

  6. Nonlinear analysis of AS4/PEEK thermoplastic composite laminate using a one parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1990-01-01

    A one-parameter plasticity model was shown to adequately describe the orthotropic plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The nonlinear stress-strain relations were measured and compared with those predicted by the finite element analysis using the one-parameter elastic-plastic constitutive model. The results show that the one-parameter orthotropic plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  7. Elastic-plastic analysis of AS4/PEEK composite laminate using a one-parameter plasticity model

    NASA Technical Reports Server (NTRS)

    Sun, C. T.; Yoon, K. J.

    1992-01-01

    A one-parameter plasticity model was shown to adequately describe the plastic deformation of AS4/PEEK (APC-2) unidirectional thermoplastic composite. This model was verified further for unidirectional and laminated composite panels with and without a hole. The elastic-plastic stress-strain relations of coupon specimens were measured and compared with those predicted by the finite element analysis using the one-parameter plasticity model. The results show that the one-parameter plasticity model is suitable for the analysis of elastic-plastic deformation of AS4/PEEK composite laminates.

  8. Structural Analysis of the Redesigned Ice/Frost Ramp Bracket

    NASA Technical Reports Server (NTRS)

    Phillips, D. R.; Dawicke, D. S.; Gentz, S. J.; Roberts, P. W.; Raju, I. S.

    2007-01-01

    This paper describes the interim structural analysis of a redesigned Ice/Frost Ramp bracket for the Space Shuttle External Tank (ET). The proposed redesigned bracket consists of mounts for attachment to the ET wall, supports for the electronic/instrument cables and propellant repressurization lines that run along the ET, an upper plate, a lower plate, and complex bolted connections. The eight nominal bolted connections are considered critical in the summarized structural analysis. Each bolted connection contains a bolt, a nut, four washers, and a non-metallic spacer and block that are designed for thermal insulation. A three-dimensional (3D) finite element model of the bracket is developed using solid 10-node tetrahedral elements. The loading provided by the ET Project is used in the analysis. Because of the complexities associated with accurately modeling the bolted connections in the bracket, the analysis is performed using a global/local analysis procedure. The finite element analysis of the bracket identifies one of the eight bolted connections as having high stress concentrations. A local area of the bracket surrounding this bolted connection is extracted from the global model and used as a local model. Within the local model, the various components of the bolted connection are refined, and contact is introduced along the appropriate interfaces determined by the analysts. The deformations from the global model are applied as boundary conditions to the local model. The results from the global/local analysis show that while the stresses in the bolts are well within yield, the spacers fail due to compression. The primary objective of the interim structural analysis is to show concept viability for static thermal testing. The proposed design concept would undergo continued design optimization to address the identified analytical assumptions and concept shortcomings, assuming successful thermal testing.

  9. Molecular docking and 3D-QSAR studies on inhibitors of DNA damage signaling enzyme human PARP-1.

    PubMed

    Fatima, Sabiha; Bathini, Raju; Sivan, Sree Kanth; Manga, Vijjulatha

    2012-08-01

    Poly (ADP-ribose) polymerase-1 (PARP-1) operates in a DNA damage signaling network. Molecular docking and three-dimensional quantitative structure-activity relationship (3D-QSAR) studies were performed on human PARP-1 inhibitors. The docked conformation obtained for each molecule was used as such for the 3D-QSAR analysis. Molecules were divided into a training set and a test set randomly in four different ways, and partial least squares analysis was performed to obtain QSAR models using comparative molecular field analysis (CoMFA) and comparative molecular similarity indices analysis (CoMSIA). The derived models showed good statistical reliability, as is evident from their r², q²(loo) and r²(pred) values. To obtain a consensus on predictive ability across all the models, the average regression coefficient r²(avg) was calculated; the CoMFA and CoMSIA models showed values of 0.930 and 0.936, respectively. Information obtained from the best 3D-QSAR model was applied to the optimization of the lead molecule and the design of novel potential inhibitors.

  10. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters has great significance for an integrated model's construction and application. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small hilly watershed of the Taihu Lake region; the perturbation method was then used to evaluate the sensitivity of the model's simulation results to these parameters. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were generally to less sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive to the nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results for the study area also showed that the sensitivity analysis is practicable for parameter adjustment and that the model is adaptable to hydrological simulation in the hilly region of the Taihu Lake basin, providing a reference for wider application of the model in China.
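
    The perturbation (one-at-a-time) sensitivity measure used here can be sketched generically: each parameter is perturbed by a small fraction and the relative change of the output is compared with the relative change of the parameter. The toy model below is only a stand-in; AnnAGNPS itself would be run externally for each perturbed parameter set:

    ```python
    def perturbation_sensitivity(model, params, delta=0.1):
        """Relative sensitivity index S_i = (dY/Y) / (dP_i/P_i) for each parameter."""
        base = model(params)
        indices = {}
        for name, value in params.items():
            perturbed = dict(params, **{name: value * (1 + delta)})
            indices[name] = ((model(perturbed) - base) / base) / delta
        return indices

    # toy stand-in for a runoff response (illustrative only, not AnnAGNPS)
    def toy_runoff(p):
        return p["CN"] ** 2 * p["K"] / (1 + p["LS"])

    print(perturbation_sensitivity(toy_runoff, {"CN": 70.0, "K": 0.3, "LS": 4.5}))
    ```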

  11. Some dynamical aspects of interacting quintessence model

    NASA Astrophysics Data System (ADS)

    Choudhury, Binayak S.; Mondal, Himadri Shekhar; Chatterjee, Devosmita

    2018-04-01

    In this paper, we consider a particular form of coupling, namely $B=\sigma(\dot{\rho}_m-\dot{\rho}_\varphi)$, in a spatially flat (k=0) Friedmann-Lemaitre-Robertson-Walker (FLRW) space-time. We perform a phase-space analysis of this interacting quintessence (dark energy) and dark matter model for different numerical values of the parameters. We also show the phase-space analysis for the `best-fit Universe' or concordance model. In our analysis, we observe the existence of late-time scaling attractors.

  12. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance, radiative, and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall, the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and the nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at identifying the most sensitive parameters and producing an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol; local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important in controlling the behaviour of the sensible heat and net radiation fluxes, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. Optimization as well as sensitivity experiments for the three periods (dry, wet and mixed) show noticeable differences in parameter sensitivity and parameter convergence, indicating inadequacies in the model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows improved performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.

  13. Intelligent control system for continuous technological process of alkylation

    NASA Astrophysics Data System (ADS)

    Gebel, E. S.; Hakimov, R. A.

    2018-01-01

    The relevance of intelligent control for complex dynamic objects and processes is shown in this paper. A model of a virtual analyzer based on a neural network is proposed. Comparative analysis of mathematical models implemented in MATLAB showed that the most effective model, from the point of view of reproducibility of the result, is the one with seven neurons in the hidden layer, trained using the scaled conjugate gradient method. Comparison of the laboratory analysis data with the theoretical model showed that the root-mean-square error does not exceed 3.5 and that the calculated value of the correlation coefficient corresponds to a "strong" relationship between the values.

  14. Prediction models for clustered data: comparison of a random intercept and standard regression model

    PubMed Central

    2013-01-01

    Background When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Methods Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. Results The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. Conclusion The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters. PMID:23414436

  15. Prediction models for clustered data: comparison of a random intercept and standard regression model.

    PubMed

    Bouwmeester, Walter; Twisk, Jos W R; Kappen, Teus H; van Klei, Wilton A; Moons, Karel G M; Vergouwe, Yvonne

    2013-02-15

    When study data are clustered, standard regression analysis is considered inappropriate and analytical techniques for clustered data need to be used. For prediction research in which the interest of predictor effects is on the patient level, random effect regression models are probably preferred over standard regression analysis. It is well known that the random effect parameter estimates and the standard logistic regression parameter estimates are different. Here, we compared random effect and standard logistic regression models for their ability to provide accurate predictions. Using an empirical study on 1642 surgical patients at risk of postoperative nausea and vomiting, who were treated by one of 19 anesthesiologists (clusters), we developed prognostic models either with standard or random intercept logistic regression. External validity of these models was assessed in new patients from other anesthesiologists. We supported our results with simulation studies using intra-class correlation coefficients (ICC) of 5%, 15%, or 30%. Standard performance measures and measures adapted for the clustered data structure were estimated. The model developed with random effect analysis showed better discrimination than the standard approach, if the cluster effects were used for risk prediction (standard c-index of 0.69 versus 0.66). In the external validation set, both models showed similar discrimination (standard c-index 0.68 versus 0.67). The simulation study confirmed these results. For datasets with a high ICC (≥15%), model calibration was only adequate in external subjects, if the used performance measure assumed the same data structure as the model development method: standard calibration measures showed good calibration for the standard developed model, calibration measures adapting the clustered data structure showed good calibration for the prediction model with random intercept. The models with random intercept discriminate better than the standard model only if the cluster effect is used for predictions. The prediction model with random intercept had good calibration within clusters.

  16. Considering Horn's Parallel Analysis from a Random Matrix Theory Point of View.

    PubMed

    Saccenti, Edoardo; Timmerman, Marieke E

    2017-03-01

    Horn's parallel analysis is a widely used method for assessing the number of principal components and common factors. We discuss the theoretical foundations of parallel analysis for principal components based on a covariance matrix by making use of arguments from random matrix theory. In particular, we show that (i) for the first component, parallel analysis is an inferential method equivalent to the Tracy-Widom test, (ii) its use to test high-order eigenvalues is equivalent to the use of the joint distribution of the eigenvalues, and thus should be discouraged, and (iii) a formal test for higher-order components can be obtained based on a Tracy-Widom approximation. We illustrate the performance of the two testing procedures using simulated data generated under both a principal component model and a common factors model. For the principal component model, the Tracy-Widom test performs consistently in all conditions, while parallel analysis shows unpredictable behavior for higher-order components. For the common factor model, including major and minor factors, both procedures are heuristic approaches, with variable performance. We conclude that the Tracy-Widom procedure is preferred over parallel analysis for statistically testing the number of principal components based on a covariance matrix.
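
    Horn's parallel analysis itself is straightforward to implement. A minimal sketch for the correlation-matrix case (the article additionally treats the covariance-matrix case, and the data below are synthetic) is:

    ```python
    import numpy as np

    def parallel_analysis(X, n_iter=500, quantile=95, seed=0):
        """Retain components whose eigenvalues exceed those of random data."""
        rng = np.random.default_rng(seed)
        n, p = X.shape
        obs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
        rand = np.empty((n_iter, p))
        for i in range(n_iter):
            R = rng.standard_normal((n, p))
            rand[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(R, rowvar=False)))[::-1]
        threshold = np.percentile(rand, quantile, axis=0)
        return int(np.sum(obs > threshold))

    X = np.random.default_rng(1).standard_normal((300, 10))
    X[:, 1] += X[:, 0]; X[:, 2] += X[:, 0]   # induce one strong component
    print("components retained:", parallel_analysis(X))
    ```

    The article's point is that applying this same random-data threshold to higher-order eigenvalues is statistically questionable, and that a Tracy-Widom based test is preferable there.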

  17. Spatiotemporal pH dynamics in concentration polarization near ion-selective membranes.

    PubMed

    Andersen, Mathias B; Rogers, David M; Mai, Junyu; Schudel, Benjamin; Hatch, Anson V; Rempe, Susan B; Mani, Ali

    2014-07-08

    We present a detailed analysis of the transient pH dynamics for a weak, buffered electrolyte subject to voltage-driven transport through an ion-selective membrane. We show that pH fronts emanate from the concentration polarization zone next to the membrane and that these propagating fronts change the pH in the system several units from its equilibrium value. The analysis is based on a 1D model using the unsteady Poisson-Nernst-Planck equations with nonequilibrium chemistry and without assumptions of electroneutrality or asymptotically thin electric double layers. Nonequilibrium chemical effects, especially for water splitting, are shown to be important for the dynamical and spatiotemporal evolution of the pH fronts. Nonetheless, the model also shows that at steady state the assumption of chemical equilibrium can still lead to good approximations of the global pH distribution. Moreover, our model shows that the transport of the hydronium ion in the extended space charge region is governed by a balance between electromigration and water self-ionization. On the basis of this observation, we present a simple model showing that the net flux of the hydronium ion is proportional to the length of the extended space charge region and the water self-ionization rate. To demonstrate these effects in practice, we have adopted the experiment of Mai et al. (Mai, J.; Miller, H.; Hatch, A. V. Spatiotemporal Mapping of Concentration Polarization Induced pH Changes at Nanoconstrictions. ACS Nano 2012, 6, 10206) as a model problem, and by including the full chemistry and transport, we show that the present model can capture the experimentally observed pH fronts. Our model can, among other things, be used to predict and engineer pH dynamics, which can be essential to the performance of membrane-based systems for biochemical separation and analysis.
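
    The governing system described, Poisson-Nernst-Planck transport coupled to nonequilibrium water self-ionization, has the following general form in standard notation (the paper's buffer reactions add analogous source terms; this is a generic statement, not a transcription of the paper's equations):

    $$
    \begin{aligned}
    \partial_t c_i &= -\nabla \cdot J_i + R_i, \qquad
    J_i = -D_i\left(\nabla c_i + \frac{z_i F}{RT}\, c_i \nabla \phi\right),\\
    -\nabla \cdot (\varepsilon \nabla \phi) &= F \sum_i z_i c_i, \qquad
    R_{\mathrm{H^+}} = R_{\mathrm{OH^-}} = k_f - k_b\, c_{\mathrm{H^+}}\, c_{\mathrm{OH^-}},
    \end{aligned}
    $$

    where $c_i$, $z_i$, $D_i$ are the concentration, valence, and diffusivity of species $i$, $\phi$ is the electric potential, and $k_f$, $k_b$ are the forward and backward water self-ionization rates; note that electroneutrality is not imposed, consistent with the analysis of the extended space charge region.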

  18. Optimizing Energy Consumption in Building Designs Using Building Information Model (BIM)

    NASA Astrophysics Data System (ADS)

    Egwunatum, Samuel; Joseph-Akwara, Esther; Akaigwe, Richard

    2016-09-01

    Given the ability of a Building Information Model (BIM) to serve as a multi-disciplinary data repository, this paper seeks to explore and exploit the sustainability value of Building Information Modelling in delivering buildings that require less energy for their operation, emit less CO2 and at the same time provide a comfortable living environment for their occupants. This objective was achieved by a critical and extensive review of the literature covering: (1) building energy consumption, (2) building energy performance and analysis, and (3) building information modeling and energy assessment. The literature cited in this paper showed that linking an energy analysis tool with a BIM model helped project design teams to predict and optimize energy consumption. To validate this finding, an in-depth analysis was carried out on a completed BIM-integrated construction project, the Arboleda Project in the Dominican Republic. The findings showed that the BIM-based energy analysis helped the design team achieve the world's first 103% positive-energy building. From these findings, the paper concludes that linking an energy analysis tool with a BIM model helps to expedite the energy analysis process, provides more detailed and accurate results, and delivers energy-efficient buildings. The study further recommends that the adoption of level 2 BIM and the integration of BIM in energy optimization analyses be made compulsory for all projects, irrespective of the method of procurement (government-funded or otherwise) or project size.

  19. Failure analysis and modeling of a VAXcluster system

    NASA Technical Reports Server (NTRS)

    Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.

    1990-01-01

    This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics, such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines, indicating that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 vs 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
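
    The k-out-of-7 models referred to can be illustrated with the standard k-out-of-n reliability formula for identical, exponentially failing machines. The failure rate below is invented, not the measured VAXcluster rate, so the crossing times differ from the paper's:

    ```python
    from math import comb, exp

    def k_out_of_n_reliability(k, n, lam, t):
        """P(at least k of n machines alive at time t), each failing at rate lam."""
        r = exp(-lam * t)
        return sum(comb(n, i) * r**i * (1 - r)**(n - i) for i in range(k, n + 1))

    lam = 0.01  # hypothetical per-machine failure rate (1/hours)
    for k in (7, 3):
        # first hour at which system reliability drops below 0.5 (coarse scan)
        t = next(t for t in range(1, 100000)
                 if k_out_of_n_reliability(k, 7, lam, t) < 0.5)
        print(f"{k}-out-of-7: reliability reaches 0.5 after ~{t} hours")
    ```

    As in the paper, the 3-out-of-7 configuration sustains a 0.5 reliability level far longer than the 7-out-of-7 configuration, because it tolerates up to four machine failures.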

  20. Identification of visual evoked response parameters sensitive to pilot mental state

    NASA Technical Reports Server (NTRS)

    Zacharias, G. L.

    1988-01-01

    Systems analysis techniques were developed and demonstrated for modeling the electroencephalographic (EEG) steady state visual evoked response (ssVER), for use in EEG data compression and as an indicator of mental workload. The study focused on steady state frequency domain stimulation and response analysis, implemented with a sum-of-sines (SOS) stimulus generator and an off-line describing function response analyzer. Three major tasks were conducted: (1) VER related systems identification material was reviewed; (2) Software for experiment control and data analysis was developed and implemented; and (3) ssVER identification and modeling was demonstrated, via a mental loading experiment. It was found that a systems approach to ssVER functional modeling can serve as the basis for eventual development of a mental workload indicator. The review showed how transient visual evoked response (tVER) and ssVER research are related at the functional level, the software development showed how systems techniques can be used for ssVER characterization, and the pilot experiment showed how a simple model can be used to capture the basic dynamic response of the ssVER, under varying loads.

  1. Quantitative assessment of myocardial blood flow in coronary artery disease by cardiovascular magnetic resonance: comparison of Fermi and distributed parameter modeling against invasive methods.

    PubMed

    Papanastasiou, Giorgos; Williams, Michelle C; Dweck, Marc R; Alam, Shirjel; Cooper, Annette; Mirsadraee, Saeed; Newby, David E; Semple, Scott I

    2016-09-13

    Mathematical modeling of perfusion cardiovascular magnetic resonance (CMR) data allows absolute quantification of myocardial blood flow and can potentially improve the diagnosis and prognostication of obstructive coronary artery disease (CAD) against the current clinical standard of visual assessment. This study compares the diagnostic performance of distributed parameter (DP) modeling against the standard Fermi model for the detection of obstructive CAD, in per-vessel and per-patient analyses. A pilot cohort of 28 subjects (24 included in the final analysis) with known or suspected CAD underwent adenosine stress-rest perfusion CMR at 3T. Data were analysed using Fermi and DP modeling against invasive coronary angiography and fractional flow reserve, acquired in all subjects. Obstructive CAD was defined as luminal stenosis of ≥70 % alone, or luminal stenosis ≥50 % and fractional flow reserve ≤0.80. On ROC analysis, DP modeling outperformed the standard Fermi model in both per-vessel and per-patient analysis. In per-patient analysis, DP modeling-derived myocardial blood flow at stress demonstrated the highest sensitivity and specificity (0.96, 0.92) for detecting obstructive CAD, against Fermi modeling (0.78, 0.88) and visual assessment (0.79, 0.88), respectively. DP modeling demonstrated consistently higher diagnostic performance than Fermi modeling and may have merit for stratifying patients with at least one vessel with obstructive CAD. Clinicaltrials.gov NCT01368237. Registered 6 June 2011. URL: https://clinicaltrials.gov/ct2/show/NCT01368237.

  2. Analysis of perceived risk among construction workers: a cross-cultural study and reflection on the Hofstede model.

    PubMed

    Martinez-Fiestas, Myriam; Rodríguez-Garzón, Ignacio; Delgado-Padial, Antonio; Lucas-Ruiz, Valeriano

    2017-09-01

    This article presents a cross-cultural study of perceived risk in the construction industry. Worker samples from three different countries were studied: Spain, Peru and Nicaragua. The main goal was to explain how construction workers perceive their occupational hazards and to analyze how this is related to their national culture. The model used to measure perceived risk was the psychometric paradigm. The results show three very similar profiles, indicating that risk perception is independent of nationality. A cultural analysis was conducted using the Hofstede model; the results of this analysis, in relation to perceived risk, showed that risk perception in construction is independent of national culture. Finally, a multiple linear regression analysis was conducted to determine which qualitative attributes could predict the overall quantitative size of risk perception. These findings have important implications for the management of safety in the workplace.

  3. Analysis of vehicle's safety envelope under car-following model

    NASA Astrophysics Data System (ADS)

    Tang, Tie-Qiao; Zhang, Jian; Chen, Liang; Shang, Hua-Yan

    2017-05-01

    In this paper, we propose an improved car-following model to explore the impacts of a vehicle's two safety distances (i.e., the front safety distance and the back safety distance) on traffic safety during the starting process. The numerical results show that our model is markedly safer than the full velocity difference (FVD) model, indicating that each driver should account for both safety distances while driving.
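
    For reference, the FVD baseline against which the proposed model is compared takes the standard form (the full velocity difference model of Jiang et al.; the optimal velocity function V and the added safety-distance terms are model-specific):

    $$
    \frac{\mathrm{d} v_n(t)}{\mathrm{d} t} = \kappa \left[ V\big(\Delta x_n(t)\big) - v_n(t) \right] + \lambda\, \Delta v_n(t),
    $$

    where $\Delta x_n$ and $\Delta v_n$ are the headway and velocity difference to the preceding vehicle and $\kappa$, $\lambda$ are sensitivity coefficients; the proposed model modifies this acceleration law so that both the front and back safety distances constrain the response.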

  4. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted via analysis of variance to obtain a preliminary set of influential parameters, greatly reducing the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not guarantee good performance on the hydrological signatures. For most samples from the Sobol sensitivity analysis, water yield was simulated very well, but the minimum and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated; nevertheless, a number of samples still performed well on these three signatures. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
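
    Variance-based Sobol analysis of the kind used in the second step can be run with standard tooling. The sketch below uses the SALib package on a toy function; SALib and the parameter names/bounds are our assumptions for illustration, since the study ran DHSVM simulations as the model:

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["lateral_conductivity", "porosity", "field_capacity"],
        "bounds": [[0.001, 0.1], [0.3, 0.6], [0.1, 0.4]],
    }

    X = saltelli.sample(problem, 1024)       # Saltelli sampling design

    def toy_model(x):                        # stand-in for a DHSVM run
        return x[0] * 50 + x[1] ** 2 + x[0] * x[2] * 10

    Y = np.array([toy_model(x) for x in X])
    Si = sobol.analyze(problem, Y)           # first-order and total indices
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order={s1:.2f}, total={st:.2f}")
    ```

    A large gap between a parameter's total and first-order index signals interaction effects, which is one reason the variance-based step is worth its computational cost after the cheaper screening step.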

  5. Fretting Fatigue Analysis of Additively Manufactured Blade Root Made of Intermetallic Ti-48Al-2Cr-2Nb Alloy at High Temperature.

    PubMed

    Lavella, Mario; Botto, Daniele

    2018-06-21

    Slots in the disks of aircraft turbines restrain the centrifugal load of the blades. The contact surfaces between the blade root and the disk slot undergo high contact pressure and relative displacement, the typical condition in which fretting occurs. The load level ranges from zero to maximum during take-off, and this cycle is repeated for each mission. In this paper, a fretting fatigue analysis of additively manufactured blades is presented. The blades are made of an intermetallic alloy, γTiAl. Fretting fatigue experiments were performed at a frequency of 0.5 Hz and a temperature of 640 °C to match the operating conditions of real blades. The minimum load was fixed at 0.5 kN and three maximum loads were applied, namely 16, 18 and 20 kN. Both an analytical and a two-dimensional finite element model were used to evaluate the state of stress at the contact interfaces. The results of the analytical model showed good agreement with the numerical model. Experiments showed that cracks nucleate where the analytical model predicts the maximum contact pressure and the numerical model predicts the maximum equivalent stress. A parametric analysis performed with the analytical model indicates that there exists an optimum geometry that minimizes the contact pressure. Tests showed that the component life changed dramatically with variation of the maximum load. Optical topography and scanning electron microscopy (SEM) analysis revealed information about the damage mechanism.

  6. Assessments of higher-order ionospheric effects on GPS coordinate time series: A case study of CMONOC with longer time series

    NASA Astrophysics Data System (ADS)

    Jiang, Weiping; Deng, Liansheng; Zhou, Xiaohui; Ma, Yifang

    2014-05-01

    Higher-order ionospheric (HIO) corrections are proposed to become a standard part of precise GPS data analysis. In this study, we investigate the impacts of the HIO corrections on coordinate time series by re-processing GPS data from the Crustal Movement Observation Network of China (CMONOC). Nearly 13 years of data are used in our three processing runs: (a) run NO, without HIO corrections; (b) run IG, with both second- and third-order corrections modeled using the International Geomagnetic Reference Field 11 (IGRF11) for the magnetic field; and (c) run ID, the same as IG but with a dipole magnetic model. Both spectral analysis and noise analysis are adopted to investigate these effects. Results show that, for CMONOC stations, HIO corrections bring an overall improvement. After the corrections are applied, the noise amplitudes decrease, with the white noise amplitudes showing the more remarkable variation. Low-latitude sites are more affected, and the impacts vary between coordinate components. The results of an analysis of stacked periodograms show that there is a good match between the seasonal amplitudes and the HIO corrections, and that the observed variations in the coordinate time series are related to HIO effects. HIO delays partially explain the seasonal amplitudes in the coordinate time series, especially for the U component. The annual amplitudes of all components decrease for over one-half of the selected CMONOC sites, and the semi-annual amplitudes are even more strongly affected by the corrections. However, when the dipole model is used, the results are not as good as with the IGRF model: analysis of the dipole run indicates that the HIO corrections increase the noise amplitudes and can generate spurious periodic signals. When the dipole model is used to model the HIO terms, larger residuals and noise are introduced rather than effective improvements.

  7. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the standard calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied; theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
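
    A minimal sketch of the idea behind L2 calibration, on invented data and an invented simulator (the paper itself works with a nonparametric estimate of the true process rather than the raw observations): choose the parameter minimizing the L2 distance between simulator output and the physical process.

      import numpy as np
      from scipy.optimize import minimize_scalar

      rng = np.random.default_rng(0)
      x_obs = np.linspace(0.0, 1.0, 50)
      y_obs = np.sin(2 * np.pi * x_obs) + rng.normal(0.0, 0.1, x_obs.size)

      def computer_model(x, theta):
          # Imperfect simulator: wrong amplitude, one tunable parameter.
          return 0.9 * np.sin(2 * np.pi * x + theta)

      def l2_distance(theta):
          # Discretized squared L2 norm between simulator and observations.
          return np.trapz((computer_model(x_obs, theta) - y_obs) ** 2, x_obs)

      res = minimize_scalar(l2_distance, bounds=(-np.pi, np.pi), method="bounded")
      print("L2-calibrated theta:", res.x)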

  8. Loading Deformation Characteristic Simulation Study of Engineering Vehicle Refurbished Tire

    NASA Astrophysics Data System (ADS)

    Qiang, Wang; Xiaojie, Qi; Zhao, Yang; Yunlong, Wang; Guotian, Wang; Degang, Lv

    2018-05-01

    The paper constructed computer models of an engineering-vehicle refurbished tire (geometry, mechanics, contact, and finite element analysis models) and carried out a simulation study of its load-deformation behaviour by comparison with a new tire of the same type, obtaining the load-deformation response of the refurbished tire under static ground-contact conditions. The analysis results show that the radial and lateral deformations of the refurbished tire follow rules of change close to those of the new tire, with deformation values slightly smaller than those of the new tire. At a given inflation pressure, the radial deformation of the refurbished tire increases linearly with load; the lateral deformation also changes linearly with load when the inflation pressure is low, but becomes nonlinear when the inflation pressure is very high.

  9. Analysis of genetic effects of nuclear-cytoplasmic interaction on quantitative traits: genetic model for diploid plants.

    PubMed

    Han, Lide; Yang, Jian; Zhu, Jun

    2007-06-01

    A genetic model was proposed for simultaneously analyzing genetic effects of nuclear, cytoplasm, and nuclear-cytoplasmic interaction (NCI) as well as their genotype by environment (GE) interaction for quantitative traits of diploid plants. In the model, the NCI effects were further partitioned into additive and dominance nuclear-cytoplasmic interaction components. Mixed linear model approaches were used for statistical analysis. On the basis of diallel cross designs, Monte Carlo simulations showed that the genetic model was robust for estimating variance components under several situations without specific effects. Random genetic effects were predicted by an adjusted unbiased prediction (AUP) method. Data on four quantitative traits (boll number, lint percentage, fiber length, and micronaire) in Upland cotton (Gossypium hirsutum L.) were analyzed as a worked example to show the effectiveness of the model.

  10. Economic analysis and assessment of syngas production using a modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hakkwan; Parajuli, Prem B.; Yu, Fei

    Economic analysis and modeling are essential for the development of current feedstock and process technology for bio-gasification. The objective of this study was to develop an economic model and apply it to predict the unit cost of syngas production from a micro-scale bio-gasification facility. The economic model was programmed in the C++ programming language and developed using a parametric cost approach, which included processes to calculate the total capital costs and the total operating costs. The model used measured economic data from the bio-gasification facility at Mississippi State University. The modeling results showed that the unit cost of syngas production was $1.217 for a 60 Nm3/h capacity bio-gasifier. The operating cost was the major part of the total production cost; the equipment purchase cost and the labor cost were the largest parts of the total capital cost and the total operating cost, respectively. Sensitivity analysis indicated that labor cost ranked highest, followed by equipment cost, loan life, feedstock cost, interest rate, utility cost, and waste treatment cost. The unit cost of syngas production increased with an increase in every parameter except loan life. The annual costs of equipment, labor, feedstock, waste treatment, and utilities showed a linear relationship with their percent changes, while loan life and annual interest rate showed non-linear relationships. This study provides useful information for the economic analysis and assessment of syngas production using a modeling approach.
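
    A back-of-envelope sketch of a parametric unit-cost calculation of the kind described (all figures are illustrative assumptions, not the Mississippi State facility's data):

      annual_hours = 7000.0         # assumed operating hours per year
      capacity_nm3_h = 60.0         # gasifier capacity, Nm3/h
      capital_cost = 250_000.0      # total capital cost, $ (assumed)
      loan_life_years = 10.0
      interest_rate = 0.06
      annual_operating = 45_000.0   # labor, feedstock, utilities, waste ($, assumed)

      # Capital recovery factor spreads the capital cost over the loan life.
      r, n = interest_rate, loan_life_years
      crf = r * (1 + r) ** n / ((1 + r) ** n - 1)

      annual_capital = capital_cost * crf
      annual_syngas = capacity_nm3_h * annual_hours        # Nm3 per year
      unit_cost = (annual_capital + annual_operating) / annual_syngas
      print(f"unit cost of syngas: ${unit_cost:.3f} per Nm3")

    The non-linear effect of loan life reported above enters through the capital recovery factor, which is non-linear in n.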

  11. Combination of a Stressor-Response Model with a Conditional Probability Analysis Approach for Developing Candidate Criteria from MBSS

    EPA Science Inventory

    I show that a conditional probability analysis using a stressor-response model based on a logistic regression provides a useful approach for developing candidate water quality criteria from empirical data, such as the Maryland Biological Streams Survey (MBSS) data.
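
    A minimal sketch of combining a logistic stressor-response model with a conditional probability criterion (the data are simulated and the 0.5 risk threshold is an assumption, not an MBSS result):

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      stressor = rng.uniform(0.0, 10.0, 500)               # e.g. conductivity
      p_true = 1.0 / (1.0 + np.exp(-(stressor - 5.0)))     # true response curve
      degraded = rng.random(500) < p_true                  # binary biological status

      model = LogisticRegression().fit(stressor.reshape(-1, 1), degraded)

      # Candidate criterion: the lowest stressor level at which the modeled
      # conditional probability of degradation exceeds the chosen threshold.
      grid = np.linspace(0.0, 10.0, 1001)
      p_hat = model.predict_proba(grid.reshape(-1, 1))[:, 1]
      criterion = grid[np.argmax(p_hat >= 0.5)]
      print("candidate criterion:", criterion)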

  12. Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality

    NASA Astrophysics Data System (ADS)

    Pratama, Mega Aria; Rosyidi, Cucuk Nur

    2017-11-01

    This research develops a make-or-buy decision model that accounts for supplier imperfect quality. The model can help companies decide whether to make or buy a component at the best quality and the least cost in a multistage manufacturing process. Imperfect quality is one of the cost components to be minimized in the model; a component of imperfect quality is not necessarily defective, as it can still be reworked and used in assembly. The research also provides a numerical example and a sensitivity analysis to show how the model works. We solved the numerical problem by simulation with the help of Crystal Ball. The sensitivity analysis results show that the percentage of imperfect components generally does not affect the model significantly, and the model is not sensitive to changes in these parameters, because the imperfect-quality costs are small relative to the overall total cost.

  13. Upper limb strength estimation of physically impaired persons using a musculoskeletal model: A sensitivity analysis.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2015-01-01

    Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with a focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed the highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies; the calculated strength was relatively insensitive to muscle pennation angle. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that using the weakened model significantly increased the sensitivity of the calculated strength at the hand, with previously insensitive parameters becoming highly sensitive. This increased sensitivity presents a significant challenge in applications that use musculoskeletal models to represent impaired individuals.

  14. Constraints based analysis of extended cybernetic models.

    PubMed

    Mandli, Aravinda R; Venkatesh, Kareenhalli V; Modak, Jayant M

    2015-11-01

    The cybernetic modeling framework provides an interesting approach to model the regulatory phenomena occurring in microorganisms. In the present work, we adopt a constraints based approach to analyze the nonlinear behavior of the extended equations of the cybernetic model. We first show that the cybernetic model exhibits linear growth behavior under the constraint of no resource allocation for the induction of the key enzyme. We then quantify the maximum achievable specific growth rate of microorganisms on mixtures of substitutable substrates under various kinds of regulation and show its use in gaining an understanding of the regulatory strategies of microorganisms. Finally, we show that Saccharomyces cerevisiae exhibits suboptimal dynamic growth with a long diauxic lag phase when growing on a mixture of glucose and galactose and discuss its potential to achieve optimal growth with a significantly reduced diauxic lag period. The analysis carried out in the present study illustrates the utility of adopting a constraints based approach to understand the dynamic growth strategies of microorganisms. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  15. Analysis of the quench propagation along Nb3Sn Rutherford cables with the THELMA code. Part I: Geometric and thermal models

    NASA Astrophysics Data System (ADS)

    Manfreda, G.; Bellina, F.

    2016-12-01

    The paper describes the new lumped thermal model recently implemented in the THELMA code for the coupled electromagnetic-thermal analysis of superconducting cables. A new geometrical model is also presented, which describes the Rutherford cables used for accelerator magnets. A first validation of these models is given by the analysis of the longitudinal quench propagation velocity in the Nb3Sn prototype coil SMC3, built and tested within the framework of the EuCARD project for the development of high-field magnets for the LHC. This paper presents the models in detail, while their application to the quench propagation analysis is presented in a companion paper.

  16. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  17. Investigating How German Biology Teachers Use Three-Dimensional Physical Models in Classroom Instruction: a Video Study

    NASA Astrophysics Data System (ADS)

    Werner, Sonja; Förtsch, Christian; Boone, William; von Kotzebue, Lena; Neuhaus, Birgit J.

    2017-07-01

    To obtain a general understanding of science, the use of models, as called for by the National Education Standards, is important for instruction. Model use can be characterized by three aspects: (1) the characteristics of the model, (2) the integration of the model into instruction, and (3) the use of models to foster scientific reasoning. However, there have been no empirical results describing how the National Education Standards concerning the use of models are implemented in science instruction. Therefore, the present study investigated the implementation of different aspects of model use in German biology instruction. Two grade-nine biology lessons on the topic of neurobiology were videotaped for each of 32 biology teachers (N = 64 videos). These lessons were analysed using an event-based coding manual covering the three aspects of model use described above. Rasch analysis of the coded categories was conducted and showed reliable measurement. In the first analysis, we identified 68 lessons in which a total of 112 different models were used. The in-depth analysis showed that aspects of elaborate model use corresponding to several categories of scientific reasoning were rarely implemented in biology instruction: critical reflection on the model used (N = 25 models; 22.3%) and the use of models to demonstrate scientific reasoning (N = 26 models; 23.2%) were seldom observed. Our findings suggest that pre-service biology teacher education and professional development initiatives in Germany have to focus on both aspects.

  18. Using the PLUM procedure of SPSS to fit unequal variance and generalized signal detection models.

    PubMed

    DeCarlo, Lawrence T

    2003-02-01

    The recent addition of a procedure in SPSS for the analysis of ordinal regression models offers a simple means for researchers to fit the unequal variance normal signal detection model and other extended signal detection models. The present article shows how to implement the analysis and how to interpret the SPSS output. Examples of fitting the unequal variance normal model and other generalized signal detection models are given. The approach offers a convenient means for applying signal detection theory to a variety of research.
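
    For readers outside SPSS, a comparable ordinal (cumulative-link) probit fit can be sketched in Python with statsmodels; the rating data below are simulated, and only the equal-variance model is shown (the unequal-variance case adds a scale term, as in PLUM's location-scale form):

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(2)
      n = 400
      signal = np.repeat([0, 1], n // 2)              # 0 = noise, 1 = signal trial
      latent = 1.2 * signal + rng.normal(size=n)      # d' = 1.2, unit variance
      rating = np.digitize(latent, [-0.5, 0.5, 1.5])  # 4-point confidence rating

      y = pd.Series(rating).astype(
          pd.CategoricalDtype(categories=[0, 1, 2, 3], ordered=True))
      X = pd.DataFrame({"signal": signal})

      result = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
      print(result.params)   # slope estimates d'; cutpoints estimate the criteria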

  19. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials

    PubMed Central

    Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2016-01-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group. PMID:27177885

  1. Missing continuous outcomes under covariate dependent missingness in cluster randomised trials.

    PubMed

    Hossain, Anower; Diaz-Ordaz, Karla; Bartlett, Jonathan W

    2017-06-01

    Attrition is a common occurrence in cluster randomised trials which leads to missing outcome data. Two approaches for analysing such trials are cluster-level analysis and individual-level analysis. This paper compares the performance of unadjusted cluster-level analysis, baseline covariate adjusted cluster-level analysis and linear mixed model analysis, under baseline covariate dependent missingness in continuous outcomes, in terms of bias, average estimated standard error and coverage probability. The methods of complete records analysis and multiple imputation are used to handle the missing outcome data. We considered four scenarios, with the missingness mechanism and baseline covariate effect on outcome either the same or different between intervention groups. We show that both unadjusted cluster-level analysis and baseline covariate adjusted cluster-level analysis give unbiased estimates of the intervention effect only if both intervention groups have the same missingness mechanisms and there is no interaction between baseline covariate and intervention group. Linear mixed model and multiple imputation give unbiased estimates under all four considered scenarios, provided that an interaction of intervention and baseline covariate is included in the model when appropriate. Cluster mean imputation has been proposed as a valid approach for handling missing outcomes in cluster randomised trials. We show that cluster mean imputation only gives unbiased estimates when the missingness mechanism is the same between the intervention groups and there is no interaction between baseline covariate and intervention group. Multiple imputation shows overcoverage for a small number of clusters in each intervention group.
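
    A minimal sketch of the recommended individual-level analysis on simulated data: a linear mixed model with a random intercept per cluster and the intervention-by-baseline interaction that the authors note may be needed.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n_clusters, m = 20, 25
      cluster = np.repeat(np.arange(n_clusters), m)
      treat = np.repeat(np.arange(n_clusters) % 2, m)   # arm assigned by cluster
      baseline = rng.normal(size=n_clusters * m)
      u = rng.normal(0.0, 0.5, n_clusters)[cluster]     # random cluster effect
      y = 0.4 * treat + 0.8 * baseline + 0.3 * treat * baseline + u \
          + rng.normal(size=n_clusters * m)

      df = pd.DataFrame({"y": y, "treat": treat, "baseline": baseline,
                         "cluster": cluster})
      fit = smf.mixedlm("y ~ treat * baseline", df, groups=df["cluster"]).fit()
      print(fit.summary())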

  2. A Development of Nonstationary Regional Frequency Analysis Model with Large-scale Climate Information: Its Application to Korean Watershed

    NASA Astrophysics Data System (ADS)

    Kim, Jin-Young; Kwon, Hyun-Han; Kim, Hung-Soo

    2015-04-01

    Existing regional frequency analysis has the disadvantage that it is difficult to take geographical characteristics into account when estimating areal rainfall. In this regard, this study aims to develop a hierarchical-Bayesian-model-based nonstationary regional frequency analysis in which spatial patterns of the design rainfall are explicitly linked to geographical information (e.g. latitude, longitude and altitude). This study assumes that the parameters of the Gumbel (or GEV) distribution are a function of geographical characteristics within a general linear regression framework. Posterior distributions of the regression parameters are estimated by the Bayesian Markov chain Monte Carlo (MCMC) method, and the identified functional relationship is used to spatially interpolate the parameters of the distributions using digital elevation models (DEM) as inputs. The proposed model is applied to derive design rainfalls over the entire Han River watershed. The proposed Bayesian regional frequency analysis model showed results similar to those of the L-moment-based regional frequency analysis. In addition, the model has the advantage of quantifying the uncertainty of the design rainfall and of estimating the areal rainfall with geographical information taken into account. Finally, a comprehensive discussion of design rainfall in the nonstationary context is presented. KEYWORDS: Regional frequency analysis, Nonstationary, Spatial information, Bayesian. Acknowledgement: This research was supported by a grant (14AWMP-B082564-01) from the Advanced Water Management Research Program funded by the Ministry of Land, Infrastructure and Transport of the Korean government.

  3. An IR Sounding-Based Analysis of the Saharan Air Layer in North Africa

    NASA Technical Reports Server (NTRS)

    Nicholls, Stephen D.; Mohr, Karen I.

    2018-01-01

    Intense daytime surface heating over barren-to-sparsely vegetated surfaces results in dry convective mixing. In the absence of external forcing such as mountain waves, the dry convection can produce a deep, well-mixed, nearly isentropic boundary layer that becomes a well-mixed residual layer in the evening. These well-mixed layers (WMLs) retain their unique mid-tropospheric thermal and humidity structure for several days. To detect the Saharan air layer (SAL, a WML of Saharan origin) and characterize its properties, AIRS Level 2 Ver. 6 temperature and humidity products (2003-present) are evaluated against rawinsondes and compared to model analysis at each of the 55 rawinsonde stations in northern Africa. Detection involved a two-step process: 1) algorithm-based detection of WMLs in dry environments (mixing ratio less than 7 g per kilogram) and 2) identification of SALs by applying Hybrid Single Particle Lagrangian Integrated Trajectory (HYSPLIT) back trajectories to determine the history of each WML. WML occurrence rates from AIRS closely resemble those from rawinsondes, yet rates from model analysis were up to 30% higher than observations in the Sahara due to model errors. Despite the overly frequent occurrence of WMLs in model analysis, HYSPLIT trajectory analysis showed that SAL occurrence rates (given that a WML exists) from rawinsondes, AIRS, and model analysis were nearly identical: although the number of WMLs varied among the data sources, the proportion of WMLs classified as SAL was nearly the same. The analysis of SAL bulk properties showed that AIRS and model analysis exhibited a slight warm and moist bias relative to rawinsondes in non-Saharan locations, but model analysis was notably warmer than rawinsondes and AIRS within the Sahara; the latter result is likely associated with the dearth of available data assimilated by model analysis in the Sahara. The variability of SAL thicknesses was reasonably captured by both AIRS and model analysis, but the former favors layers that are thinner than observed. Further analysis of HYSPLIT trajectories revealed that, on average, fewer than 10% of all SAL back trajectories passed through regions with notable precipitation (>100 mm accumulated along the trajectory path) and fewer than 33% through regions of notable Aerosol Optical Depth (AOD greater than 0.4, the 75th percentile of AOD). Trajectory analysis indicated that only 57% of Saharan and 24% of non-Saharan WMLs are definitively of Saharan origin (the Saharan requirement being two consecutive days in the Sahara, with 24 or more of those hours within 72 hours of detection). Non-SAL WMLs either originate from local-to-regionally generated residual layers or from mid-latitude air streams that do not linger over the Sahara for a sufficient time. Initial analysis shows these non-SAL WMLs tend to be both notably cooler and slightly moister than their SAL counterparts. Continuing analysis will address what role the characteristics of Saharan and non-Saharan air masses may play in local and regional environmental conditions.

  4. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A. F.; Jacobs, C. S.

    2011-01-01

    The standard VLBI analysis models measurement noise as purely thermal errors modeled according to uncorrelated Gaussian distributions. As the price of recording bits steadily decreases, thermal errors will soon no longer dominate. It is therefore expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become more relevant for optimal analysis. This paper discusses the advantages of including the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow model pioneered by Treuhaft and Lanyi. We show examples of applying these correlated noise spectra to the weighting of VLBI data analysis.
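
    A rough sketch (not the Treuhaft-Lanyi formulation itself) of turning a Kolmogorov-type power-law structure function into a covariance matrix for correlated delay errors; the amplitude, saturation scale, and variance are invented:

      import numpy as np

      def kolmogorov_cov(times, sigma2=1.0, c=0.01, t_sat=3600.0):
          # Structure function D(tau) = c * tau**(2/3), capped at 2*sigma2
          # at long lags; covariance follows as C(tau) = sigma2 - D(tau)/2.
          tau = np.abs(times[:, None] - times[None, :])
          D = np.minimum(c * tau ** (2.0 / 3.0), 2.0 * sigma2)
          return sigma2 - 0.5 * D

      t = np.arange(0.0, 600.0, 60.0)   # observation epochs, seconds
      C = kolmogorov_cov(t)
      print(C[0, :3])   # the inverse of C would serve as the weight matrix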

  5. Study of vibrational spectra and hydrogen bonding network in dimeric and tetrameric model of ampicillin using DFT and AIM approach

    NASA Astrophysics Data System (ADS)

    Shukla, Anuradha; Khan, Eram; Tandon, Poonam; Sinha, Kirti

    2017-03-01

    Ampicillin is a β-lactam antibiotic that is active against both gram-positive and gram-negative bacteria and is widely used for the treatment of infections. In this work, molecular properties of ampicillin are calculated from its dimeric and tetrameric models using DFT/B3LYP/6-311G(d,p). The HOMO-LUMO energy gap shows that the chemical reactivity of the tetrameric model of ampicillin is higher than that of the dimeric and monomeric models. To better understand intra- and intermolecular bonding and interactions among bonds, an NBO analysis is carried out on the tetrameric model of ampicillin and complemented with a 'quantum theory of atoms in molecules' (QTAIM) analysis. The binding energy of the dimeric model of ampicillin is calculated as -26.84 kcal/mol from the AIM analysis and -29.34 kcal/mol from the DFT calculations. The global electrophilicity index (ω = 2.8118 eV) of the tetrameric model shows that it behaves as a stronger electrophile than the dimeric and monomeric models. The FT-Raman and FT-IR spectra were recorded in the solid phase and interpreted in terms of a potential energy distribution analysis. The combined theoretical and experimental vibrational analysis confirms the presence of hydrogen bonds in the ampicillin molecule.

  6. Rhetorical and Linguistic Analysis of Bush's Second Inaugural Speech

    ERIC Educational Resources Information Center

    Sameer, Imad Hayif

    2017-01-01

    This study attempts to analyze Bush's second inaugural speech, aiming to investigate the use of linguistic strategies in it. It draws on two models, Aristotle's and that of Atkinson (1984), to draw attention to linguistic strategies. The analysis shows that Bush's second inaugural speech is successful…

  7. Multilevel Structural Equation Models for the Analysis of Comparative Data on Educational Performance

    ERIC Educational Resources Information Center

    Goldstein, Harvey; Bonnet, Gerard; Rocher, Thierry

    2007-01-01

    The Programme for International Student Assessment comparative study of reading performance among 15-year-olds is reanalyzed using statistical procedures that allow the full complexity of the data structures to be explored. The article extends existing multilevel factor analysis and structural equation models and shows how this can extract richer…

  8. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and tropospheric correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results, and we expect to see stronger effects at higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  9. Two-field analysis of no-scale supergravity inflation

    DOE PAGES

    Ellis, John; Garcia, Marcos A. G.; Nanopoulos, Dimitri V.; ...

    2015-01-08

    Since the building blocks of supersymmetric models include chiral superfields containing pairs of effective scalar fields, a two-field approach is particularly appropriate for models of inflation based on supergravity. In this paper, we generalize the two-field analysis of the inflationary power spectrum to supergravity models with arbitrary Kähler potential. We show how two-field effects in the context of no-scale supergravity can alter the model predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r, yielding results that interpolate between the Planck-friendly Starobinsky model and BICEP2-friendly predictions. In particular, we show that two-field effects in a chaotic no-scale inflation model with a quadratic potential are capable of reducing r to very small values << 0.1. We also calculate the non-Gaussianity measure f_NL, finding that it is well below the current experimental sensitivity.

  10. Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Elrod, David Alan

    1988-01-01

    The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.

  11. Non-Contact Heart Rate and Blood Pressure Estimations from Video Analysis and Machine Learning Modelling Applied to Food Sensory Responses: A Case Study for Chocolate.

    PubMed

    Gonzalez Viejo, Claudia; Fuentes, Sigfredo; Torrico, Damir D; Dunshea, Frank R

    2018-06-03

    Traditional methods to assess heart rate (HR) and blood pressure (BP) are intrusive and can affect the results of sensory analysis of food, as participants are aware of the sensors. This paper aims to validate a non-contact method to measure HR using the photoplethysmography (PPG) technique and to develop machine learning (ML) models to predict the real HR and BP from raw video analysis (RVA), with an example application to chocolate consumption. The RVA used a computer vision algorithm based on luminosity changes in the different RGB color channels over three face regions (forehead and both cheeks). To validate the proposed method and the ML models, a home oscillometric monitor and a finger sensor were used. Results showed high correlations with the G color channel (R² = 0.83). Two ML models were developed using the three face regions: (i) Model 1, predicting HR and BP from the RVA outputs, with R = 0.85, and (ii) Model 2, a time-series model mapping HR, magnitude, and luminosity from the RVA to HR values every second, with R = 0.97. An application to the sensory analysis of chocolate showed significant correlations of changes in HR and BP with chocolate hardness and purchase intention.
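
    A simplified sketch of the green-channel PPG idea described above: average the G channel over a face region, band-pass around plausible heart rates, and read off the dominant frequency. The trace below is synthetic; in practice it would come from the frames of a video capture.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fps = 30.0
      t = np.arange(0, 30.0, 1.0 / fps)
      # Stand-in for the mean green intensity of the forehead region per frame,
      # e.g. frame[y0:y1, x0:x1, 1].mean() for each frame read with OpenCV.
      g_trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) \
          + np.random.normal(0, 0.005, t.size)

      nyq = fps / 2
      b, a = butter(3, [0.7 / nyq, 4.0 / nyq], btype="band")  # 42-240 bpm band
      filtered = filtfilt(b, a, g_trace)

      spectrum = np.abs(np.fft.rfft(filtered))
      freqs = np.fft.rfftfreq(filtered.size, d=1.0 / fps)
      hr_bpm = 60.0 * freqs[np.argmax(spectrum)]
      print(f"estimated heart rate: {hr_bpm:.0f} bpm")   # ~72 bpm for this signal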

  12. A Bayesian analysis of inflationary primordial spectrum models using Planck data

    NASA Astrophysics Data System (ADS)

    Santos da Costa, Simony; Benetti, Micol; Alcaniz, Jailson

    2018-03-01

    The current available Cosmic Microwave Background (CMB) data show an anomalously low value of the CMB temperature fluctuations at large angular scales (l < 40). This lack of power is not explained by the minimal ΛCDM model, and one of the possible mechanisms explored in the literature to address this problem is the presence of features in the primordial power spectrum (PPS) motivated by the early universe physics. In this paper, we analyse a set of cutoff inflationary PPS models using a Bayesian model comparison approach in light of the latest CMB data from the Planck Collaboration. Our results show that the standard power-law parameterisation is preferred over all models considered in the analysis, which motivates the search for alternative explanations for the observed lack of power in the CMB anisotropy spectrum.

  13. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111

  14. Development of an automated energy audit protocol for office buildings

    NASA Astrophysics Data System (ADS)

    Deb, Chirag

    This study aims to enhance the building energy audit process and to reduce the time and cost required to conduct a full physical audit. A total of 5 energy service companies in Singapore collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques are adopted to analyse these reports, comprising cluster analysis and the development of models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in EUI (energy use intensity), a robust iterative process for selecting the appropriate variables is developed. The results show that the 4 variables of GFA, non-air-conditioning energy consumption, average chiller plant efficiency, and installed chiller capacity should be used for clustering. This analysis is extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm is developed to select the input variables for the two energy-saving prediction models. The results show that the ANN model can predict the energy saving potential of a given building with an accuracy of ±14.8%.
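
    A hedged sketch of the clustering step on the four selected variables, with standardization so that no single variable dominates; the column names and toy rows are assumptions, not the audit data:

      import pandas as pd
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      audits = pd.DataFrame({
          "gfa_m2": [12000, 45000, 30000, 52000, 18000, 36000],
          "non_ac_kwh": [1.2e6, 3.9e6, 2.5e6, 4.4e6, 1.6e6, 3.0e6],
          "chiller_plant_eff": [0.75, 0.62, 0.68, 0.60, 0.78, 0.66],
          "chiller_capacity_rt": [800, 2600, 1500, 3000, 950, 1900],
      })

      X = StandardScaler().fit_transform(audits)
      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
      print(labels)   # cluster membership for each audited building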

  15. Foreign exchange market data analysis reveals statistical features that predict price movement acceleration.

    PubMed

    Nacher, Jose C; Ochiai, Tomoshiro

    2012-05-01

    Increasingly accessible financial data allow researchers to infer market-dynamics-based laws and to propose models that are able to reproduce them. In recent years, several stylized facts have been uncovered. Here we perform an extensive analysis of foreign exchange data that leads to the unveiling of a statistical financial law. First, our findings show that, on average, volatility increases more when the price exceeds its highest (or lowest) past value, i.e., breaks the resistance line. We call this the breaking-acceleration effect. Second, our results show that the probability P(T) of breaking the resistance line of the past time window T follows a power law in both real data and theoretically simulated data. However, the probability calculated using real data is considerably lower than that obtained using a traditional Black-Scholes (BS) model. Taken together, the present analysis characterizes a distinct stylized fact of financial markets and shows that the market exceeds a past (historical) extreme price fewer times than expected from the BS model (the resistance effect). However, when the market does, we predict that the average volatility at that point will be much higher. These findings indicate that no Markovian model faithfully captures the market dynamics.

  16. Foreign exchange market data analysis reveals statistical features that predict price movement acceleration

    NASA Astrophysics Data System (ADS)

    Nacher, Jose C.; Ochiai, Tomoshiro

    2012-05-01

    Increasingly accessible financial data allow researchers to infer market-dynamics-based laws and to propose models that are able to reproduce them. In recent years, several stylized facts have been uncovered. Here we perform an extensive analysis of foreign exchange data that leads to the unveiling of a statistical financial law. First, our findings show that, on average, volatility increases more when the price exceeds its highest (or lowest) past value, i.e., breaks the resistance line. We call this the breaking-acceleration effect. Second, our results show that the probability P(T) of breaking the resistance line of the past time window T follows a power law in both real data and theoretically simulated data. However, the probability calculated using real data is considerably lower than that obtained using a traditional Black-Scholes (BS) model. Taken together, the present analysis characterizes a distinct stylized fact of financial markets and shows that the market exceeds a past (historical) extreme price fewer times than expected from the BS model (the resistance effect). However, when the market does, we predict that the average volatility at that point will be much higher. These findings indicate that no Markovian model faithfully captures the market dynamics.
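
    A rough sketch of estimating P(T) on simulated geometric Brownian motion, the Black-Scholes benchmark mentioned above (the window sizes and volatility are arbitrary choices):

      import numpy as np

      rng = np.random.default_rng(4)
      n_paths, n_steps = 2000, 1000
      log_ret = rng.normal(0.0, 0.001, (n_paths, n_steps))
      prices = np.exp(np.cumsum(log_ret, axis=1))   # driftless GBM paths

      def break_probability(P, T):
          # Fraction of (path, time) pairs where the current price exceeds
          # the maximum over the preceding window of length T.
          hits = [P[:, t] > P[:, t - T:t].max(axis=1)
                  for t in range(T, P.shape[1])]
          return float(np.mean(hits))

      for T in (10, 30, 100, 300):
          print(T, break_probability(prices, T))   # decays roughly as a power law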

  17. Pilot-model analysis and simulation study of effect of control task desired control response

    NASA Technical Reports Server (NTRS)

    Adams, J. J.; Gera, J.; Jaudon, J. B.

    1978-01-01

    A pilot model analysis was performed that relates pilot control compensation, pilot aircraft system response, and aircraft response characteristics for longitudinal control. The results show that a higher aircraft short period frequency is required to achieve superior pilot aircraft system response in an altitude control task than is required in an attitude control task. These results were confirmed by a simulation study of target tracking. It was concluded that the pilot model analysis provides a theoretical basis for determining the effect of control task on pilot opinions.

  18. Systems thinking, the Swiss Cheese Model and accident analysis: a comparative systemic analysis of the Grayrigg train derailment using the ATSB, AcciMap and STAMP models.

    PubMed

    Underwood, Peter; Waterson, Patrick

    2014-07-01

    The Swiss Cheese Model (SCM) is the most popular accident causation model and is widely used throughout various industries. A debate exists in the research literature over whether the SCM remains a viable tool for accident analysis. Critics of the model suggest that it provides a sequential, oversimplified view of accidents. Conversely, proponents suggest that it embodies the concepts of systems theory, as per the contemporary systemic analysis techniques. The aim of this paper was to consider whether the SCM can provide a systems thinking approach and remain a viable option for accident analysis. To achieve this, the train derailment at Grayrigg was analysed with an SCM-based model (the ATSB accident investigation model) and two systemic accident analysis methods (AcciMap and STAMP). The analysis outputs and usage of the techniques were compared. The findings of the study showed that each model applied the systems thinking approach. However, the ATSB model and AcciMap graphically presented their findings in a more succinct manner, whereas STAMP more clearly embodied the concepts of systems theory. The study suggests that, whilst the selection of an analysis method is subject to trade-offs that practitioners and researchers must make, the SCM remains a viable model for accident analysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Penetration analysis of projectile with inclined concrete target

    NASA Astrophysics Data System (ADS)

    Kim, S. B.; Kim, H. W.; Yoo, Y. H.

    2015-09-01

    This paper presents numerical analysis results for projectile penetration into an inclined concrete target. We applied dynamic material properties of 4340 steel, aluminium, and explosive for the projectile body; the dynamic material properties were measured with a static tensile testing machine and Hopkinson pressure bar tests. Moreover, we used three concrete damage models included in LS-DYNA 3D: the SOIL_CONCRETE, CSCM (cap model with smooth interaction) and CONCRETE_DAMAGE (K&C concrete) models. The strain rate effect in the concrete material is important for predicting the fracture deformation and shape of the concrete and the penetration depth of the projectile, so the CONCRETE_DAMAGE model with the strain rate effect was also applied to the penetration analysis. The analysis results with the CSCM model show good agreement with the penetration experimental data. The projectile trajectory and the fracture shapes of the concrete target were compared with experimental data.

  20. Structural Acoustic Physics Based Modeling of Curved Composite Shells

    DTIC Science & Technology

    2017-09-19

    Results show that the finite element computational models accurately match analytical calculations, and that the composite material studied in this… [record truncated]. Subject terms: Finite Element Analysis, Structural Acoustics, Fiber-Reinforced Composites, Physics-Based Modeling.

  1. Laser power conversion system analysis, volume 1

    NASA Technical Reports Server (NTRS)

    Jones, W. S.; Morgan, L. L.; Forsyth, J. B.; Skratt, J. P.

    1979-01-01

    The orbit-to-orbit laser energy conversion system analysis established a mission model of satellites with various orbital parameters and average electrical power requirements ranging from 1 to 300 kW. The system analysis evaluated various conversion techniques, power system deployment parameters, power system electrical supplies and other critical subsystems relative to various combinations of the mission model. The analysis shows that the laser power system would not be competitive with current satellite power systems from weight, cost and development risk standpoints.

  2. Using argument notation to engineer biological simulations with increased confidence

    PubMed Central

    Alden, Kieran; Andrews, Paul S.; Polack, Fiona A. C.; Veiga-Fernandes, Henrique; Coles, Mark C.; Timmis, Jon

    2015-01-01

    The application of computational and mathematical modelling to explore the mechanics of biological systems is becoming prevalent. To significantly impact biological research, notably in developing novel therapeutics, it is critical that the model adequately represents the captured system. Confidence in adopting in silico approaches can be improved by applying a structured argumentation approach, alongside model development and results analysis. We propose an approach based on argumentation from safety-critical systems engineering, where a system is subjected to a stringent analysis of compliance against identified criteria. We show its use in examining the biological information upon which a model is based, identifying model strengths, highlighting areas requiring additional biological experimentation and providing documentation to support model publication. We demonstrate our use of structured argumentation in the development of a model of lymphoid tissue formation, specifically Peyer's Patches. The argumentation structure is captured using Artoo (www.york.ac.uk/ycil/software/artoo), our Web-based tool for constructing fitness-for-purpose arguments, using a notation based on the safety-critical goal structuring notation. We show how argumentation helps in making the design and structured analysis of a model transparent, capturing the reasoning behind the inclusion or exclusion of each biological feature and recording assumptions, as well as pointing to evidence supporting model-derived conclusions. PMID:25589574

  3. Using argument notation to engineer biological simulations with increased confidence.

    PubMed

    Alden, Kieran; Andrews, Paul S; Polack, Fiona A C; Veiga-Fernandes, Henrique; Coles, Mark C; Timmis, Jon

    2015-03-06

    The application of computational and mathematical modelling to explore the mechanics of biological systems is becoming prevalent. To significantly impact biological research, notably in developing novel therapeutics, it is critical that the model adequately represents the captured system. Confidence in adopting in silico approaches can be improved by applying a structured argumentation approach, alongside model development and results analysis. We propose an approach based on argumentation from safety-critical systems engineering, where a system is subjected to a stringent analysis of compliance against identified criteria. We show its use in examining the biological information upon which a model is based, identifying model strengths, highlighting areas requiring additional biological experimentation and providing documentation to support model publication. We demonstrate our use of structured argumentation in the development of a model of lymphoid tissue formation, specifically Peyer's Patches. The argumentation structure is captured using Artoo (www.york.ac.uk/ycil/software/artoo), our Web-based tool for constructing fitness-for-purpose arguments, using a notation based on the safety-critical goal structuring notation. We show how argumentation helps in making the design and structured analysis of a model transparent, capturing the reasoning behind the inclusion or exclusion of each biological feature and recording assumptions, as well as pointing to evidence supporting model-derived conclusions.

  4. The comparison of proportional hazards and accelerated failure time models in analyzing the first birth interval survival data

    NASA Astrophysics Data System (ADS)

    Faruk, Alfensi

    2018-03-01

    Survival analysis is a branch of statistics focused on the analysis of time-to-event data. In multivariate survival analysis, the proportional hazards (PH) model is the most popular model for analyzing the effects of several covariates on the survival time. However, the assumption of constant hazard ratios in the PH model is not always satisfied by the data, and violation of the PH assumption leads to misinterpretation of the estimation results and decreases the power of the related statistical tests. Accelerated failure time (AFT) models, on the other hand, do not rest on the PH assumption and can be used as an alternative to the PH model when that assumption is violated. The objective of this research was to compare the performance of the PH model and AFT models in analyzing the significant factors affecting the first birth interval (FBI) data in Indonesia. The discussion is limited to three AFT models, based on the Weibull, exponential, and log-normal distributions. Analysis using a graphical approach and a statistical test showed that non-proportional hazards exist in the FBI data set. Based on the Akaike information criterion (AIC), the log-normal AFT model was the most appropriate among the considered models. Results of the best-fitting model (the log-normal AFT model) showed that covariates such as the woman's educational level, the husband's educational level, contraceptive knowledge, access to mass media, wealth index, and employment status were among the factors affecting the FBI in Indonesia.
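
    A minimal sketch of this model comparison with the lifelines library, using simulated stand-ins for the FBI data (the covariate coding and effect sizes are invented):

      import numpy as np
      import pandas as pd
      from lifelines import WeibullAFTFitter, LogNormalAFTFitter

      rng = np.random.default_rng(5)
      n = 500
      education = rng.integers(0, 3, n)       # assumed ordinal coding
      wealth = rng.normal(size=n)
      duration = np.exp(2.0 + 0.3 * education + 0.2 * wealth
                        + rng.normal(0.0, 0.5, n))     # log-normal AFT truth
      df = pd.DataFrame({"duration": duration, "event": 1,
                         "education": education, "wealth": wealth})

      for Fitter in (WeibullAFTFitter, LogNormalAFTFitter):
          fit = Fitter().fit(df, duration_col="duration", event_col="event")
          print(Fitter.__name__, "AIC:", fit.AIC_)     # lower AIC is preferred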

  5. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    NASA Astrophysics Data System (ADS)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing better reflection on the valuation of water by the agricultural sector in particular. A model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub in eastern Morocco. The model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across the area under changing public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that most influence the model results, a parametric sensitivity analysis is performed using the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence, they are: i) the coefficient of crop yield response to water, ii) the average daily weight gain of livestock, iii) the rate of livestock reproduction, iv) the maximum crop yield, v) the irrigation water supply and vi) precipitation. These 6 parameters have sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainties in these parameters, which can dramatically skew the model results, and the need to pay particular attention to their estimation. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
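
    A minimal one-factor-at-a-time screening sketch of the kind described above: perturb each parameter around a baseline and record a normalized change in the objective. The parameter list mirrors the abstract, but the stand-in objective function is invented.

      import numpy as np

      names = ["yield_response", "daily_weight_gain", "reproduction_rate",
               "max_yield", "water_supply", "precipitation"]
      baseline = np.array([1.0, 0.8, 0.3, 5.0, 1000.0, 300.0])

      def gross_margin(p):
          # Placeholder objective; the real model simulates farm gross margin.
          return p[0] * p[3] * min(p[4] + p[5], 1200.0) / 1000.0 \
              + 2.0 * p[1] + p[2]

      f0 = gross_margin(baseline)
      for i, name in enumerate(names):
          p = baseline.copy()
          p[i] *= 1.10                                   # +10% perturbation
          s = abs(gross_margin(p) - f0) / (0.10 * abs(f0))
          print(f"{name:20s} sensitivity index {s:.2f}")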

  6. Ambiguities in model-independent partial-wave analysis

    NASA Astrophysics Data System (ADS)

    Krinner, F.; Greenwald, D.; Ryabchikov, D.; Grube, B.; Paul, S.

    2018-06-01

    Partial-wave analysis is an important tool for analyzing large data sets in hadronic decays of light and heavy mesons. It commonly relies on the isobar model, which assumes multihadron final states originate from successive two-body decays of well-known undisturbed intermediate states. Recently, analyses of heavy-meson decays and diffractively produced states have attempted to overcome the strong model dependences of the isobar model. These analyses have overlooked that model-independent, or freed-isobar, partial-wave analysis can introduce mathematical ambiguities in results. We show how these ambiguities arise and present general techniques for identifying their presence and for correcting for them. We demonstrate these techniques with specific examples in both heavy-meson decay and pion-proton scattering.

  7. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global sensitivity analysis results.

  8. [Influence of sample surface roughness on mathematical model of NIR quantitative analysis of wood density].

    PubMed

    Huang, An-Min; Fei, Ben-Hua; Jiang, Ze-Hui; Hse, Chung-Yun

    2007-09-01

    Near infrared spectroscopy is widely used as a quantitative method, and the main multivariate techniques are regression methods used to build prediction models; however, the accuracy of the analysis results is affected by many factors. In the present paper, the influence of different sample surface roughness on the mathematical model of NIR quantitative analysis of wood density was studied. The experiments showed that if the roughness of the prediction samples was consistent with that of the calibration samples, the results were good; otherwise, the error was much higher. The roughness-mixed model was more flexible and adaptable to different sample roughness, and its prediction ability was much better than that of the single-roughness model.
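    A calibration of the kind typically used for NIR quantitative models can be sketched with partial least squares regression; PLS is an assumption here (the abstract does not name the regression method), and the spectra and densities below are synthetic stand-ins.

    ```python
    # A minimal PLS calibration sketch for spectra -> density prediction.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(120, 700))                 # 120 spectra x 700 bands
    true_w = rng.normal(size=700)
    y = X @ true_w * 0.01 + rng.normal(0, 0.05, 120)  # synthetic wood density

    # Mixing samples of different surface roughness into the calibration set
    # would be the analog of the "roughness-mixed model" described above.
    X_cal, X_val, y_cal, y_val = train_test_split(X, y, random_state=0)
    pls = PLSRegression(n_components=10).fit(X_cal, y_cal)
    print("validation R^2:", pls.score(X_val, y_val))
    ```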

  9. Relating Factor Models for Longitudinal Data to Quasi-Simplex and NARMA Models

    ERIC Educational Resources Information Center

    Rovine, Michael J.; Molenaar, Peter C. M.

    2005-01-01

    In this article we show that the one-factor model can be rewritten as a quasi-simplex model. Using this result along with addition theorems from time series analysis, we describe a common general model, the nonstationary autoregressive moving average (NARMA) model, that includes as a special case any latent variable model with continuous indicators…

  10. Comparative transcriptome analysis reveals vertebrate phylotypic period during organogenesis

    PubMed Central

    Irie, Naoki; Kuratani, Shigeru

    2011-01-01

    One of the central issues in evolutionary developmental biology is how we can formulate the relationships between evolutionary and developmental processes. Two major models have been proposed: the 'funnel-like' model, in which the earliest embryo shows the most conserved morphological pattern, followed by diversifying later stages, and the 'hourglass' model, in which constraints are imposed to conserve organogenesis stages, which is called the phylotypic period. Here we perform a quantitative comparative transcriptome analysis of several model vertebrate embryos and show that the pharyngula stage is most conserved, whereas earlier and later stages are rather divergent. These results allow us to predict approximate developmental timetables between different species, and indicate that pharyngula embryos have the most conserved gene expression profiles, which may be the source of the basic body plan of vertebrates. PMID:21427719

  11. PDF-based heterogeneous multiscale filtration model.

    PubMed

    Gong, Jian; Rutland, Christopher J

    2015-04-21

    Motivated by modeling of gasoline particulate filters (GPFs), a probability density function (PDF) based heterogeneous multiscale filtration (HMF) model is developed to calculate the filtration efficiency of clean particulate filters. A new methodology based on statistical theory and classic filtration theory is developed in the HMF model. Based on the analysis of experimental porosimetry data, a pore size probability density function is introduced to represent the heterogeneity and multiscale characteristics of the porous wall. The filtration efficiency of a filter can be calculated as the sum of the contributions of individual collectors. The resulting HMF model overcomes the limitations of classic mean filtration models, which rely on tuning of the mean collector size. Sensitivity analysis shows that the HMF model recovers the classical mean model when the pore size variance is very small. The HMF model is validated by fundamental filtration experimental data from different scales of filter samples. The model shows good agreement with experimental data at various operating conditions. The effects of the microstructure of filters on filtration efficiency as well as the most penetrating particle size are correctly predicted by the model.

  12. Biomechanical analysis on fracture risk associated with bone deformity

    NASA Astrophysics Data System (ADS)

    Kamal, Nur Amalina Nadiah Mustafa; Som, Mohd Hanafi Mat; Basaruddin, Khairul Salleh; Daud, Ruslizam

    2017-09-01

    Osteogenesis Imperfecta (OI) is a disease related to bone deformity, also known as `brittle bone' disease. Currently, medical personnel predict bone fracture solely on the basis of experience. In this study, fracture risk was predicted using finite element analysis of a simulated OI femur. The main objective of this research was to analyze the fracture risk of OI-affected bone under various loadings. A total of 12 models of OI bone were developed by applying four load cases, and the angle of deformation for each model was calculated. The models were divided into four groups: standard, light, mild and severe. The results show that only a small load is required to increase the fracture risk of the bone when the model is tested under hopping conditions. The analysis also shows that torsional load has only a small effect on the fracture risk of the bone.

  13. Research on evaluating water resource resilience based on projection pursuit classification model

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Zhao, Dan; Liang, Xu; Wu, Qiuchen

    2016-03-01

    Water is a fundamental natural resource, and agricultural water use underpins grain output, so the utilization and management of water resources has significant practical importance. Regional agricultural water resource systems are unpredictable, self-organizing, and non-linear, which makes evaluating their resilience difficult. Current research on water resource resilience remains focused on qualitative analysis, and quantitative analysis is still at a primary stage; to address these issues, a projection pursuit classification model is put forward. The artificial fish-swarm algorithm (AFSA) is used to optimize the projection index function and seek the optimal projection direction, and AFSA itself is improved through a self-adaptive artificial fish step and crowding factor. Taking the Hongxinglong Administration of Heilongjiang as the study area, the improved-AFSA projection pursuit classification model was established to evaluate agricultural water resource system resilience, in addition to an analysis of a projection pursuit classification model based on an accelerating genetic algorithm. The research shows that the water resource resilience of Hongxinglong is the highest, followed by Raohe Farm, with 597 Farm last. Further analysis shows that the key driving factors influencing agricultural water resource resilience are precipitation and agricultural water consumption. The results reveal the restoration status of the local water resource system, providing a foundation for agricultural water resource management.

  14. Long-term creep characterization of Gr. 91 steel by modified creep constitutive equations

    NASA Astrophysics Data System (ADS)

    Kim, Woo-Gon; Kim, Sung-Ho; Lee, Chan-Bock

    2011-06-01

    This paper focuses on the long-term creep characterization of Gr. 91 steel using creep constitutive equations. Three such equations are proposed: a combination of power-law form and omega model (CPO), a combination of exponential form and omega model (CEO), and a combination of logarithmic form and omega model (CLO), each described as the sum of decaying primary creep and accelerating tertiary creep. A series of creep rupture data was obtained through creep tests with various applied loads at 600 °C. On the basis of the creep data, a nonlinear least-squares fitting (NLSF) analysis was carried out to provide the best fit to the experimental data when optimizing the parameter constants of each equation. The NLSF analysis showed that in the lower stress region below 160 MPa (σ/σys < 0.65), the CEO model matched the experimental creep data comparably to the CPO and CLO models; however, in the higher stress region above 160 MPa (σ/σys > 0.65), the CPO model showed better agreement than the other two models. The CEO model was found to be superior to the CPO and CLO models in the modeling of long-term creep curves. Using the CEO model, the long-term creep curves of Gr. 91 steel were numerically characterized, and its creep life was predicted accurately.
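    The NLSF step above can be sketched with a standard nonlinear least-squares fit; the generic "decaying primary + accelerating omega-type tertiary" form and the synthetic data below are illustrative assumptions, not the paper's exact CPO/CEO/CLO equations.

    ```python
    # A minimal NLSF sketch for a creep curve of CEO-like shape.
    import numpy as np
    from scipy.optimize import curve_fit

    def ceo_like(t, a, b, c, omega):
        # exponential primary term saturating at a, plus an omega-type
        # accelerating tertiary term
        return a * (1.0 - np.exp(-b * t)) + c * (np.exp(omega * t) - 1.0)

    t = np.linspace(0, 1000, 200)  # time, hours
    rng = np.random.default_rng(2)
    strain = ceo_like(t, 0.01, 0.02, 1e-4, 0.004) + rng.normal(0, 2e-4, t.size)

    popt, _ = curve_fit(ceo_like, t, strain, p0=[0.01, 0.01, 1e-4, 0.003])
    print("fitted parameters:", popt)
    ```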

  15. A FAST BAYESIAN METHOD FOR UPDATING AND FORECASTING HOURLY OZONE LEVELS

    EPA Science Inventory

    A Bayesian hierarchical space-time model is proposed by combining information from real-time ambient AIRNow air monitoring data, and output from a computer simulation model known as the Community Multi-scale Air Quality (Eta-CMAQ) forecast model. A model validation analysis shows...

  16. Coupled attenuation and multiscale damage model for composite structures

    NASA Astrophysics Data System (ADS)

    Moncada, Albert M.; Chattopadhyay, Aditi; Bednarcyk, Brett; Arnold, Steven M.

    2011-04-01

    Composite materials are widely used in many applications for their high strength, low weight, and tailorability for specific applications. However, the development of robust and reliable methodologies to detect micro level damage in composite structures has been challenging. For composite materials, attenuation of ultrasonic waves propagating through the media can be used to determine damage within the material. Currently available numerical solutions for attenuation induce arbitrary damage, such as fiber-matrix debonding or inclusions, to show variations between healthy and damaged states. This paper addresses this issue by integrating a micromechanics analysis to simulate damage in the form of a fiber-matrix crack and an analytical model for calculating the attenuation of the waves when they pass through the damaged region. The hybrid analysis is validated by comparison with experimental stress-strain curves and piezoelectric sensing results for attenuation measurement. The results showed good agreement between the experimental stress-strain curves and the results from the micromechanics analysis. Wave propagation analysis also showed good correlation between simulation and experiment for the tested frequency range.

  17. Informations in Models of Evolutionary Dynamics

    NASA Astrophysics Data System (ADS)

    Rivoire, Olivier

    2016-03-01

    Biological organisms adapt to changes by processing information from different sources, most notably from their ancestors and from their environment. We review an approach to quantify this information by analyzing mathematical models of evolutionary dynamics and show how explicit results are obtained for a solvable subclass of these models. In several limits, the results coincide with those obtained in studies of information processing for communication, gambling or thermodynamics. In the most general case, however, information processing by biological populations shows unique features that motivate the analysis of specific models.

  18. Empirical intrinsic geometry for nonlinear modeling and time series filtering.

    PubMed

    Talmon, Ronen; Coifman, Ronald R

    2013-07-30

    In this paper, we present a method for time series analysis based on empirical intrinsic geometry (EIG). EIG enables one to reveal the low-dimensional parametric manifold as well as to infer the underlying dynamics of high-dimensional time series. By incorporating concepts of information geometry, this method extends existing geometric analysis tools to support stochastic settings and parametrizes the geometry of empirical distributions. However, no statistical models are required as priors; hence, EIG may be applied to a wide range of real signals that lack definitive models. We show that the inferred model is noise-resilient and invariant under different observation and instrumental modalities. In addition, we show that it can be extended efficiently to newly acquired measurements in a sequential manner. These two advantages enable us to revisit the Bayesian approach and incorporate empirical dynamics and intrinsic geometry into a nonlinear filtering framework. We show applications to nonlinear and non-Gaussian tracking problems as well as to acoustic signal localization.

  19. Generation of High Frequency Response in a Dynamically Loaded, Nonlinear Soil Column

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spears, Robert Edward; Coleman, Justin Leigh

    2015-08-01

    Detailed guidance on linear seismic analysis of soil columns is provided in “Seismic Analysis of Safety-Related Nuclear Structures and Commentary (ASCE 4, 1998),” which is currently under revision. A new Appendix in ASCE 4-2014 (draft) is being added to provide guidance for nonlinear time domain analysis which includes evaluation of soil columns. When performing linear analysis, a given soil column is typically evaluated with a linear, viscous damped constitutive model. When submitted to a sine wave motion, this constitutive model produces a smooth hysteresis loop. For nonlinear analysis, the soil column can be modelled with an appropriate nonlinear hysteretic soil model. For the model in this paper, the stiffness and energy absorption result from a defined post yielding shear stress versus shear strain curve. This curve is input with tabular data points. When submitted to a sine wave motion, this constitutive model produces a hysteresis loop that looks similar in shape to the input tabular data points on the sides with discontinuous, pointed ends. This paper compares linear and nonlinear soil column results. The results show that the nonlinear analysis produces additional high frequency response. The paper provides additional study to establish what portion of the high frequency response is due to numerical noise associated with the tabular input curve and what portion is accurately caused by the pointed ends of the hysteresis loop. Finally, the paper shows how the results are changed when a significant structural mass is added to the top of the soil column.

  20. Earth Observatory Satellite system definition study. Report 1: Orbit/launch vehicle trade-off studies and recommendations

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A summary of the constraints and requirements on the Earth Observatory Satellite (EOS-A) orbit and launch vehicle analysis is presented. The propulsion system (hydrazine) and the launch vehicle (Delta 2910) selected for EOS-A are examined. The rationale for the selection of the recommended orbital altitude of 418 nautical miles is explained. The original analysis was based on the EOS-A mission with the Thematic Mapper and the High Resolution Pointable Imager. The impact of the revised mission model is analyzed to show how the new mission model affects the previously defined propulsion system, launch vehicle, and orbit. A table is provided to show all aspects of the EOS multiple mission concepts. The subjects considered include the following: (1) mission orbit analysis, (2) spacecraft parametric performance analysis, (3) launch system performance analysis, and (4) orbits/launch vehicle selection.

  1. In vitro burn model illustrating heat conduction patterns using compressed thermal papers.

    PubMed

    Lee, Jun Yong; Jung, Sung-No; Kwon, Ho

    2015-01-01

    To date, heat conduction from heat sources to tissue has been estimated by complex mathematical modeling. In the present study, we developed an intuitive in vitro skin burn model that illustrates heat conduction patterns inside the skin. The model was composed of tightly compressed thermal papers held in compression frames. Heat flow through the model left a trace by changing the color of the thermal papers; these traces were digitized and three-dimensionally reconstituted to reproduce the heat conduction patterns in the skin. For standardization, we validated K91HG-CE thermal paper using a printout test and bivariate correlation analysis. We measured the papers' physical properties and calculated the estimated depth of heat conduction using Fourier's equation. We validated our model through contact burns of 5, 10, 15, 20, and 30 seconds on porcine skin and on our burn model using a heated brass comb, comparing the burn wounds with the heat conduction traces. The heat conduction pattern correlation analysis (intraclass correlation coefficient: 0.846, p < 0.001) and the heat conduction depth correlation analysis (intraclass correlation coefficient: 0.93, p < 0.001) showed statistically significant, high correlations between the porcine burn wounds and our model. Our model showed good correlation with porcine skin burn injury and replicated its heat conduction patterns. © 2014 by the Wound Healing Society.

  2. Conclusion of LOD-score analysis for family data generated under two-locus models.

    PubMed

    Dizier, M H; Babron, M C; Clerget-Darpoux, F

    1996-06-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for a MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing for the familial disease distribution provides evidence of parameter misspecification, when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that a strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker.

  3. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
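    The core idea above (resampling observed data inside a Monte Carlo loop instead of drawing inputs from assumed theoretical distributions) can be sketched in a few lines; the trial outcomes, costs, and the cost-per-eradication metric below are hypothetical stand-ins, not the article's H. pylori model.

    ```python
    # A minimal probabilistic sensitivity analysis sketch with the bootstrap.
    import numpy as np

    rng = np.random.default_rng(3)
    eradication = rng.binomial(1, 0.85, size=200)        # observed outcomes
    costs = rng.gamma(shape=2.0, scale=150.0, size=200)  # observed costs

    results = []
    for _ in range(2000):                                # Monte Carlo iterations
        # bootstrap: resample the observed data with replacement
        idx = rng.integers(0, 200, size=200)
        p_erad = eradication[idx].mean()
        mean_cost = costs[idx].mean()
        results.append(mean_cost / p_erad)               # cost per eradication

    lo, hi = np.percentile(results, [2.5, 97.5])
    print(f"cost per eradication: 95% interval ({lo:.0f}, {hi:.0f})")
    ```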

  4. Determinants of Educational Achievement in Morocco: A Micro-Econometric Analysis Applied to the TIMSS Study

    ERIC Educational Resources Information Center

    Ibourk, Aomar

    2013-01-01

    Based on data from international surveys measuring learning (TIMSS), this article focuses on the analysis of the academic performance of Moroccan students. The results of the econometric model show that the students' characteristics, their family environment and school context are key determinants of these performances. The study also shows that the…

  5. TWO-STAGE FRAGMENTATION FOR CLUSTER FORMATION: ANALYTICAL MODEL AND OBSERVATIONAL CONSIDERATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Nicole D.; Basu, Shantanu, E-mail: nwityk@uwo.ca, E-mail: basu@uwo.ca

    2012-12-10

    Linear analysis of the formation of protostellar cores in planar magnetic interstellar clouds shows that molecular clouds exhibit a preferred length scale for collapse that depends on the mass-to-flux ratio and neutral-ion collision time within the cloud. We extend this linear analysis to the context of clustered star formation. By combining the results of the linear analysis with a realistic ionization profile for the cloud, we find that a molecular cloud may evolve through two fragmentation events in the evolution toward the formation of stars. Our model suggests that the initial fragmentation into clumps occurs for a transcritical cloud on parsec scales while the second fragmentation can occur for transcritical and supercritical cores on subparsec scales. Comparison of our results with several star-forming regions (Perseus, Taurus, Pipe Nebula) shows support for a two-stage fragmentation model.

  6. Turing pattern dynamics and adaptive discretization for a super-diffusive Lotka-Volterra model.

    PubMed

    Bendahmane, Mostafa; Ruiz-Baier, Ricardo; Tian, Canrong

    2016-05-01

    In this paper we analyze the effects of introducing the fractional-in-space operator into a Lotka-Volterra competitive model describing population super-diffusion. First, we study how cross super-diffusion influences the formation of spatial patterns: a linear stability analysis is carried out, showing that cross super-diffusion triggers Turing instabilities, whereas classical (self) super-diffusion does not. In addition we perform a weakly nonlinear analysis yielding a system of amplitude equations, whose study shows the stability of Turing steady states. A second goal of this contribution is to propose a fully adaptive multiresolution finite volume method that employs shifted Grünwald gradient approximations, and which is tailored for a larger class of systems involving fractional diffusion operators. The scheme is aimed at efficient dynamic mesh adaptation and substantial savings in computational burden. A numerical simulation of the model was performed near the instability boundaries, confirming the behavior predicted by our analysis.
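    The shifted Grünwald approximation named above can be sketched for a one-dimensional fractional diffusion operator; the grid, fractional order, time stepping, and boundary treatment below are illustrative assumptions, not the paper's adaptive multiresolution scheme.

    ```python
    # A minimal shifted Grünwald-Letnikov sketch for u_t = d * D^alpha u.
    import numpy as np

    alpha = 1.6                     # fractional order, 1 < alpha < 2
    N, dt, d = 200, 1e-5, 0.1
    h = 1.0 / N

    # Grünwald weights g_k = (-1)^k * C(alpha, k) via the standard recurrence
    g = np.empty(N + 2)
    g[0] = 1.0
    for k in range(1, N + 2):
        g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)

    u = np.exp(-100 * (np.linspace(0, 1, N) - 0.5) ** 2)  # initial bump

    def frac_derivative(u):
        # shifted Grünwald: D^alpha u(x_i) ~ h^-alpha * sum_k g_k * u_{i-k+1};
        # boundary nodes are held fixed (illustrative Dirichlet treatment)
        out = np.zeros_like(u)
        for i in range(1, N - 1):
            acc = 0.0
            for k in range(i + 2):
                acc += g[k] * u[i - k + 1]
            out[i] = acc / h ** alpha
        return out

    for _ in range(100):            # explicit Euler steps (small dt)
        u = u + dt * d * frac_derivative(u)
    print("mass after 100 steps:", u.sum() * h)
    ```

    For alpha = 2 the weights reduce to the familiar [1, -2, 1] stencil, which is a quick sanity check on the recurrence.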

  7. Complete Hamiltonian analysis of cosmological perturbations at all orders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2016-06-01

    In this work, we present a consistent Hamiltonian analysis of cosmological perturbations at all orders. To make the procedure transparent, we consider a simple model and resolve the 'gauge-fixing' issues, then extend the analysis to scalar field models and show that our approach can be applied to any order of perturbation for any first-order derivative fields. In the case of Galilean scalar fields, our procedure can extract constrained relations at all orders in perturbations, leading to the fact that there are no extra degrees of freedom due to the presence of higher time derivatives of the field in the Lagrangian. We compare and contrast our approach to the Lagrangian approach (Chen et al. [2006]) for extracting higher order correlations and show that our approach is efficient and robust and can be applied to any model of gravity and matter fields without invoking the slow-roll approximation.

  8. Complex time series analysis of PM10 and PM2.5 for a coastal site using artificial neural network modelling and k-means clustering

    NASA Astrophysics Data System (ADS)

    Elangasinghe, M. A.; Singhal, N.; Dirks, K. N.; Salmond, J. A.; Samarasinghe, S.

    2014-09-01

    This paper uses artificial neural networks (ANN), combined with k-means clustering, to understand the complex time series of PM10 and PM2.5 concentrations at a coastal location of New Zealand based on data from a single site. Out of the available meteorological parameters from the network (wind speed, wind direction, solar radiation, temperature, relative humidity), key factors governing the pattern of the time series concentrations were identified through input sensitivity analysis performed on the trained neural network model. The transport pathways of particulate matter under these key meteorological parameters were further analysed through bivariate concentration polar plots and k-means clustering techniques. The analysis shows that external sources such as marine aerosols and local sources such as traffic and biomass burning contribute equally to the particulate matter concentrations at the study site. These results are in agreement with the results of receptor modelling by the Auckland Council based on Positive Matrix Factorization (PMF). Our findings also show that contrasting concentration-wind speed relationships exist between marine aerosols and local traffic sources, resulting in very noisy, seemingly random PM10 concentrations. The inclusion of cluster rankings as an input parameter to the ANN model showed a statistically significant (p < 0.005) improvement in the performance of the ANN time series model and also showed better performance in picking up high concentrations. For the presented case study, the correlation coefficient between observed and predicted concentrations improved from 0.77 to 0.79 for PM2.5 and from 0.63 to 0.69 for PM10, while the root mean squared error (RMSE) was reduced from 5.00 to 4.74 for PM2.5 and from 6.77 to 6.34 for PM10. The techniques presented here enable the user to obtain an understanding of potential sources and their transport characteristics prior to the implementation of costly chemical analysis techniques or advanced air dispersion models.
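    The key modeling trick above (feeding a k-means cluster label into the network alongside the meteorological inputs) can be sketched as follows; the synthetic data and network architecture are illustrative assumptions, not the study's configuration.

    ```python
    # A minimal sketch: k-means cluster label as an extra ANN input feature.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(4)
    met = rng.normal(size=(2000, 5))   # wind speed/direction, radiation, T, RH
    pm10 = met @ np.array([1.5, -0.5, 0.2, 0.3, 0.1]) + rng.normal(0, 0.5, 2000)

    clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(met)
    X = np.column_stack([met, clusters])          # cluster rank as 6th input

    X_tr, X_te, y_tr, y_te = train_test_split(X, pm10, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X_tr, y_tr)
    print("test R^2 with cluster input:", ann.score(X_te, y_te))
    ```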

  9. Modeling Pumped Thermal Energy Storage with Waste Heat Harvesting

    NASA Astrophysics Data System (ADS)

    Abarr, Miles L. Lindsey

    This work introduces a new concept for a utility-scale combined energy storage and generation system. The proposed design utilizes a pumped thermal energy storage (PTES) system that also harvests waste heat leaving a natural gas peaker plant. By leveraging this dual functionality, the system creates a low-cost utility-scale energy storage system. This dissertation first presents a review of previous work in PTES as well as the details of the proposed integrated bottoming and energy storage system. A time-domain system model was developed in Mathworks R2016a Simscape and Simulink software to analyze this system. Validation of both the fluid state model and the thermal energy storage model is provided. The experimental results showed that the average error in cumulative fluid energy between simulation and measurement was +/- 0.3% per hour. Comparison to a Finite Element Analysis (FEA) model showed <1% error for bottoming mode heat transfer. The system model was used to assess the sensitivity, baseline performance, and levelized cost of energy of a recently proposed Pumped Thermal Energy Storage and Bottoming System (Bot-PTES) that uses ammonia as the working fluid. This analysis focused on the effects of hot thermal storage utilization, system pressure, and evaporator/condenser size on the system performance. This work presents the estimated performance for a proposed baseline Bot-PTES. Results of this analysis showed that all selected parameters had significant effects on efficiency, with the evaporator/condenser size having the largest effect over the selected ranges. Results for the baseline case showed stand-alone energy storage efficiencies between 51 and 66% for varying power levels and charge states, and a stand-alone bottoming efficiency of 24%. The resulting efficiencies for this case were low compared to competing technologies; however, the dual functionality of the Bot-PTES enables it to have a higher capacity factor, leading to a levelized cost of energy of $91-197/MWh, compared to $262-284/MWh for batteries and $172-254/MWh for Compressed Air Energy Storage.

  10. Development of Semi-distributed ecohydrological model in the Rio Grande De Manati River Basin, Puerto Rico

    NASA Astrophysics Data System (ADS)

    Setegn, S. G.; Ortiz, J.; Melendez, J.; Barreto, M.; Torres-Perez, J. L.; Guild, L. S.

    2015-12-01

    There are limited studies in Puerto Rico showing water resources availability and variability under changing climate and land use. The main goal of the NASA-funded project "Human Impacts to Coastal Ecosystems in Puerto Rico (HICE-PR): the Río Loco Watershed (southwest coast PR)" is to evaluate the impacts of land use/land cover changes on the quality and extent of coastal and marine ecosystems (CMEs) in two priority watersheds in Puerto Rico (Manatí and Guánica). The main objective of this study is to set up a physically based, spatially distributed hydrological model, the Soil and Water Assessment Tool (SWAT), for the analysis of hydrological processes in the Rio Grande de Manati river basin. SWAT is a spatially distributed watershed model developed to predict the impact of land management practices on water, sediment and agricultural chemical yields in large, complex watersheds. For efficient use of distributed models in hydrological and scenario analysis, it is important that the models pass through careful calibration and uncertainty analysis. The model was calibrated and validated using the Sequential Uncertainty Fitting (SUFI-2) calibration and uncertainty analysis algorithm. The model evaluation statistics for streamflow prediction show good agreement between the measured and simulated flows, verified by coefficients of determination and Nash-Sutcliffe efficiencies greater than 0.5. Keywords: Hydrological Modeling; SWAT; SUFI-2; Rio Grande De Manati; Puerto Rico

  11. Comparative analysis of stress in a new proposal of dental implants.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1 - conventional EH cylindrical implant (Ø 4.0mm×11mm - Neodent®), Model 2 - modified EH cylindrical implant, Model 3 - conventional MT conical implant (Ø 4.3mm×10mm - Neodent®) and Model 4 - modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150N were applied to devices coupled to the implants. A plane transmission polariscope was used to analyze the fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) at each selected point was calculated. The results showed lower stress concentration in the modified cylindrical implant (EH) than in the conventional model under the 150N axial and 100N oblique loads. Lower stress was also observed for the modified conical (MT) implant under the 100 and 150N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Analysis of mortality data from the former USSR: age-period-cohort analysis.

    PubMed

    Willekens, F; Scherbov, S

    1992-01-01

    The objective of this article is to review research on age-period-cohort (APC) analysis of mortality and to trace the effects of contemporary and historical factors on mortality change in the former USSR. Several events in USSR history have exerted a lasting influence on its people. These influences may be captured by an APC model in which the period effects measure the impact of contemporary factors and the cohort effects capture the past history of individuals that cannot be attributed to age or stage in the life cycle. APC models are extensively applied in the study of mortality. This article presents the statistical theory of APC models and shows that they belong to the family of generalized linear models. The parameters of an APC model may therefore be estimated by any package for loglinear analysis that allows for hybrid loglinear models.
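    Since the abstract notes that APC models are generalized linear models, the estimation can be sketched as a Poisson loglinear GLM; the mortality counts below are synthetic. Note that including all three of age, period, and cohort requires an extra identifying constraint (cohort = period - age), so only age and period terms are fitted here to keep the sketch well-posed.

    ```python
    # A minimal Poisson GLM sketch for age-period mortality analysis.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    grid = [(a, p) for a in range(6) for p in range(6)]
    df = pd.DataFrame(grid, columns=["age", "period"])
    rate = np.exp(-4.0 + 0.4 * df["age"] + 0.05 * df["period"])
    df["exposure"] = 10000.0
    df["deaths"] = rng.poisson(rate * df["exposure"])

    # loglinear model with exposure offset, as in standard mortality GLMs
    res = smf.glm("deaths ~ C(age) + C(period)", data=df,
                  family=sm.families.Poisson(),
                  offset=np.log(df["exposure"])).fit()
    print(res.params.head())
    ```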

  13. Modeling error distributions of growth curve models through Bayesian methods.

    PubMed

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, the loss in efficiency of standard error estimates can be avoided. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.

  14. Ozone Temporal Variability in the Subarctic Region: Comparison of Satellite Measurements with Numerical Simulations

    NASA Astrophysics Data System (ADS)

    Shved, G. M.; Virolainen, Ya. A.; Timofeyev, Yu. M.; Ermolenko, S. I.; Smyshlyaev, S. P.; Motsakov, M. A.; Kirner, O.

    2018-01-01

    Fourier and wavelet spectra of time series for the ozone column abundance in the atmospheric 0-25 and 25-60 km layers are analyzed from SBUV satellite observations and from numerical simulations based on the RSHU and EMAC models. The analysis uses datasets for three subarctic locations (St. Petersburg, Harestua, and Kiruna) for 2000-2014. The Fourier and wavelet spectra show periodicities in the range from 10 days to 10 years and from 1 day to 2 years, respectively. The comparison of the spectra shows overall agreement between the observational and modeled datasets. However, the analysis has revealed differences both between the measurements and the models and between the models themselves. The differences primarily concern the Rossby wave period region and the 11-year and semiannual periodicities. Possible reasons are given for the differences between the models and the measurements.
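    As a rough illustration of the Fourier-spectrum step applied to the ozone series above, the sketch below computes a periodogram of a synthetic daily series; the annual and semiannual components are invented stand-ins for the SBUV and model data.

    ```python
    # A minimal periodogram sketch for detecting periodicities in a series.
    import numpy as np

    rng = np.random.default_rng(6)
    days = np.arange(15 * 365)                           # ~15 years, daily
    series = (10 * np.sin(2 * np.pi * days / 365.25)     # annual cycle
              + 3 * np.sin(2 * np.pi * days / 182.6)     # semiannual cycle
              + rng.normal(0, 1, days.size))

    spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
    freqs = np.fft.rfftfreq(days.size, d=1.0)            # cycles per day

    top = np.argsort(spec)[-2:]                          # two strongest peaks
    for k in sorted(top):
        print(f"peak at period {1.0 / freqs[k]:.1f} days")
    ```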

  15. Bayesian structural equation modeling: a more flexible representation of substantive theory.

    PubMed

    Muthén, Bengt; Asparouhov, Tihomir

    2012-09-01

    This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.

  16. CP Asymmetries in B0 Decays Beyond the Standard Model

    NASA Astrophysics Data System (ADS)

    Dib, Claudio O.; London, David; Nir, Yosef

    Of the many ingredients of the Standard Model that are relevant to the analysis of CP asymmetries in B0 decays, some are likely to hold even beyond the Standard Model while others are sensitive to new physics. Consequently, certain predictions are maintained while others may show dramatic deviations from the Standard Model. Many classes of models may show clear signatures when the asymmetries are measured: four quark generations, Z-mediated flavor-changing neutral currents, supersymmetry and “real superweak” models. On the other hand, models of left-right symmetry and multi-Higgs sectors with natural flavor conservation are unlikely to modify the Standard Model predictions.

  17. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.

  18. Premium analysis for copula model: A case study for Malaysian motor insurance claims

    NASA Astrophysics Data System (ADS)

    Resti, Yulia; Ismail, Noriszura; Jaaman, Saiful Hafizah

    2014-06-01

    This study performs premium analysis for copula models with regression marginals. For illustration, the copula models are fitted to Malaysian motor insurance claims data. We consider copula models from the Archimedean and Elliptical families, with marginal distributions from Gamma and Inverse Gaussian regression models. The simulated results from the independent model, obtained by fitting regression models separately to each claim category, and the dependent model, obtained by fitting copula models to all claim categories, are compared. The results show that the dependent model using the Frank copula is the best model, since the risk premiums estimated under this model are the closest to the actual claims experience relative to the other copula models.
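    Simulating dependent claims from a Frank copula with Gamma marginals can be sketched via the standard conditional-inverse method; the dependence parameter and the Gamma parameters below are illustrative, not the fitted Malaysian values.

    ```python
    # A minimal Frank-copula simulation sketch with Gamma marginals.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    theta = 4.0                                  # Frank dependence parameter
    n = 10000

    # conditional inverse: u ~ U(0,1), then solve C(v|u) = t for t ~ U(0,1)
    u = rng.uniform(size=n)
    t = rng.uniform(size=n)
    num = t * (1.0 - np.exp(-theta))
    den = t * np.expm1(-theta * u) - np.exp(-theta * u)
    v = -np.log1p(num / den) / theta

    # map uniforms through Gamma marginals (stand-ins for the regression
    # marginals described in the abstract)
    claim_a = stats.gamma.ppf(u, a=2.0, scale=500.0)
    claim_b = stats.gamma.ppf(v, a=1.5, scale=800.0)
    print("rank correlation:", stats.spearmanr(claim_a, claim_b).correlation)
    ```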

  19. Verification and validation of a Work Domain Analysis with turing machine task analysis.

    PubMed

    Rechard, J; Bignon, A; Berruet, P; Morineau, T

    2015-03-01

    While the use of Work Domain Analysis as a methodological framework in cognitive engineering is increasing rapidly, verification and validation of work domain models produced by this method are becoming a significant issue. In this article, we propose the use of a method based on Turing machine formalism named "Turing Machine Task Analysis" to verify and validate work domain models. The application of this method on two work domain analyses, one of car driving which is an "intentional" domain, and the other of a ship water system which is a "causal domain" showed the possibility of highlighting improvements needed by these models. More precisely, the step by step analysis of a degraded task scenario in each work domain model pointed out unsatisfactory aspects in the first modelling, like overspecification, underspecification, omission of work domain affordances, or unsuitable inclusion of objects in the work domain model. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  20. Modelling of groundwater quality using bicarbonate chemical parameter in Netravathi and Gurpur river confluence, India

    NASA Astrophysics Data System (ADS)

Sylus, K. J.; Ramesh, H.

    2018-04-01

    In coastal aquifers, seawater intrusion is considered the major problem, contaminating freshwater and reducing its quality for domestic use. Groundwater quality analysis of different chemical parameters is the basic method for detecting seawater intrusion, and here it was carried out following the Bureau of Indian Standards (2012) and World Health Organization (1996) guidelines. In this study, the bicarbonate parameter, with a permissible limit of 200-600 mg/l, was considered for the groundwater quality analysis. The groundwater system was modelled using the groundwater modelling software GMS, in which the FEMWATER package, based on the finite element method, was used for flow and transport. The base input data of the model include the elevation, groundwater head, and first and second bottoms of the study area. The modelling results show the spatial occurrence of contamination in the study area at the confluence of the Netravathi and Gurpur rivers at various time periods. Further, the results also show that the contamination extends up to a distance of 519m toward the freshwater zone of the study area.

  1. Application of logistic regression to case-control association studies involving two causative loci.

    PubMed

    North, Bernard V; Curtis, David; Sham, Pak C

    2005-01-01

    Models in which two susceptibility loci jointly influence the risk of developing disease can be explored using logistic regression analysis. Comparison of likelihoods of models incorporating different sets of disease model parameters allows inferences to be drawn regarding the nature of the joint effect of the loci. We have simulated case-control samples generated assuming different two-locus models and then analysed them using logistic regression. We show that this method is practicable and that, for the models we have used, it can be expected to allow useful inferences to be drawn from sample sizes consisting of hundreds of subjects. Interactions between loci can be explored, but interactive effects do not exactly correspond with classical definitions of epistasis. We have particularly examined the issue of the extent to which it is helpful to utilise information from a previously identified locus when investigating a second, unknown locus. We show that for some models conditional analysis can have substantially greater power while for others unconditional analysis can be more powerful. Hence we conclude that in general both conditional and unconditional analyses should be performed when searching for additional loci.
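    The nested-model comparison described above can be sketched directly: fit logistic regressions with and without a locus-by-locus interaction term and compare their likelihoods. The genotype data below are simulated under an assumed two-locus model, not the article's simulation design.

    ```python
    # A minimal two-locus logistic-regression sketch with an LR test.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 800
    g1 = rng.binomial(2, 0.3, n)          # risk-allele counts at locus 1
    g2 = rng.binomial(2, 0.2, n)          # risk-allele counts at locus 2
    logit = -2.0 + 0.5 * g1 + 0.4 * g2 + 0.6 * g1 * g2
    y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X_main = sm.add_constant(np.column_stack([g1, g2]))
    X_full = sm.add_constant(np.column_stack([g1, g2, g1 * g2]))
    fit_main = sm.Logit(y, X_main).fit(disp=0)
    fit_full = sm.Logit(y, X_full).fit(disp=0)

    # likelihood-ratio test for the interaction term (1 df)
    lr = 2 * (fit_full.llf - fit_main.llf)
    print("LR statistic for interaction:", round(lr, 2))
    ```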

  2. Regional surface soil heat flux estimate from multiple remote sensing data in a temperate and semiarid basin

    NASA Astrophysics Data System (ADS)

    Li, Nana; Jia, Li; Lu, Jing; Menenti, Massimo; Zhou, Jie

    2017-01-01

    Estimation of the regional surface soil heat flux (G0) is very important for large-scale land surface process modeling. However, most regional G0 estimation methods are based on an empirical relationship between G0 and the net radiation flux. A physical model based on harmonic analysis (referred to as the "HM model") was improved and applied over the Heihe River Basin in northwest China with multiple remote sensing data, e.g., FY-2C, AMSR-E, and MODIS, and soil map data. A sensitivity analysis of the model was performed as well. The results show that the improved model describes the variation of G0 well. Land surface temperature (LST) and thermal inertia (Γ) are the two key input variables of the HM model. Compared with in situ G0, there are some differences, mainly due to differences between the remotely sensed LST and the in situ LST. The sensitivity analysis shows that errors from -7 to -0.5 K in LST amplitude and from -300 to 300 J m-2 K-1 s-0.5 in Γ cause about 20% error, which is acceptable for G0 estimation.

  3. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison, an autoregressive moving average model, a random walk process and the naive model are used as reference models. To show the feasibility of the presented hybrid models combining multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-Month, 6-Month and 1-Year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  4. Uncertainty Quantification Analysis of Both Experimental and CFD Simulation Data of a Bench-scale Fluidized Bed Gasifier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahnam, Mehrdad; Gel, Aytekin; Subramaniyan, Arun K.

    Adequate assessment of the uncertainties in modeling and simulation is becoming an integral part of simulation-based engineering design. The goal of this study is to demonstrate the application of a non-intrusive Bayesian uncertainty quantification (UQ) methodology in multiphase (gas-solid) flows with experimental and simulation data, as part of our research efforts to determine the approach best suited for UQ of a bench-scale fluidized bed gasifier. UQ analysis was first performed on the available experimental data. Global sensitivity analysis performed as part of the UQ analysis shows that among the three operating factors, the steam-to-oxygen ratio has the most influence on syngas composition in the bench-scale gasifier experiments. An analysis of the forward propagation of uncertainties was performed, and the results show that an increase in the steam-to-oxygen ratio leads to an increase in H2 mole fraction and a decrease in CO mole fraction. These findings are in agreement with the ANOVA analysis performed in the reference experimental study. Another contribution, in addition to the UQ analysis, is an optimization-based approach to identify the next best set of experimental samples, should the possibility of additional experiments arise; the surrogate models constructed as part of the UQ analysis are employed to improve the information gain and make incremental recommendations in that case. In the second step, a series of simulations was carried out with the open-source computational fluid dynamics software MFiX to reproduce the experimental conditions, where three operating factors, i.e., coal flow rate, coal particle diameter, and steam-to-oxygen ratio, were systematically varied to understand their effect on the syngas composition. Bayesian UQ analysis was performed on the numerical results. As part of the Bayesian UQ analysis, a global sensitivity analysis was performed based on the simulation results, which shows that the predicted syngas composition is strongly affected not only by the steam-to-oxygen ratio (as observed in the experiments) but also by variations in the coal flow rate and particle diameter (which was not observed in experiments). The carbon monoxide mole fraction is underpredicted at lower steam-to-oxygen ratios and overpredicted at higher steam-to-oxygen ratios; the opposite trend is observed for the carbon dioxide mole fraction. These discrepancies are attributed either to excessive segregation of the phases, which leads to fuel-rich or fuel-lean regions, or to the selection of reaction models, where different reaction models and kinetics can lead to different syngas compositions throughout the gasifier. To improve the quality of the numerical models used, the effects that uncertainties in the reaction models for gasification, char oxidation, carbon monoxide oxidation, and water gas shift have on the syngas composition were investigated at different grid resolutions, along with bed temperature. The global sensitivity analysis showed that among the various reaction models employed for water gas shift, gasification, and char oxidation, the choice of reaction model for water gas shift has the greatest influence on syngas composition, with the gasification reaction model second. Syngas composition also shows a small sensitivity to the temperature of the bed. The hydrodynamic behavior of the bed did not change beyond a grid spacing of 18 times the particle diameter. However, the syngas concentration continued to be affected by the grid resolution down to 9 times the particle diameter. This is due to better resolution of the phasic interface between the gases and solids, which leads to stronger heterogeneous reactions. This report is a compilation of three manuscripts published in peer-reviewed journals for the series of studies mentioned above.

  5. Frame synchronization performance and analysis

    NASA Technical Reports Server (NTRS)

    Aguilera, C. S. R.; Swanson, L.; Pitt, G. H., III

    1988-01-01

    The analysis used to generate the theoretical models showing the performance of the frame synchronizer is described for various frame lengths and marker lengths at various signal-to-noise ratios and bit error tolerances.

  6. Application of clustering analysis in the prediction of photovoltaic power generation based on neural network

    NASA Astrophysics Data System (ADS)

    Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.

    2017-11-01

    In order to select effective samples from many years of PV power generation data and improve the accuracy of PV power generation forecasting models, this paper studies the application of clustering analysis in this field and establishes a forecasting model based on a neural network. Based on three weather types (sunny, cloudy, and rainy days), samples of historical data are screened by the clustering analysis method. After screening, BP neural network prediction models are established using the screened data as training data. The six photovoltaic power generation prediction models, before and after data screening, are then compared. Results show that a prediction model combining clustering analysis with BP neural networks is an effective method to improve the precision of photovoltaic power generation forecasting.

  7. Nonlinear dynamic mechanism of vocal tremor from voice analysis and model simulations

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Jiang, Jack J.

    2008-09-01

    Nonlinear dynamic analysis and model simulations are used to study the nonlinear dynamic characteristics of vocal folds with vocal tremor, which can typically be characterized by low-frequency modulation and aperiodicity. Tremor voices from patients with disorders such as paresis, Parkinson's disease, hyperfunction, and adductor spasmodic dysphonia show low-dimensional characteristics, differing from random noise. Correlation dimension analysis statistically distinguishes tremor voices from normal voices. Furthermore, a nonlinear tremor model is proposed to study the vibrations of the vocal folds with vocal tremor. Fractal dimensions and positive Lyapunov exponents demonstrate the evidence of chaos in the tremor model, where amplitude and frequency play important roles in governing vocal fold dynamics. Nonlinear dynamic voice analysis and vocal fold modeling may provide a useful set of tools for understanding the dynamic mechanism of vocal tremor in patients with laryngeal diseases.

  8. Computer Analysis of Air Pollution from Highways, Streets, and Complex Interchanges

    DOT National Transportation Integrated Search

    1974-03-01

    A detailed computer analysis of air quality for a complex highway interchange was prepared, using an in-house version of the Environmental Protection Agency's Gaussian Highway Line Source Model. This analysis showed that the levels of air pollution n...

  9. Structural Identifiability of Dynamic Systems Biology Models

    PubMed Central

    Villaverde, Alejandro F.

    2016-01-01

    A powerful way of gaining insight into biological systems is by creating a nonlinear differential equation model, which usually contains many unknown parameters. Such a model is called structurally identifiable if it is possible to determine the values of its parameters from measurements of the model outputs. Structural identifiability is a prerequisite for parameter estimation, and should be assessed before exploiting a model. However, this analysis is seldom performed due to the high computational cost involved in the necessary symbolic calculations, which quickly becomes prohibitive as the problem size increases. In this paper we show how to analyse the structural identifiability of a very general class of nonlinear models by extending methods originally developed for studying observability. We present results about models whose identifiability had not been previously determined, report unidentifiabilities that had not been found before, and show how to modify those unidentifiable models to make them identifiable. This method helps prevent problems caused by lack of identifiability analysis, which can compromise the success of tasks such as experiment design, parameter estimation, and model-based optimization. The procedure is called STRIKE-GOLDD (STRuctural Identifiability taKen as Extended-Generalized Observability with Lie Derivatives and Decomposition), and it is implemented in a MATLAB toolbox which is available as open source software. The broad applicability of this approach facilitates the analysis of the increasingly complex models used in systems biology and other areas. PMID:27792726
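    The observability-rank idea that underlies this kind of analysis can be sketched in a few lines of symbolic computation; the toy logistic-type model below is an assumption for illustration (STRIKE-GOLDD itself is a MATLAB toolbox, and this is not its code).

    ```python
    # A minimal Lie-derivative observability/identifiability rank sketch.
    import sympy as sp

    x, p1, p2 = sp.symbols("x p1 p2")
    states = [x, p1, p2]             # parameters treated as constant states
    f = [p1 * x - p2 * x**2, 0, 0]   # augmented dynamics (logistic-type toy)
    h = x                            # measured output

    # build successive Lie derivatives of the output along f
    lies = [h]
    for _ in range(len(states) - 1):
        prev = lies[-1]
        lies.append(sum(sp.diff(prev, s) * fi for s, fi in zip(states, f)))

    # observability-identifiability matrix: Jacobian of the Lie derivatives
    O = sp.Matrix([[sp.diff(L, s) for s in states] for L in lies])
    print("rank:", O.rank(), "of", len(states))  # full rank => locally identifiable
    ```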

  10. A Feature Fusion Based Forecasting Model for Financial Time Series

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables obtained by independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better in the area of prediction than the other two similar models. PMID:24971455
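
    A compact sketch of the fusion pipeline described above, with synthetic data and assumed shapes: ICA features from the 39 technical variables, CCA to fuse them with price-derived features, and an SVM to predict the next closing price.

      import numpy as np
      from sklearn.decomposition import FastICA
      from sklearn.cross_decomposition import CCA
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      n = 500
      tech = rng.normal(size=(n, 39))               # 39 technical variables per day
      price_feats = rng.normal(size=(n, 10))        # assumed features from historical closes
      next_close = rng.normal(size=n)               # synthetic target series

      ics = FastICA(n_components=8, random_state=0).fit_transform(tech)
      cca = CCA(n_components=4).fit(ics, price_feats)
      u, v = cca.transform(ics, price_feats)        # fused intrinsic features
      fused = np.hstack([u, v])

      svm = SVR(kernel='rbf', C=10.0).fit(fused[:-1], next_close[:-1])
      print(svm.predict(fused[-1:]))                # forecast for the final day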

  11. SeaWiFS technical report series. Volume 11: Analysis of selected orbit propagation models for the SeaWiFS mission

    NASA Technical Reports Server (NTRS)

    Patt, Frederick S.; Hoisington, Charles M.; Gregg, Watson W.; Coronado, Patrick L.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Indest, A. W. (Editor)

    1993-01-01

    An analysis of orbit propagation models was performed by the Mission Operations element of the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Project, which has overall responsibility for the instrument scheduling. The orbit propagators selected for this analysis are widely available general perturbations models. The analysis includes both absolute accuracy determination and comparisons of different versions of the models. The results show that all of the models tested meet accuracy requirements for scheduling and data acquisition purposes. For internal Project use the SGP4 propagator, developed by the North American Air Defense (NORAD) Command, has been selected. This model includes atmospheric drag effects and, therefore, provides better accuracy. For High Resolution Picture Transmission (HRPT) ground stations, which have less stringent accuracy requirements, the publicly available Brouwer-Lyddane models are recommended. The SeaWiFS Project will make available portable source code for a version of this model developed by the Data Capture Facility (DCF).
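
    The SGP4 propagator recommended here is available today as open-source code; a minimal usage sketch with the `sgp4` Python package (assumed installed), using the ISS TLE from that package's documentation as a stand-in for SeaWiFS elements:

      from sgp4.api import Satrec, jday

      l1 = '1 25544U 98067A   19343.69339541  .00001764  00000-0  40967-4 0  9997'
      l2 = '2 25544  51.6439 211.2001 0007417  17.6667  85.6398 15.50103472202482'
      sat = Satrec.twoline2rv(l1, l2)

      jd, fr = jday(2019, 12, 9, 12, 0, 0)   # evaluation epoch (UTC)
      err, r, v = sat.sgp4(jd, fr)           # err == 0 on success
      print(r)                               # TEME position (km); drag is modeled
      print(v)                               # TEME velocity (km/s)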

  12. State space model approach for forecasting the use of electrical energy (a case study on: PT. PLN (Persero) district of Kroya)

    NASA Astrophysics Data System (ADS)

    Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik

    2018-05-01

    Time series data are a series of observations taken or measured at equal time intervals. Time series analysis is used to analyze such data while accounting for the effect of time; its purpose is to characterize the patterns in the data and to predict future values based on past data. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal Autoregressive (AR) order selection, followed by determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results show that electric energy consumption is well described by a state space model of order 4, with a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
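
    The fitting-and-scoring steps described above can be sketched as follows; the synthetic consumption series, lag search range, and train/test split are illustrative assumptions (the paper's data give order 4 and a MAPE of 3.655%).

      import numpy as np
      from statsmodels.tsa.ar_model import AutoReg, ar_select_order

      rng = np.random.default_rng(2)
      y = 100 + np.cumsum(rng.normal(0, 1, 120))    # stand-in for monthly consumption
      train, test = y[:108], y[108:]

      sel = ar_select_order(train, maxlag=8, ic='aic')   # optimal AR order by AIC
      res = AutoReg(train, lags=sel.ar_lags).fit()
      fcast = res.predict(start=len(train), end=len(y) - 1)

      mape = np.mean(np.abs((test - fcast) / test)) * 100
      print(sel.ar_lags, f'MAPE = {mape:.3f}%')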

  13. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.
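
    The local part of such an analysis reduces, in its simplest form, to perturbing each parameter slightly and recording the normalized change in output; the toy activator/repressor expression function below is an assumption for illustration only.

      import numpy as np

      def expression(p):
          act, rep, coop = p
          # Thermodynamic-flavoured toy: cooperative activator occupancy damped by a repressor term.
          return (act ** coop / (1 + act ** coop)) / (1 + rep)

      p0 = np.array([1.5, 0.5, 2.0])
      base = expression(p0)
      for i, name in enumerate(['activator', 'repressor', 'cooperativity']):
          p = p0.copy()
          p[i] *= 1.01                                  # +1% perturbation
          s = (expression(p) - base) / base / 0.01      # normalized local sensitivity
          print(f'{name}: {s:+.3f}')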

  14. Integrative Analysis of High-throughput Cancer Studies with Contrasted Penalization

    PubMed Central

    Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Shia, BenChang; Ma, Shuangge

    2015-01-01

    In cancer studies with high-throughput genetic and genomic measurements, integrative analysis provides a way to effectively pool and analyze heterogeneous raw data from multiple independent studies and outperforms “classic” meta-analysis and single-dataset analysis. When marker selection is of interest, the genetic basis of multiple datasets can be described using the homogeneity model or the heterogeneity model. In this study, we consider marker selection under the heterogeneity model, which includes the homogeneity model as a special case and can be more flexible. Penalization methods have been developed in the literature for marker selection. This study advances beyond the published ones by introducing contrast penalties, which can accommodate the within- and across-dataset structures of covariates/regression coefficients and, by doing so, further improve marker selection performance. Specifically, we develop a penalization method that accommodates the across-dataset structures by smoothing over regression coefficients. An effective iterative algorithm, which calls an inner coordinate descent iteration, is developed. Simulation shows that the proposed method outperforms the benchmark with more accurate marker identification. The analysis of breast cancer and lung cancer prognosis studies with gene expression measurements shows that the proposed method identifies genes different from those found using the benchmark and has better prediction performance. PMID:24395534

  15. Assessment of Comprehensive Analysis Calculation of Airloads on Helicopter Rotors

    NASA Technical Reports Server (NTRS)

    Yeo, Hyeonsoo; Johnson, Wayne

    2004-01-01

    Blade section normal force and pitching moment were investigated for six rotors operating at transition and high speeds: H-34 in flight and wind tunnel, SA 330 (research Puma), SA 349/2, UH-60A full-scale and BO-105 model (HART-I). The measured data from flight and wind tunnel tests were compared with calculations obtained using the comprehensive analysis CAMRAD II. The calculations were made using two free wake models: rolled-up and multiple-trailer with consolidation models. At transition speed, there is fair to good agreement for the blade section normal force between the test data and analysis for the H-34, research Puma, and SA 349/2 with the rolled-up wake. The calculated airloads differ significantly from the measurements for the UH-60A and BO-105. Better correlation is obtained for the UH-60A and BO-105 by using the multiple-trailer with consolidation wake model. In the high speed condition, the analysis shows generally good agreement with the research Puma flight data in both magnitude and phase. However, poor agreement is obtained for the other rotors examined. The analysis shows that the aerodynamic tip design (chord length and quarter chord location) of the Puma has an important influence on the phase correlation.

  16. RT-PCR detection of Candida albicans ALS gene expression in the reconstituted human epithelium (RHE) model of oral candidiasis and in model biofilms.

    PubMed

    Green, Clayton B; Cheng, Georgina; Chandra, Jyotsna; Mukherjee, Pranab; Ghannoum, Mahmoud A; Hoyer, Lois L

    2004-02-01

    An RT-PCR assay was developed to analyse expression patterns of genes in the Candida albicans ALS (agglutinin-like sequence) family. Inoculation of a reconstituted human buccal epithelium (RHE) model of mucocutaneous candidiasis with strain SC5314 showed destruction of the epithelial layer by C. albicans and also formation of an upper fungal layer that had characteristics similar to a biofilm. RT-PCR analysis of total RNA samples extracted from C. albicans-inoculated buccal RHE showed that ALS1, ALS2, ALS3, ALS4, ALS5 and ALS9 were consistently detected over time as destruction of the RHE progressed. Detection of transcripts from ALS7, and particularly from ALS6, was more sporadic, but not associated with a strictly temporal pattern. The expression pattern of ALS genes in C. albicans cultures used to inoculate the RHE was similar to that observed in the RHE model, suggesting that contact of C. albicans with buccal RHE does little to alter ALS gene expression. RT-PCR analysis of RNA samples extracted from model denture and catheter biofilms showed similar gene expression patterns to the buccal RHE specimens. Results from the RT-PCR analysis of biofilm RNA specimens were consistent between various C. albicans strains during biofilm development and were comparable to gene expression patterns in planktonic cells. The RT-PCR assay described here will be useful for analysis of human clinical specimens and samples from other disease models. The method will provide further insight into the role of ALS genes and their encoded proteins in the diverse interactions between C. albicans and its host.

  17. Usefulness of high resolution coastal models for operational oil spill forecast: the Full City accident

    NASA Astrophysics Data System (ADS)

    Broström, G.; Carrasco, A.; Hole, L. R.; Dick, S.; Janssen, F.; Mattsson, J.; Berger, S.

    2011-06-01

    Oil spill modeling is considered an important decision support system (DeSS), useful for remedial action in case of accidents as well as for designing the environmental monitoring systems that are frequently set up after major accidents. Many accidents take place in coastal areas, implying that low-resolution basin-scale ocean models are of limited use for predicting the trajectories of an oil spill. In this study, we target the oil spill connected with the Full City accident on the Norwegian south coast and compare three different oil spill models for the area. The result of the analysis is that all models do a satisfactory job. The "standard" operational model for the area is shown to have severe flaws, but when the analysis includes a higher-resolution model (1.5 km resolution) for the area, the model system shows results that compare well with observations. The study also shows that an ensemble of three different models is useful when predicting and analyzing oil spills in coastal areas.

  18. Analysis of dengue fever risk using geostatistics model in bone regency

    NASA Astrophysics Data System (ADS)

    Amran, Stang, Mallongi, Anwar

    2017-03-01

    This research aims to analyze dengue fever risk based on a geostatistics model in Bone Regency. Risk levels of dengue fever are denoted by the parameter of a Binomial distribution. The effects of temperature, rainfall, elevation, and larvae abundance are investigated through the geostatistics model. A Bayesian hierarchical method is used in the estimation process. Using dengue fever data from eleven locations, this research shows that temperature and rainfall have a significant effect on dengue fever risk in Bone Regency.

  19. Non-linear Min protein interactions generate harmonics that signal mid-cell division in Escherichia coli

    PubMed Central

    Walsh, James C.; Angstmann, Christopher N.; Duggin, Iain G.

    2017-01-01

    The Min protein system creates a dynamic spatial pattern in Escherichia coli cells where the proteins MinD and MinE oscillate from pole to pole. MinD positions MinC, an inhibitor of FtsZ ring formation, contributing to the mid-cell localization of cell division. In this paper, Fourier analysis is used to decompose experimental and model MinD spatial distributions into time-dependent harmonic components. In both experiment and model, the second harmonic component is responsible for producing a mid-cell minimum in MinD concentration. The features of this harmonic are robust in both experiment and model. Fourier analysis reveals a close correspondence between the time-dependent behaviour of the harmonic components in the experimental data and model. Given this, each molecular species in the model was analysed individually. This analysis revealed that membrane-bound MinD dimer shows the mid-cell minimum with the highest contrast when averaged over time, carrying the strongest signal for positioning the cell division ring. This concurs with previous data showing that the MinD dimer binds to MinC inhibiting FtsZ ring formation. These results show that non-linear interactions of Min proteins are essential for producing the mid-cell positioning signal via the generation of second-order harmonic components in the time-dependent spatial protein distribution. PMID:29040283
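
    The harmonic decomposition at the heart of this analysis can be sketched with a half-range cosine expansion along the cell axis; the synthetic time-averaged MinD profile below is an assumption, not the experimental data.

      import numpy as np

      L = 1.0
      x = np.linspace(0, L, 200, endpoint=False)
      # Synthetic profile: high at both poles, dip at mid-cell.
      profile = 1 + 0.6 * np.cos(2 * np.pi * x / L) + 0.1 * np.cos(np.pi * x / L)

      # Project onto cosine modes cos(k*pi*x/L), k = 1, 2, ...
      def mode_amp(k):
          basis = np.cos(k * np.pi * x / L)
          return np.mean(profile * basis) / np.mean(basis * basis)

      for k in (1, 2):
          print(f'harmonic {k}: amplitude {mode_amp(k):+.2f}')
      # The k = 2 mode equals minus its amplitude at x = L/2: the mid-cell minimum.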

  20. Beyond the scope of Free-Wilson analysis: building interpretable QSAR models with machine learning algorithms.

    PubMed

    Chen, Hongming; Carlsson, Lars; Eriksson, Mats; Varkonyi, Peter; Norinder, Ulf; Nilsson, Ingemar

    2013-06-24

    A novel methodology was developed to build Free-Wilson like local QSAR models by combining R-group signatures and the SVM algorithm. Unlike Free-Wilson analysis this method is able to make predictions for compounds with R-groups not present in a training set. Eleven public data sets were chosen as test cases for comparing the performance of our new method with several other traditional modeling strategies, including Free-Wilson analysis. Our results show that the R-group signature SVM models achieve better prediction accuracy compared with Free-Wilson analysis in general. Moreover, the predictions of R-group signature models are also comparable to the models using ECFP6 fingerprints and signatures for the whole compound. Most importantly, R-group contributions to the SVM model can be obtained by calculating the gradient for R-group signatures. For most of the studied data sets, a significant correlation with that of a corresponding Free-Wilson analysis is shown. These results suggest that the R-group contribution can be used to interpret bioactivity data and highlight that the R-group signature based SVM modeling method is as interpretable as Free-Wilson analysis. Hence the signature SVM model can be a useful modeling tool for any drug discovery project.

  1. Significance of the model considering mixed grain-size for inverse analysis of turbidites

    NASA Astrophysics Data System (ADS)

    Nakao, K.; Naruse, H.; Tokuhashi, S., Sr.

    2016-12-01

    A method for inverse analysis of turbidity currents is proposed for application to field observations. Estimating the initial conditions of catastrophic events from field observations has been important for sedimentological research. For instance, there are various inverse analyses to estimate hydraulic conditions from topography observations of pyroclastic flows (Rossano et al., 1996), real-time monitored debris-flow events (Fraccarollo and Papa, 2000), tsunami deposits (Jaffe and Gelfenbaum, 2007), and ancient turbidites (Falcini et al., 2009). These inverse analyses need forward models, and most turbidity current models employ uniform grain-size particles. Turbidity currents, however, are best characterized by the variation of their grain-size distribution. Although numerical models with mixed grain-size particles exist, they are difficult to apply to natural examples because of their computational cost (Lesshaft et al., 2011). Here we extend a turbidity current model based on the non-steady 1D shallow-water equation to mixed grain-size particles at low computational cost and apply the model to inverse analysis. In this study, we compared two forward models considering uniform and mixed grain-size particles, respectively. We adopted an inverse analysis based on the Simplex method, which optimizes the initial conditions (thickness, depth-averaged velocity, and depth-averaged volumetric concentration of a turbidity current) with a multi-point start, and employed the result of the forward model [h: 2.0 m, U: 5.0 m/s, C: 0.01%] as reference data. The results show that inverse analysis using the mixed grain-size model recovers the known initial condition of the reference data even when the starting point of the optimization deviates from the true solution, whereas inverse analysis using the uniform grain-size model requires starting parameters within a quite narrow range near the solution. The uniform grain-size model often converges to a local optimum that differs significantly from the true solution. In conclusion, we propose an optimization method based on a model considering mixed grain-size particles and show its application to examples of turbidites in the Kiyosumi Formation, Boso Peninsula, Japan.
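
    The inversion loop described above can be sketched with a deliberately simple stand-in forward model (an assumed three-parameter deposit profile, not the 1D shallow-water model) and a multi-start Nelder-Mead (Simplex) search:

      import numpy as np
      from scipy.optimize import minimize

      def forward(params, xs):
          h, U, C = params                         # thickness, velocity, concentration
          return C * (1 + xs / h) ** (-U)          # toy downstream deposit-thinning profile

      xs = np.linspace(0, 10, 50)
      observed = forward([2.0, 5.0, 0.01], xs)     # reference data from known h, U, C

      def misfit(params):
          return np.sum((forward(params, xs) - observed) ** 2)

      # Multi-point start, as in the paper: run the Simplex search from several guesses.
      starts = [[1.0, 2.0, 0.05], [4.0, 8.0, 0.001], [0.5, 1.0, 0.02]]
      best = min((minimize(misfit, s, method='Nelder-Mead') for s in starts),
                 key=lambda r: r.fun)
      print(best.x)   # typically recovers values close to [2.0, 5.0, 0.01]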

  2. Careful with Those Priors: A Note on Bayesian Estimation in Two-Parameter Logistic Item Response Theory Models

    ERIC Educational Resources Information Center

    Marcoulides, Katerina M.

    2018-01-01

    This study examined the use of Bayesian analysis methods for the estimation of item parameters in a two-parameter logistic item response theory model. Using simulated data under various design conditions with both informative and non-informative priors, the parameter recovery of Bayesian analysis methods was examined. Overall results showed that…

  3. How to Determine the Centre of Mass of Bodies from Image Modelling

    ERIC Educational Resources Information Center

    Dias, Marco Adriano; Carvalho, Paulo Simeão; Rodrigues, Marcelo

    2016-01-01

    Image modelling is a recent technique in physics education that includes digital tools for image treatment and analysis, such as digital stroboscopic photography (DSP) and video analysis software. It is commonly used to analyse the motion of objects. In this work we show how to determine the position of the centre of mass (CM) of objects with…

  4. Learning Factors Transfer Analysis: Using Learning Curve Analysis to Automatically Generate Domain Models

    ERIC Educational Resources Information Center

    Pavlik, Philip I. Jr.; Cen, Hao; Koedinger, Kenneth R.

    2009-01-01

    This paper describes a novel method to create a quantitative model of an educational content domain of related practice item-types using learning curves. By using a pairwise test to search for the relationships between learning curves for these item-types, we show how the test results in a set of pairwise transfer relationships that can be…

  5. Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation

    NASA Astrophysics Data System (ADS)

    Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping

    2003-05-01

    Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders by clinical quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then, the standard 3D brain model, which shows well-defined brain regions, was used to replace the manual ROIs in objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in a practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method automatically obtains precise VOI information from the well-defined standard 3D brain model, sparing the slice-by-slice manual drawing of ROIs on structural medical images required by the traditional procedure. That is, the method not only provides precise analysis results but also improves the processing rate for large volumes of clinical medical images.

  6. Simultaneous quantification of actin monomer and filament dynamics with modeling-assisted analysis of photoactivation

    PubMed Central

    Kapustina, Maryna; Read, Tracy-Ann

    2016-01-01

    ABSTRACT Photoactivation allows one to pulse-label molecules and obtain quantitative data about their behavior. We have devised a new modeling-based analysis for photoactivatable actin experiments that simultaneously measures properties of monomeric and filamentous actin in a three-dimensional cellular environment. We use this method to determine differences in the dynamic behavior of β- and γ-actin isoforms, showing that both inhabit filaments that depolymerize at equal rates but that β-actin exists in a higher monomer-to-filament ratio. We also demonstrate that cofilin (cofilin 1) equally accelerates depolymerization of filaments made from both isoforms, but is only required to maintain the β-actin monomer pool. Finally, we used modeling-based analysis to assess actin dynamics in axon-like projections of differentiating neuroblastoma cells, showing that the actin monomer concentration is significantly depleted as the axon develops. Importantly, these results would not have been obtained using traditional half-time analysis. Given that parameters of the publicly available modeling platform can be adjusted to suit the experimental system of the user, this method can easily be used to quantify actin dynamics in many different cell types and subcellular compartments. PMID:27831495

  7. Multiple Component Event-Related Potential (mcERP) Estimation

    NASA Technical Reports Server (NTRS)

    Knuth, K. H.; Clanton, S. T.; Shah, A. S.; Truccolo, W. A.; Ding, M.; Bressler, S. L.; Trejo, L. J.; Schroeder, C. E.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We show how model-based estimation of the neural sources responsible for transient neuroelectric signals can be improved by the analysis of single trial data. Previously, we showed that a multiple component event-related potential (mcERP) algorithm can extract the responses of individual sources from recordings of a mixture of multiple, possibly interacting, neural ensembles. McERP also estimated single-trial amplitudes and onset latencies, thus allowing more accurate estimation of ongoing neural activity during an experimental trial. The mcERP algorithm is related to infomax independent component analysis (ICA); however, the underlying signal model is more physiologically realistic in that a component is modeled as a stereotypic waveshape varying both in amplitude and onset latency from trial to trial. The result is a model that reflects quantities of interest to the neuroscientist. Here we demonstrate that the mcERP algorithm provides more accurate results than more traditional methods such as factor analysis and the more recent ICA. Whereas factor analysis assumes the sources are orthogonal and ICA assumes the sources are statistically independent, the mcERP algorithm makes no such assumptions, thus allowing investigators to examine interactions among components by estimating the properties of single-trial responses.

  8. A theoretical analysis of the effect of thrust-related turbulence distortion on helicopter rotor low-frequency broadband noise

    NASA Technical Reports Server (NTRS)

    Williams, M.; Harris, W. L.

    1984-01-01

    The purpose of the analysis is to determine if inflow turbulence distortion may be a cause of experimentally observed changes in sound pressure levels when the rotor mean loading is varied. The effect of helicopter rotor mean aerodynamics on inflow turbulence is studied within the framework of the turbulence rapid distortion theory developed by Pearson (1959) and Deissler (1961). The distorted inflow turbulence is related to the resultant noise by conventional broadband noise theory. A comparison of the distortion model with experimental data shows that the theoretical model is unable to totally explain observed increases in model rotor sound pressures with increased rotor mean thrust. Comparison of full scale rotor data with the theoretical model shows that a shear-type distortion may explain decreasing sound pressure levels with increasing thrust.

  9. Community evolution mining and analysis in social network

    NASA Astrophysics Data System (ADS)

    Liu, Hongtao; Tian, Yuan; Liu, Xueyan; Jian, Jie

    2017-03-01

    With the development of digital and network technology, various social platforms have emerged. These social platforms have greatly facilitated access to information, attracting more and more users, who use them every day to work, study, and communicate; at every moment, social platforms are therefore generating massive amounts of data. These data can often be modeled as complex networks, making large-scale social network analysis possible. In this paper, an existing classification model of community evolution is improved based on community evolution relationships over time in dynamic social networks, and the Evolution-Tree structure is proposed, which can show the whole life cycle of a community more clearly. Comparative test results show that the improved model can mine the evolution relationships of communities well.

  10. Properties of some statistics for AR-ARCH model with application to technical analysis

    NASA Astrophysics Data System (ADS)

    Huang, Xudong; Liu, Wei

    2009-03-01

    In this paper, we investigate some popular technical analysis indexes for the AR-ARCH model taken as a model of a real stock market. Under the given conditions, we show that the corresponding statistics are asymptotically stationary and that the law of large numbers holds for the frequencies of stock prices falling outside the normal scope of these technical analysis indexes under AR-ARCH, and we give the rate of convergence in the case of nonstationary initial values. This gives a mathematical rationale for these methods of technical analysis in monitoring security trends.

  11. Vertical structure and physical processes of the Madden-Julian Oscillation: Biases and uncertainties at short range

    NASA Astrophysics Data System (ADS)

    Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; Woolnough, Steve J.; Jiang, Xianan; Waliser, Duane E.; Caian, Mihaela; Cole, Jason; Hagos, Samson M.; Hannay, Cecile; Kim, Daehyun; Miyakawa, Tomoki; Pritchard, Michael S.; Roehrig, Romain; Shindo, Eiki; Vitart, Frederic; Wang, Hailan

    2015-05-01

    An analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

  12. Insights into the structural characteristics and substrate binding analysis of chondroitin AC lyase (PsPL8A) from Pedobacter saltans.

    PubMed

    Rani, Aruna; Dhillon, Arun; Sharma, Kedar; Goyal, Arun

    2018-04-01

    The structure of chondroitin AC lyase (PsPL8A) of family 8 polysaccharide lyase was characterized. The modeled PsPL8A structure showed that it contains an N-terminal (α/α)6 incomplete toroidal fold and a layered β-sandwich structure at the C-terminus. The Ramachandran plot displayed 98.5% of residues in the favoured and 1.2% in the generously allowed region. The secondary structure of PsPL8A by CD revealed 27.31% α-helices, 22.7% β-sheets, and 49.9% random coils. A protein melting study showed that PsPL8A completely unfolds at 60°C. SAXS analysis showed that PsPL8A is fully folded in solution. The ab initio derived dummy model of PsPL8A superposed well with its modeled structure, excluding some α-helices and loop regions. Structural superposition and docking analysis showed that N153, W105, H203, Y208, Y212, R266 and E349 are involved in catalysis. Mutants N153A, H203A, Y212F, R266A and E349A created by site-directed mutagenesis revealed no residual activity. Isothermal titration calorimetry analysis of Y212F and H203A with C4S polysaccharide showed moderate binding by Y212F (Ka = 9.56 ± 3.81 × 10^5) and no binding with H203A, showing the active contribution of Y212 to substrate binding. Residues Y212 and H203 or R266 might act as general base and general acid, respectively. Residues N153 and E349 likely contribute to charge neutralization and to stabilizing the enolate anion intermediate during β-elimination. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A rumor transmission model with incubation in social networks

    NASA Astrophysics Data System (ADS)

    Jia, Jianwen; Wu, Wenjiang

    2018-02-01

    In this paper, we propose a rumor transmission model with an incubation period and constant recruitment in social networks. By carrying out an analysis of the model, we study the stability of the rumor-free equilibrium and derive the local stability condition of the rumor equilibrium. We use the geometric approach for ordinary differential equations to show the global stability of the rumor equilibrium. When ℜ0 = 1, the model undergoes a transcritical bifurcation. Furthermore, numerical simulations are used to support the analysis. Finally, some conclusions are presented.

  14. Path analysis and multi-criteria decision making: an approach for multivariate model selection and analysis in health.

    PubMed

    Vasconcelos, A G; Almeida, R M; Nobre, F F

    2001-08-01

    This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service, and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria intended to evaluate model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained more than 90% of the variation in the endogenous variables and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.

  15. Nonlinear stratospheric variability: multifractal de-trended fluctuation analysis and singularity spectra

    PubMed Central

    Domeisen, Daniela I. V.

    2016-01-01

    Characterizing the stratosphere as a turbulent system, temporal fluctuations often show different correlations for different time scales as well as intermittent behaviour that cannot be captured by a single scaling exponent. In this study, the different scaling laws in the long-term stratospheric variability are studied using multifractal de-trended fluctuation analysis (MF-DFA). The analysis is performed comparing four re-analysis products and different realizations of an idealized numerical model, isolating the role of topographic forcing and seasonal variability, as well as the absence of climate teleconnections and small-scale forcing. The Northern Hemisphere (NH) shows a transition of scaling exponents for time scales shorter than about 1 year, for which the variability is multifractal and scales in time with a power law corresponding to a red spectrum, to longer time scales, for which the variability is monofractal and scales in time with a power law corresponding to white noise. Southern Hemisphere (SH) variability also shows a transition at annual scales. The SH also shows a narrower dynamical range in multifractality than the NH, as seen in the generalized Hurst exponent and in the singularity spectra. The numerical integrations show that the models are able to reproduce the low-frequency variability but are not able to fully capture the shorter term variability of the stratosphere. PMID:27493560
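
    A compact sketch of the MF-DFA fluctuation function: profile the series, detrend within windows of scale s, and form the q-th order fluctuation Fq(s), whose log-log slope against s gives the generalized Hurst exponent h(q). The scales and q values below are illustrative assumptions.

      import numpy as np

      def fq(x, s, q):
          y = np.cumsum(x - x.mean())                        # profile
          segs = len(y) // s
          f2 = []
          for i in range(segs):
              seg = y[i * s:(i + 1) * s]
              t = np.arange(s)
              trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
              f2.append(np.mean((seg - trend) ** 2))
          f2 = np.array(f2)
          return np.mean(f2 ** (q / 2)) ** (1 / q)

      rng = np.random.default_rng(3)
      x = rng.normal(size=4096)                              # white noise: expect h(q) near 0.5
      scales = np.array([16, 32, 64, 128, 256])
      for q in (-2, 2):
          h = np.polyfit(np.log(scales), np.log([fq(x, s, q) for s in scales]), 1)[0]
          print(f'h({q}) = {h:.2f}')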

  16. Isogeometric analysis: a powerful numerical tool for the elastic analysis of historical masonry arches

    NASA Astrophysics Data System (ADS)

    Cazzani, Antonio; Malagù, Marcello; Turco, Emilio

    2016-03-01

    We illustrate a numerical tool for analyzing plane arches such as those frequently used in historical masonry heritage. It is based on a refined elastic mechanical model derived from the isogeometric approach. In particular, geometry and displacements are modeled by means of non-uniform rational B-splines. After a brief introduction, outlining the basic assumptions of this approach and the corresponding modeling choices, several numerical applications to arches, which are typical of masonry structures, show the performance of this novel technique. These are discussed in detail to emphasize the advantage and potential developments of isogeometric analysis in the field of structural analysis of historical masonry buildings with complex geometries.

  17. Psychometric Properties and Validation of the Arabic Social Media Addiction Scale.

    PubMed

    Al-Menayes, Jamal

    2015-01-01

    This study investigated the psychometric properties of the Arabic version of the SMAS. SMAS is a variant of IAT customized to measure addiction to social media instead of the Internet as a whole. Using a self-report instrument on a cross-sectional sample of undergraduate students, the results revealed the following. First, the exploratory factor analysis showed that a three-factor model fits the data well. Second, concurrent validity analysis showed the SMAS to be a valid measure of social media addiction. However, further studies and data should verify the hypothesized model. Finally, this study showed that the Arabic version of the SMAS is a valid and reliable instrument for use in measuring social media addiction in the Arab world.

  18. Representative Model of the Learning Process in Virtual Spaces Supported by ICT

    ERIC Educational Resources Information Center

    Capacho, José

    2014-01-01

    This paper shows the results of research activities for building a representative model of the learning process in virtual spaces (e-Learning). The formal basis of the model is supported by the analysis of models of learning assessment in virtual spaces and specifically in Dembo's teaching-learning model, the systemic approach to evaluating…

  19. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)

  1. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    NASA Technical Reports Server (NTRS)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications to the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
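
    The decomposition discussed above can be written directly in code: NSE expressed through the linear correlation r, the variability ratio alpha = sigma_s/sigma_o, and the normalized bias beta_n = (mu_s - mu_o)/sigma_o, so that NSE = 2*alpha*r - alpha^2 - beta_n^2. The synthetic runoff series is an assumption for illustration.

      import numpy as np

      def nse_components(sim, obs):
          r = np.corrcoef(sim, obs)[0, 1]
          alpha = sim.std() / obs.std()
          beta_n = (sim.mean() - obs.mean()) / obs.std()
          return 2 * alpha * r - alpha ** 2 - beta_n ** 2, r, alpha, beta_n

      rng = np.random.default_rng(4)
      obs = rng.gamma(2.0, 2.0, 365)                  # synthetic daily runoff
      sim = 0.9 * obs + rng.normal(0, 0.5, 365)       # a biased, noisy "model"
      nse, r, alpha, beta_n = nse_components(sim, obs)
      print(f'NSE={nse:.3f}  r={r:.3f}  alpha={alpha:.3f}  beta_n={beta_n:.3f}')
      # Cross-check against the direct definition:
      print(1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))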

  2. Detection of Organophosphorus Pesticides with Colorimetry and Computer Image Analysis.

    PubMed

    Li, Yanjie; Hou, Changjun; Lei, Jincan; Deng, Bo; Huang, Jing; Yang, Mei

    2016-01-01

    Organophosphorus pesticides (OPs) represent a very important class of pesticides that are widely used in agriculture because of their relatively high performance and moderate environmental persistence; hence, the sensitive and specific detection of OPs is highly significant. Based on the inhibitory effect on acetylcholinesterase (AChE) induced by inhibitors, including OPs and carbamates, a colorimetric analysis was used for the detection of OPs with computer image analysis of color density in CMYK (cyan, magenta, yellow and black) color space and non-linear modeling. The results showed that yellow intensity gradually weakened as the concentration of dichlorvos increased. Quantitative analysis of dichlorvos was achieved by Artificial Neural Network (ANN) modeling, and the results showed that the established model had good predictive ability between training sets and predictive sets. Real cabbage samples containing dichlorvos were detected by colorimetry and gas chromatography (GC), respectively. The results showed that there was no significant difference between colorimetry and GC (P > 0.05). Experiments on accuracy, precision, and repeatability revealed good performance for the detection of OPs. AChE can also be inhibited by carbamates, and therefore this method has potential applications to real samples containing OPs and carbamates because of its high selectivity and sensitivity.

  3. Proximate analyses and predicting HHV of chars obtained from cocracking of petroleum vacuum residue with coal, plastics and biomass.

    PubMed

    Ahmaruzzaman, M

    2008-07-01

    The higher heating value (HHV) and analysis of chars obtained from cocracking of petroleum vacuum residue (XVR) with coal (SC), biomass (BG, CL) and plastics (PP, PS, BL) are important because they define the energy content and determine the clean and efficient use of these chars. The main aim of the present study is to analyze the chars obtained from cocracking in terms of their proximate analysis data and to determine the HHV of the chars. The char obtained from XVR+PP cocracking showed an HHV of 32.84 MJ/kg, whereas that from CL cracking showed an HHV of 18.52 MJ/kg. The experimentally determined heating values of the char samples obtained from cocracking have been correlated with theoretical equations based on proximate analysis data. A variety of correlations exists for predicting HHV from the proximate analysis of fuels. The models were tested against the proximate analysis data. The best results show a coefficient of determination (R2) of 0.965 and average absolute and bias errors of 3.07% and 0.41%, respectively. The heating values obtained from the model were in good agreement with those obtained by experiment. Proximate analysis of the chars obtained from the cocracking of XVR with coal, biomass and plastics showed that there is a definite interaction of the reactive species when they are cocracked together.
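
    The kind of proximate-analysis correlation tested above amounts to a regression of the form HHV = a*FC + b*VM + c*ASH + d; the small dataset below is synthetic, standing in for the cocracked chars.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Columns: fixed carbon, volatile matter, ash (wt%); target: HHV (MJ/kg).
      X = np.array([[55, 30, 15], [60, 25, 15], [70, 20, 10],
                    [40, 35, 25], [65, 22, 13], [50, 28, 22]], dtype=float)
      hhv = np.array([22.1, 24.0, 27.5, 16.8, 25.6, 20.0])

      fit = LinearRegression().fit(X, hhv)
      pred = fit.predict(X)
      r2 = fit.score(X, hhv)                           # coefficient of determination
      aae = np.mean(np.abs(pred - hhv) / hhv) * 100    # average absolute error (%)
      print(f'R2 = {r2:.3f}, average absolute error = {aae:.2f}%')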

  4. Infiltration modeling guidelines for commercial building energy analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.

    This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of various infiltration modeling options available in EnergyPlus and sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.
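
    For reference, the DOE-2-style design-flow-rate form that this recommendation builds on multiplies a design infiltration rate by a polynomial in temperature difference and wind speed; keeping only the linear wind term corresponds to the recommendation above. The coefficient values below are placeholders for illustration, not the report's calibrated numbers.

      def infiltration(i_design, wind_speed, dT=0.0, fsched=1.0,
                       A=0.0, B=0.0, C=0.224, D=0.0):
          """Zone infiltration flow (m3/s): I = I_design * Fsched * (A + B*|dT| + C*w + D*w**2)."""
          return i_design * fsched * (A + B * abs(dT) + C * wind_speed + D * wind_speed ** 2)

      # Example: design infiltration of 0.05 m3/s at a site wind speed of 4.5 m/s.
      print(infiltration(0.05, 4.5))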

  5. Stability Analysis of the Susceptible, Exposed, Infected, Recovered (SEIR) Model for the Spread of Dengue Fever in Medan

    NASA Astrophysics Data System (ADS)

    Side, Syafruddin; Molliq Rangkuti, Yulita; Gerhana Pane, Dian; Setia Sinaga, Marlina

    2018-01-01

    Dengue fever is an endemic disease which spreads through a vector, the mosquito Aedes aegypti. This disease is found in more than 100 countries, such as the United States and countries in Africa as well as Asia, especially those with a tropical climate. The mathematical model in this paper describes the speed of the spread of dengue fever. The model divides the population into four classes: Susceptible (S), Exposed (E), Infected (I) and Recovered (R). The SEIR model is further analyzed to determine the basic reproduction number based on the number of dengue cases reported in Medan city. The stability analysis of the system in this study shows an asymptotically stable equilibrium, indicating an endemic case, and an unstable equilibrium, which also points to endemic cases. Simulation of the SEIR mathematical model showed that a very long time is required before infected humans become free of dengue virus infection. This happens because dengue virus infection occurs continuously between the human and vector populations.
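
    The four-compartment structure described above can be sketched as an ODE system; the rate constants below are illustrative assumptions, not the values fitted to the Medan case data.

      import numpy as np
      from scipy.integrate import solve_ivp

      beta, sigma, gamma, mu = 0.75, 1 / 5.5, 1 / 7.0, 1 / (70 * 365)   # per day

      def seir(t, y):
          S, E, I, R = y
          N = S + E + I + R
          dS = mu * N - beta * S * I / N - mu * S   # constant recruitment and mortality
          dE = beta * S * I / N - (sigma + mu) * E
          dI = sigma * E - (gamma + mu) * I
          dR = gamma * I - mu * R
          return [dS, dE, dI, dR]

      sol = solve_ivp(seir, (0, 730), [0.99, 0.0, 0.01, 0.0])
      R0 = beta * sigma / ((sigma + mu) * (gamma + mu))   # basic reproduction number
      print(f'R0 = {R0:.2f}; infected fraction after 2 years = {sol.y[2, -1]:.4f}')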

  6. Photochemical modeling and analysis of meteorological parameters during ozone episodes in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min

    The events of high ozone concentrations and the meteorological conditions covering the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (or 72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. Sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions below 30-40% VOC and 30-50% NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperature, sufficient sunlight, low wind, and high surface pressure are distinct parameters that tend to trigger ozone episodes in polluted urban areas like Kaohsiung.

  7. From the 2008 to the 2014 Crisis: Response of the Labor Market of Russia's Largest Cities

    ERIC Educational Resources Information Center

    Khmeleva, Galina A.; Bulavko, Olga A.

    2016-01-01

    The model of shift share analysis was improved to show that the foundation of economy's transition to industrially innovational type of development is created at the local level in case of developing countries. Analysis of structural shifts in 28 large cities in 2008-2014 showed that the perspective of industrially innovational development is yet…

  8. Analysis of Covariance: Is It the Appropriate Model to Study Change?

    ERIC Educational Resources Information Center

    Marston, Paul T.; Borich, Gary D.

    The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed true score analysis produced a large number of Type-I errors. When corrected for this error, this method showed the least power of the four. This outcome was clearly the result of the…

  9. Comparison of 5 Different Rat Models to Establish a Standard Animal Model for Research Into Interstitial Cystitis.

    PubMed

    Song, Phil Hyun; Chun, So Young; Chung, Jae-Wook; Kim, Yeon Yong; Lee, Hyo Jung; Lee, Jun Nyung; Ha, Yun-Sok; Yoo, Eun Sang; Kwon, Tae Gyun; Kim, Jeongshik; Kim, Dae Hwan; Kim, Bum Soo

    2017-09-01

    We evaluated 5 different rat models using different agents in order to establish a standard animal model for interstitial cystitis (IC) in terms of the functional and pathologic characteristics of the bladder. Five IC models were generated in 8-week-old female Sprague-Dawley rats via transurethral instillation of 0.1M hydrogen chloride (HCl) or 3% acetic acid (AA), intraperitoneal injection of cyclophosphamide (CYP) or lipopolysaccharide (LPS), or subcutaneous injection of uroplakin II (UPK2). After generating the IC models, conscious cystometry was performed on days 3, 7, and 14. All rats were euthanized on day 14 and their bladders were obtained for histological and pro-inflammatory-related gene expression analysis. In the cystometric analysis, all experimental groups showed significantly decreased intercontraction intervals compared with the control group on day 3, but only the LPS and UPK groups maintained significantly shorter intercontraction intervals than the control group on day 14. The histological analysis revealed that areas with severe urothelial erosion (HCl, AA, and UPK) and hyperplasia (CYP and LPS), particularly in the UPK-treated bladders, showed a markedly increased infiltration of toluidine blue-stained mast cells and increased tissue fibrosis. In addition, significantly elevated expression of interleukin-1b, interleukin-6, myeloperoxidase, monocyte chemotactic protein 1, and Toll-like receptors 2 and 4 was observed in the UPK group compared to the other groups. Among the 5 different agents, the injection of UPK generated the most effective IC animal model, showing consequent urothelial barrier loss, inflammatory reaction, tissue fibrosis stimulation, and persistent hyperactive bladder.

  10. Numerical study of Asian dust transport during the springtime of 2001 simulated with the Chemical Weather Forecasting System (CFORS) model

    NASA Astrophysics Data System (ADS)

    Uno, Itsushi; Satake, Shinsuke; Carmichael, Gregory R.; Tang, Youhua; Wang, Zifa; Takemura, Toshihiko; Sugimoto, Nobuo; Shimizu, Atsushi; Murayama, Toshiyuki; Cahill, Thomas A.; Cliff, Steven; Uematsu, Mitsuo; Ohta, Sachio; Quinn, Patricia K.; Bates, Timothy S.

    2004-10-01

    The regional-scale aerosol transport model Chemical Weather Forecasting System (CFORS) is used for analysis of large-scale dust phenomena during the Asian Pacific Regional Characterization Experiment (ACE-Asia) intensive observation. Dust modeling results are examined with the surface weather reports, satellite-derived dust index (Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI)), Mie-scattering lidar observation, and surface aerosol observations. The CFORS dust results are shown to accurately reproduce many of the important observed features. Model analysis shows that the simulated dust vertical loading correlates well with TOMS AI and that the dust loading is transported with the meandering of the synoptic-scale temperature field at the 500-hPa level. Quantitative examination of aerosol optical depth shows that model predictions are within 20% difference of the lidar observations for the major dust episodes. The structure of the ACE-Asia Perfect Dust Storm, which occurred in early April, is clarified with the help of the CFORS model analysis. This storm consisted of two boundary layer components and one elevated dust (>6-km height) feature (resulting from the movement of two large low-pressure systems). Time variation of the CFORS dust fields shows the correct onset timing of the elevated dust for each observation site, but the model results tend to overpredict dust concentrations at lower latitude sites. The horizontal transport flux at 130°E longitude is examined, and the overall dust transport flux at 130°E during March-April is evaluated to be 55 Tg.

  11. Towards the use of Structural Loop Analysis to Study System Behaviour of Socio-Ecological Systems.

    NASA Astrophysics Data System (ADS)

    Abram, Joseph; Dyke, James

    2016-04-01

    Maintaining socio-ecological systems in desirable states is key to developing a growing economy, alleviating poverty and achieving a sustainable future. While the driving forces of an environmental system are often well known, the dynamics impacting these drivers can be hidden within a tangled structure of causal chains and feedback loops. A lack of understanding of a system's dynamic structure and its influence on a system's behaviour can cause unforeseen side-effects during model scenario testing and policy implementation. Structural Loop analysis of socio-ecological system models identifies dominant feedback structures during times of behavioural shift, allowing the user to monitor key influential drivers during model simulation. This work carries out Loop Eigenvalue Elasticity Analysis (LEEA) on three system dynamic models, exploring tipping points in lake systems undergoing eutrophication. The purpose is to explore the potential benefits and limitations of the technique in the field of socio-ecology. The LEEA technique shows promise for socio-ecological systems which undergo regime shifts or express oscillatory trends, but shows limited usefulness with large models. The results of this work highlight changes in feedback loop dominance, years prior to eutrophic tipping events in lake systems. LEEA could be used as an early warning signal to impending system changes, complementary to other known early warning signals. This approach could improve our understanding during critical times of a system's behaviour, changing how we approach model analysis and the way scenario testing and policy implementation are addressed in socio-ecological system models.

  12. Association between maternal, fetal and paternal MTHFR gene C677T and A1298C polymorphisms and risk of recurrent pregnancy loss: a comprehensive evaluation.

    PubMed

    Yang, Yi; Luo, Yunyao; Yuan, Jing; Tang, Yidan; Xiong, Lang; Xu, MangMang; Rao, XuDong; Liu, Hao

    2016-06-01

    Numerous studies have investigated the associations between methylenetetrahydrofolate reductase (MTHFR) gene C677T and A1298C polymorphisms and risk of recurrent pregnancy loss (RPL); however, the results remain controversial. The aim of this study is to derive a more precise estimation of the association between MTHFR gene polymorphisms and risk of RPL. We searched PubMed, EMBASE, the Cochrane Library, Web of Science and the China Knowledge Resource Integrated Database for papers on MTHFR gene C677T and A1298C polymorphisms and RPL risk. The pooled odds ratios (ORs) with 95% confidence intervals (CIs) were used to assess the strength of association in the homozygous model, heterozygous model, dominant model, recessive model and additive model. The software STATA (Version 13.0) was used for statistical analysis. Overall, 57 articles were included in the final meta-analysis. In the maternal group, the MTHFR C677T polymorphism showed pooled odds ratios for the homozygous comparison [OR = 2.285, 95% CI (1.702, 3.067)] and the MTHFR A1298C polymorphism showed pooled odds ratios for the recessive model [OR = 1.594, 95% CI (1.136, 2.238)]. In the fetal group, the MTHFR C677T polymorphism showed pooled odds ratios for the dominant model [OR = 1.037, 95% CI (0.567, 1.894)] and the MTHFR A1298C polymorphism showed pooled odds ratios for the dominant model [OR = 1.495, 95% CI (1.102, 2.026)]. In summary, the results of our meta-analysis indicate that maternal and paternal MTHFR gene C677T and A1298C polymorphisms are associated with RPL. We also observed a significant association between the fetal MTHFR A1298C polymorphism and RPL, but not C677T.

  13. Next Steps in Bayesian Structural Equation Models: Comments on, Variations of, and Extensions to Muthen and Asparouhov (2012)

    ERIC Educational Resources Information Center

    Rindskopf, David

    2012-01-01

    Muthen and Asparouhov (2012) made a strong case for the advantages of Bayesian methodology in factor analysis and structural equation models. I show additional extensions and adaptations of their methods and show how non-Bayesians can take advantage of many (though not all) of these advantages by using interval restrictions on parameters. By…

  14. Marker-based or model-based RSA for evaluation of hip resurfacing arthroplasty? A clinical validation and 5-year follow-up.

    PubMed

    Lorenzen, Nina Dyrberg; Stilling, Maiken; Jakobsen, Stig Storgaard; Gustafson, Klas; Søballe, Kjeld; Baad-Hansen, Thomas

    2013-11-01

    The stability of implants is vital to ensure long-term survival. RSA determines micro-motions of implants as a predictor of early implant failure. RSA can be performed as a marker- or model-based analysis. So far, CAD and RE model-based RSA have not been validated for use in hip resurfacing arthroplasty (HRA). A phantom study determined the precision of marker-based and CAD and RE model-based RSA on an HRA implant. In a clinical study, 19 patients were followed with stereoradiographs until 5 years after surgery. Analysis of double-examination migration results determined the clinical precision of marker-based and CAD model-based RSA, and at the 5-year follow-up, results of the total translation (TT) and the total rotation (TR) for marker- and CAD model-based RSA were compared. The phantom study showed that, comparing the precision (SDdiff), marker-based RSA analysis was more precise than model-based RSA analysis in TT (p_CAD < 0.001; p_RE = 0.04) and TR (p_CAD = 0.01; p_RE < 0.001). The clinical precision (double examination in 8 patients), compared by SDdiff, was better for TT using the marker-based RSA analysis (p = 0.002), but showed no difference between the marker- and CAD model-based RSA analyses regarding TR (p = 0.91). Comparing the mean signed values for TT and TR at the 5-year follow-up in 13 patients, TT was lower (p = 0.03) and TR higher (p = 0.04) in marker-based RSA compared to CAD model-based RSA. The precision of marker-based RSA was significantly better than that of model-based RSA. However, problems with occluded markers led to the exclusion of many patients, which was not a problem with model-based RSA. The HRA implants were stable at the 5-year follow-up. The detection limit was 0.2 mm TT and 1° TR for marker-based and 0.5 mm TT and 1° TR for CAD model-based RSA for HRA.

  15. Simulating the IPOD, East Asian summer monsoon, and their relationships in CMIP5

    NASA Astrophysics Data System (ADS)

    Yu, Miao; Li, Jianping; Zheng, Fei; Wang, Xiaofan; Zheng, Jiayu

    2018-03-01

    This paper evaluates the simulation performance of 37 coupled models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) with respect to the East Asian summer monsoon (EASM) and the Indo-Pacific warm pool and North Pacific Ocean dipole (IPOD), and also the interrelationships between them. The results show that the majority of the models are unable to accurately simulate the interannual variability and long-term trends of the EASM, and their simulations of the temporal and spatial variations of the IPOD are also limited. Further analysis showed that the correlation coefficients between the simulated and observed EASM index (EASMI) are proportional to those between the simulated and observed IPOD index (IPODI); that is, if a model has the skill to simulate one of them, it will likely generate a good simulation of the other. Based on this relationship, the paper proposes a conditional multi-model ensemble method (CMME) that excludes models lacking the capability to simulate the IPOD and EASM when calculating the multi-model ensemble (MME). The analysis shows that, compared with the MME, the CMME method can significantly improve the simulations of the spatial and temporal variations of both the IPOD and EASM as well as their interrelationship, suggesting the potential for the CMME approach to be used in place of the MME method.
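
    The CMME idea reduces to screening models by their skill at reproducing an observed index before averaging. A minimal sketch, with synthetic series and a hypothetical correlation cutoff of 0.3:

      import numpy as np

      def cmme(obs, model_runs, r_min=0.3):
          """Average only models whose index correlates with obs above r_min."""
          kept = {name: sim for name, sim in model_runs.items()
                  if np.corrcoef(obs, sim)[0, 1] >= r_min}
          if not kept:
              raise ValueError("no model passed the skill screen")
          return np.mean(np.stack(list(kept.values())), axis=0), sorted(kept)

      rng = np.random.default_rng(0)
      obs = rng.standard_normal(50)                    # stand-in observed index
      runs = {"m%02d" % i: obs * rng.uniform(0, 1) + rng.standard_normal(50)
              for i in range(10)}                      # stand-in model indices

      mme = np.mean(np.stack(list(runs.values())), axis=0)   # plain MME
      cme, kept_names = cmme(obs, runs)
      print("models kept:", kept_names)
      print("r(MME)  = %.2f" % np.corrcoef(obs, mme)[0, 1])
      print("r(CMME) = %.2f" % np.corrcoef(obs, cme)[0, 1])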

  16. An Illustration of a Longitudinal Cross-Lagged Design for Larger Structural Equation Models. Teacher's Corner.

    ERIC Educational Resources Information Center

    Burkholder, Gary J.; Harlow, Lisa L.

    2003-01-01

    Tested a model of HIV behavior risk, using a fully cross-lagged, longitudinal design to illustrate the analysis of larger structural equation models. Data from 527 women who completed a survey at three time points show excellent fit of the model to the data. (SLD)

  17. An Investment Model Analysis of Relationship Stability among Women Court-Mandated to Violence Interventions

    ERIC Educational Resources Information Center

    Rhatigan, Deborah L.; Moore, Todd M.; Stuart, Gregory L.

    2005-01-01

    This investigation examined relationship stability among 60 women court-mandated to violence interventions by applying a general model (i.e., Rusbult's 1980 Investment Model) to predict intentions to leave current relationships. As in past research, results showed that Investment Model predictions were supported such that court-mandated women who…

  18. Experimental Analysis of Diffraction Effects from a Segmented MEMS Deformable Mirror for a Closed Loop Adaptive Optics System

    DTIC Science & Technology

    2010-06-01

    different approaches were used to model the MEMS DM as a grating in Zemax software. First, a 2D grating was directly modeled as a combination of two 1D...method of modeling MEMS DM in Zemax was implemented by combining two 1D gratings. Due to the fact that ZEMAX allows one to easily use 1D physical...optics shows the far field diffraction pattern, which in the Zemax geometrical model shows up as distinct spots, each one corresponding to a specific

  19. Aeroelastic stability analyses of two counter rotating propfan designs for a cruise missile model

    NASA Technical Reports Server (NTRS)

    Mahajan, Aparajit J.; Lucero, John M.; Mehmed, Oral; Stefko, George L.

    1992-01-01

    A modal aeroelastic analysis combining structural and aerodynamic models is applied to counterrotating propfans to evaluate their structural integrity for wind-tunnel testing. The aeroelastic analysis code is an extension of the 2D analysis code called the Aeroelastic Stability and Response of Propulsion Systems. Rotational speed and freestream Mach number are the parameters for calculating the stability of the two blade designs with a modal method combining a finite-element structural model with 2D steady and unsteady cascade aerodynamic models. The model demonstrates convergence to the least stable aeroelastic mode, describes the effects of a nonuniform inflow, and permits the modification of geometry and rotation. The analysis shows that the propfan designs are suitable for the wind-tunnel test and confirms that the propfans should be flutter-free under the range of conditions of the testing.

  20. The browning value changes and spectral analysis on the Maillard reaction product from glucose and methionine model system

    NASA Astrophysics Data System (ADS)

    Al-Baarri, A. N.; Legowo, A. M.; Widayat

    2018-01-01

    D-glucose is known to affect the reactivity of the Maillard reaction, resulting in changes in the physical properties of food products. This research therefore analysed the physical appearance of a Maillard reaction product made of D-glucose and methionine as a model system. The changes in browning value and the spectral profile of the model system were determined. The glucose-methionine model system was produced by heating at 50°C and RH 70% for 24 hours. Data were collected every three hours using a spectrophotometer. As a result, the browning value increased with heating time and was remarkably high compared to D-glucose alone. Furthermore, the spectral analysis showed that methionine changed the pattern of peak appearance. In conclusion, methionine raised the browning value and changed the spectral pattern of the Maillard reaction model system.

  1. Invitation to Consumer Behavior Analysis

    ERIC Educational Resources Information Center

    Foxall, Gordon R.

    2010-01-01

    This article presents an introduction to consumer behavior analysis by describing the Behavioral Perspective Model of consumer choice and showing how research has, first, confirmed this framework and, second, opened up behavior analysis and behavioral economics to the study of consumer behavior in natural settings. It concludes with a discussion…

  2. Modeling the kinetics of survival of Staphylococcus aureus in regional yogurt from goat's milk.

    PubMed

    Bednarko-Młynarczyk, E; Szteyn, J; Białobrzewski, I; Wiszniewska-Łaszczych, A; Liedtke, K

    2015-01-01

    The aim of this study was to determine the kinetics of survival of the test strain of Staphylococcus aureus in the product investigated. Yogurt samples were contaminated with S. aureus to an initial level of 10^3-10^4 cfu/g. The samples were then stored at four temperatures: 4, 6, 20 and 22°C. During storage, the number of S. aureus colony-forming units in a gram of yogurt was determined every two hours. Based on the results of the culture analysis, survival curves were plotted. Three primary models were selected to describe the kinetics of changes in the bacterial count: Cole's model, a modified Gompertz model and the model of Baranyi and Roberts. Analysis of model fit, carried out based on the average values of Pearson's correlation coefficient between the modeled and measured values, showed that Cole's model had the worst fit. The modified Gompertz model returned the count of S. aureus as a negative value. These drawbacks were not observed in the model of Baranyi and Roberts. For this reason, this model best reflects the kinetics of changes in the number of staphylococci in yogurt.
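
    As an illustration of the model-fitting step, the sketch below fits the modified Gompertz model (in the Zwietering form, with a negative asymptote A for decline) to invented log-count data with scipy; the paper's Pearson-correlation fit measure is computed at the end.

      import numpy as np
      from scipy.optimize import curve_fit

      def gompertz(t, logN0, A, mu, lam):
          # Zwietering parameterization; A < 0 describes decline (survival).
          return logN0 + A * np.exp(-np.exp(mu * np.e / A * (lam - t) + 1))

      t = np.array([0, 2, 4, 6, 8, 10, 12, 14])                  # hours
      logN = np.array([3.9, 3.8, 3.5, 3.0, 2.4, 2.0, 1.8, 1.7])  # log10 cfu/g

      popt, _ = curve_fit(gompertz, t, logN, p0=[3.9, -2.3, -0.3, 3.0],
                          maxfev=10000)
      r = np.corrcoef(logN, gompertz(t, *popt))[0, 1]
      print("logN0, A, mu, lambda:", np.round(popt, 3))
      print("Pearson r of the fit: %.4f" % r)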

  3. Dynamic competitive probabilistic principal components analysis.

    PubMed

    López-Rubio, Ezequiel; Ortiz-DE-Lazcano-Lobato, Juan Miguel

    2009-04-01

    We present a new neural model which extends the classical competitive learning (CL) by performing a Probabilistic Principal Components Analysis (PPCA) at each neuron. The model also has the ability to learn the number of basis vectors required to represent the principal directions of each cluster, so it overcomes a drawback of most local PCA models, where the dimensionality of a cluster must be fixed a priori. Experimental results are presented to show the performance of the network with multispectral image data.

  4. Evaluation of the CFSv2 CMIP5 decadal predictions

    NASA Astrophysics Data System (ADS)

    Bombardi, Rodrigo J.; Zhu, Jieshun; Marx, Lawrence; Huang, Bohua; Chen, Hua; Lu, Jian; Krishnamurthy, Lakshmi; Krishnamurthy, V.; Colfescu, Ioana; Kinter, James L.; Kumar, Arun; Hu, Zeng-Zhen; Moorthi, Shrinivas; Tripp, Patrick; Wu, Xingren; Schneider, Edwin K.

    2015-01-01

    Retrospective decadal forecasts were undertaken using the Climate Forecast System version 2 (CFSv2) as part of the Coupled Model Intercomparison Project 5. Decadal forecasts were performed separately by the National Center for Environmental Prediction (NCEP) and by the Center for Ocean-Land-Atmosphere Studies (COLA), with the centers using two different analyses for the ocean initial conditions: the NCEP Climate Forecast System Reanalysis (CFSR) and the NEMOVAR-COMBINE analysis. COLA also examined the sensitivity to the inclusion of forcing by specified volcanic aerosols. Biases in the CFSv2 for both sets of initial conditions include cold midlatitude sea surface temperatures and rapid melting of sea ice associated with warm polar oceans. Forecasts from the NEMOVAR-COMBINE analysis showed strong weakening of the Atlantic Meridional Overturning Circulation (AMOC), eventually approaching the weaker AMOC associated with CFSR. The decadal forecasts showed high predictive skill over the Indian, western Pacific, and Atlantic Oceans and low skill over the central and eastern Pacific. The volcanic forcing shows only small regional differences in predictability of surface temperature at 2 m (T2m) in comparison to forecasts without volcanic forcing, especially over the Indian Ocean. An ocean heat content (OHC) budget analysis showed that the OHC has substantial memory, indicating potential for decadal predictability of T2m; however, the model has a systematic drift in global mean OHC. The results suggest that the reduction of model biases may be the most productive path towards improving the model's decadal forecasts.

  5. Laboratory modeling and analysis of aircraft-lightning interactions

    NASA Technical Reports Server (NTRS)

    Turner, C. D.; Trost, T. F.

    1982-01-01

    Modeling studies of the interaction of a delta wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, is subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements are made of the resulting transient electromagnetic fields using time-derivative sensors. The sensor outputs are sampled and digitized by computer. The noise level is reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields includes Fourier transformation and the computation of transfer functions for the model. Prony analysis is also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those from in-flight direct-strike data usually show lower damping in the in-flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.
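
    Prony analysis fits a sampled transient with a sum of damped exponentials, yielding complex natural frequencies s_k = sigma_k + i*omega_k. A minimal sketch on a synthetic two-mode signal (not the flight or laboratory data):

      import numpy as np

      def prony(x, p, dt):
          """Return p complex natural frequencies s_k from samples x."""
          n = len(x)
          # Linear prediction: x[k] = -a1*x[k-1] - ... - ap*x[k-p]
          A = np.column_stack([x[p - 1 - j : n - 1 - j] for j in range(p)])
          a, *_ = np.linalg.lstsq(A, -x[p:], rcond=None)
          z = np.roots(np.concatenate(([1.0], a)))   # discrete-time poles
          return np.log(z) / dt                      # s = sigma + i*omega

      dt = 1e-9                                      # 1 ns sampling
      t = np.arange(400) * dt
      x = (np.exp(-2e7 * t) * np.cos(2 * np.pi * 40e6 * t)
           + 0.5 * np.exp(-1e7 * t) * np.cos(2 * np.pi * 90e6 * t))

      for s in prony(x, p=4, dt=dt):
          if s.imag > 0:                             # one of each conjugate pair
              print("f = %5.1f MHz, damping = %.2e 1/s"
                    % (s.imag / (2 * np.pi * 1e6), -s.real))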

  6. A reduced-order adaptive neuro-fuzzy inference system model as a software sensor for rapid estimation of five-day biochemical oxygen demand

    NASA Astrophysics Data System (ADS)

    Noori, Roohollah; Safavi, Salman; Nateghi Shahrokni, Seyyed Afshin

    2013-07-01

    The five-day biochemical oxygen demand (BOD5) is one of the key parameters in water quality management. In this study, a novel approach, i.e., a reduced-order adaptive neuro-fuzzy inference system (ROANFIS) model, was developed for rapid estimation of BOD5. In addition, an uncertainty analysis of the adaptive neuro-fuzzy inference system (ANFIS) and ROANFIS models was carried out based on Monte-Carlo simulation. Accuracy analysis of the ANFIS and ROANFIS models, based on both the developed discrepancy ratio and threshold statistics, revealed that the selected ROANFIS model was superior. The Pearson correlation coefficient (R) and root mean square error for the best fitted ROANFIS model were 0.96 and 7.12, respectively. Furthermore, uncertainty analysis of the developed models indicated that the selected ROANFIS model had less uncertainty than the ANFIS model and accurately forecasted BOD5 in the Sefidrood River Basin. The uncertainty analysis also showed that the predictions bracketed by the 95% confidence bound and the d-factor in the testing step for the selected ROANFIS model were 94% and 0.83, respectively.

  7. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
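
    The DELSA first-order index at a sample point scales each squared local derivative by the prior parameter variance and normalizes, so the indices sum to one at every point; repeating this over many points gives the distribution of sensitivity across parameter space. A toy sketch (the two-parameter "model" and prior variances are hypothetical):

      import numpy as np

      def toy_model(theta):                  # hypothetical two-parameter model
          k, tau = theta
          return k * np.exp(-1.0 / tau)

      def delsa_indices(f, theta, prior_var, h=1e-6):
          """Normalized first-order DELSA indices at one parameter point."""
          y0, g = f(theta), np.zeros(len(theta))
          for j in range(len(theta)):
              tp = theta.copy()
              tp[j] += h * max(1.0, abs(theta[j]))
              g[j] = (f(tp) - y0) / (tp[j] - theta[j])
          contrib = g**2 * prior_var         # first-order variance contributions
          return contrib / contrib.sum()

      rng = np.random.default_rng(1)
      prior_var = np.array([0.25, 4.0])      # assumed prior parameter variances
      samples = rng.uniform([0.5, 1.0], [2.0, 10.0], size=(1000, 2))
      S = np.array([delsa_indices(toy_model, th, prior_var) for th in samples])
      print("median index of k:   %.2f" % np.median(S[:, 0]))
      print("median index of tau: %.2f" % np.median(S[:, 1]))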

  8. Conclusion of LOD-score analysis for family data generated under two-locus models.

    PubMed Central

    Dizier, M. H.; Babron, M. C.; Clerget-Darpoux, F.

    1996-01-01

    The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to seek for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for a MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing for the familial disease distribution provides evidence of parameter misspecification, when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that a strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. PMID:8651311

  9. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Active Control Design Applications

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1998-01-01

    This report describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind tunnel model for active control design and analysis applications. The model is formed by combining the equations of motion for the BACT wind tunnel model with actuator models and a model of wind tunnel turbulence. The primary focus of this report is the development of the equations of motion from first principles by using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated by making use of parameters obtained from both experiment and analysis. Comparisons between experimental and analytical data obtained from the numerical model show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind tunnel model. The equations of motion developed herein have been used to aid in the design and analysis of a number of flutter suppression controllers that have been successfully implemented.

  10. A dynamic factor model of the evaluation of the financial crisis in Turkey.

    PubMed

    Sezgin, F; Kinay, B

    2010-01-01

    Factor analysis has been widely used in economics and finance in situations where a relatively large number of variables are believed to be driven by few common causes of variation. Dynamic factor analysis (DFA), which is a combination of factor and time series analysis, involves autocorrelation matrices calculated from multivariate time series. Dynamic factor models were traditionally used to construct economic indicators and for macroeconomic analysis, business cycle research and forecasting. In recent years, dynamic factor models have become more popular in empirical macroeconomics. They have more advantages than other methods in various respects. Factor models can, for instance, cope with many variables without running into the scarce degrees of freedom problems often faced in regression-based analysis. In this study, a model which determines the effect of the global crisis on Turkey is proposed. The main aim of the paper is to analyze how several macroeconomic quantities show an alteration before the evolution of the crisis and to decide whether a crisis can be forecasted or not.

  11. Bayesian inference for joint modelling of longitudinal continuous, binary and ordinal events.

    PubMed

    Li, Qiuju; Pan, Jianxin; Belcher, John

    2016-12-01

    In medical studies, repeated measurements of continuous, binary and ordinal outcomes are routinely collected from the same patient. Instead of modelling each outcome separately, in this study we propose to jointly model the trivariate longitudinal responses, so as to take account of the inherent association between the different outcomes and thus improve statistical inferences. This work is motivated by a large cohort study in the North West of England, involving trivariate responses from each patient: Body Mass Index, Depression (Yes/No) ascertained with a cut-off score of not less than 8 on the Hospital Anxiety and Depression Scale, and Pain Interference generated from the Medical Outcomes Study 36-item short-form health survey, with values returned on an ordinal scale of 1-5. There are some well-established methods for combined continuous and binary, or even continuous and ordinal responses, but little work has been done on the joint analysis of continuous, binary and ordinal responses. We propose conditional joint random-effects models, which take into account the inherent association between the continuous, binary and ordinal outcomes. Bayesian analysis methods are used to make statistical inferences. Simulation studies show that, by jointly modelling the trivariate outcomes, the standard deviations of the parameter estimates are smaller and much more stable, leading to more efficient parameter estimates and reliable statistical inferences. In the real data analysis, the proposed joint analysis yields a much smaller deviance information criterion value than the separate analysis, and shows other good statistical properties too. © The Author(s) 2014.

  12. Verification of Orthogrid Finite Element Modeling Techniques

    NASA Technical Reports Server (NTRS)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such "short cuts" is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and the beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  13. Emergent structures and understanding from a comparative uncertainty analysis of the FUSE rainfall-runoff modelling platform for >1,100 catchments

    NASA Astrophysics Data System (ADS)

    Freer, J. E.; Odoni, N. A.; Coxon, G.; Bloomfield, J.; Clark, M. P.; Greene, S.; Johnes, P.; Macleod, C.; Reaney, S. M.

    2013-12-01

    If we are to learn about catchments and their hydrological function then a range of analysis techniques can be proposed from analysing observations to building complex physically based models using detailed attributes of catchment characteristics. Decisions regarding which technique is fit for a specific purpose will depend on the data available, computing resources, and the underlying reasons for the study. Here we explore defining catchment function in a relatively general sense expressed via a comparison of multiple model structures within an uncertainty analysis framework. We use the FUSE (Framework for Understanding Structural Errors - Clark et al., 2008) rainfall-runoff modelling platform and the GLUE (Generalised Likelihood Uncertainty Estimation - Beven and Freer, 2001) uncertainty analysis framework. Using these techniques we assess two main outcomes: 1) Benchmarking our predictive capability using discharge performance metrics for a diverse range of catchments across the UK 2) evaluating emergent behaviour for each catchment and/or region expressed as 'best performing' model structures that may be equally plausible representations of catchment behaviour. We shall show how such comparative hydrological modelling studies show patterns of emergent behaviour linked both to seasonal responses and to different geoclimatic regions. These results have implications for the hydrological community regarding how models can help us learn about places as hypothesis testing tools. Furthermore we explore what the limits are to such an analysis when dealing with differing data quality and information content from 'pristine' to less well characterised and highly modified catchment domains. This research has been piloted in the UK as part of the Environmental Virtual Observatory programme (EVOp), funded by NERC to demonstrate the use of cyber-infrastructure and cloud computing resources to develop better methods of linking data and models and to support scenario analysis for research, policy and operational needs.

  14. Membrane Introduction Mass Spectrometry Combined with an Orthogonal Partial-Least Squares Calibration Model for Mixture Analysis.

    PubMed

    Li, Min; Zhang, Lu; Yao, Xiaolong; Jiang, Xingyu

    2017-01-01

    The emerging membrane introduction mass spectrometry technique has been successfully used to detect benzene, toluene, ethyl benzene and xylene (BTEX), while overlapped spectra have unfortunately hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method for analyzing mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis creates poor models, with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For the multivariate analysis, a comparison to the partial least-squares (PLS) model shows that orthogonal partial least-squares (OPLS) regression exhibits optimal performance, with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP). Furthermore, the OPLS model exhibits good recoveries of 73.86-122.20% and repeatability RSDs of 1.14-4.87%. Thus, MIMS coupled with OPLS regression provides an optimal approach for quantitative BTEX mixture analysis in monitoring and predicting water pollution.
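
    A minimal multivariate-calibration sketch in the same spirit is shown below. scikit-learn ships plain PLS (an orthogonal-projection OPLS step would need a dedicated package), and the four-component mixture spectra are simulated, so this illustrates the workflow rather than the paper's exact method:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.metrics import mean_squared_error, r2_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      wav = np.linspace(0, 1, 200)
      pure = np.stack([np.exp(-((wav - c) / 0.08) ** 2)      # four overlapped
                       for c in (0.35, 0.45, 0.55, 0.65)])   # component bands

      C = rng.uniform(0, 1, size=(120, 4))                   # concentrations
      X = C @ pure + 0.01 * rng.standard_normal((120, 200))  # mixture spectra

      Xtr, Xte, Ctr, Cte = train_test_split(X, C, random_state=0)
      pls = PLSRegression(n_components=6).fit(Xtr, Ctr)
      pred = pls.predict(Xte)
      print("R2 per component:",
            np.round(r2_score(Cte, pred, multioutput="raw_values"), 3))
      print("RMSEP: %.4f" % np.sqrt(mean_squared_error(Cte, pred)))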

  15. Standardized residual as response function for order identification of multi input intervention analysis

    NASA Astrophysics Data System (ADS)

    Suhartono, Lee, Muhammad Hisyam; Rezeki, Sri

    2017-05-01

    Intervention analysis is a statistical model in the family of time series analysis that is widely used to describe the effect of an intervention caused by external or internal factors. An example of external factors that often occur in Indonesia is a disaster, either natural or man-made. The main purpose of this paper is to provide the results of theoretical studies on the identification step for determining the order of multi-input intervention analysis, used to evaluate the magnitude and duration of the impact of interventions on time series data. The theoretical results showed that standardized residuals can properly be used as the response function for determining the order of the multi-input intervention model. These results are then applied to evaluate the impact of a disaster in a real case in Indonesia, i.e. the magnitude and duration of the impact of the Lapindo mud on the volume of vehicles on the highway. The empirical results showed that the multi-input intervention model can accurately describe and explain the magnitude and duration of the impact of disasters on time series data.

  16. Effect of anaerobic digestion on sequential pyrolysis kinetics of organic solid wastes using thermogravimetric analysis and distributed activation energy model.

    PubMed

    Li, Xiaowei; Mei, Qingqing; Dai, Xiaohu; Ding, Guoji

    2017-03-01

    Thermogravimetric analysis, a Gaussian-fit-peak model (GFPM), and a distributed activation energy model (DAEM) were used for the first time to explore the effect of anaerobic digestion on the sequential pyrolysis kinetics of four organic solid wastes (OSW). Results showed that the OSW weight loss mainly occurred in the second pyrolysis stage, relating to organic matter decomposition. Compared with the raw substrate, the weight loss of the corresponding digestate was lower in the range of 180-550°C, but higher in 550-900°C. GFPM analysis revealed that organic components volatized at peak temperatures of 188-263, 373-401 and 420-462°C degraded faster during anaerobic digestion than those at 274-327°C. DAEM analysis showed that anaerobic digestion had differing effects on the activation energy for pyrolysis of the four OSW, possibly because of their different organic composition. Further investigation of the specific organic matter, i.e., protein-like and carbohydrate-like groups, is required to confirm this assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Genetic polymorphism of MTHFR C677T and premature coronary artery disease susceptibility: A meta-analysis.

    PubMed

    Hou, Xiaowen; Chen, Xin; Shi, Jingpu

    2015-07-01

    The association between the 5,10-methylenetetrahydrofolate reductase (MTHFR) C677T gene polymorphism and premature coronary artery disease (PCAD) is controversial. To obtain a more precise estimate of the association, a meta-analysis was conducted in the present study. Relevant studies were identified by searching PubMed, EMBASE, the Web of Science, the Cochrane Collaboration Database, Chinese National Knowledge Infrastructure, the Wanfang Database and China Biological Medicine up to November 2014. The meta-analysis was performed with STATA 11. 21 studies with a total of 6912 subjects, comprising 2972 PCAD patients and 3940 controls, were included. The pooled analysis showed that the MTHFR C677T gene polymorphism was probably associated with PCAD (CT vs. CC: OR=1.13, 95% CI=1.01-1.27; dominant model: OR=1.16, 95% CI=1.04-1.29; recessive model: OR=1.19, 95% CI=1.00-1.40; allele analysis: OR=1.17, 95% CI=1.01-1.34). Subgroup analysis by plasma homocysteine concentration showed a significant association in the homocysteine >15μmol/L subgroup (CT vs. CC: OR=1.44, 95% CI=1.10-1.88; TT vs. CC: OR=2.51, 95% CI=1.12-5.63; dominant model: OR=1.51, 95% CI=1.16-1.96; recessive model: OR=2.33, 95% CI=1.05-5.20; allele analysis: OR=1.48, 95% CI=1.18-1.87). Subgroup analysis by continent displayed a significant association among the Asian population (CT vs. CC: OR=1.51, 95% CI=1.23-1.86; TT vs. CC: OR=2.81, 95% CI=1.87-4.23; dominant model: OR=1.65, 95% CI=1.35-2.01; recessive model: OR=2.22, 95% CI=1.53-3.21; allele analysis: OR=1.61, 95% CI=1.37-1.89). Statistical stability and reliability were demonstrated by sensitivity analysis and publication bias outcomes. In conclusion, the meta-analysis suggests that the MTHFR C677T gene polymorphism may be associated with PCAD. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Analysis of Polder Polarization Measurements During Astex and Eucrex Experiments

    NASA Technical Reports Server (NTRS)

    Chen, Hui; Han, Qingyuan; Chou, Joyce; Welch, Ronald M.

    1997-01-01

    Polarization is more sensitive than intensity to cloud microstructure such as the particle size and shape, and multiple scattering does not wash out features in polarization as effectively as it does in the intensity. Polarization measurements, particularly in the near IR, are potentially a valuable tool for cloud identification and for studies of the microphysics of clouds. The POLDER instrument is designed to provide wide field of view bidirectional images in polarized light. During the ASTEX-SOFIA campaign on June 12th, 1992, over the Atlantic Ocean (near the Azores Islands), images of homogeneous thick stratocumulus cloud fields were acquired. During the EUCREX'94 (April, 1994) campaign, the POLDER instrument was flying over the region of Brittany (France), taking observations of cirrus clouds. This study involves model studies and data analysis of POLDER observations. Both models and data analysis show that POLDER can be used to detect cloud thermodynamic phases. Model results show that polarized reflection in the λ = 0.86 μm band is sensitive to cloud droplet sizes but not to cloud optical thickness. Comparison between model and data analysis reveals that cloud droplet sizes during ASTEX are about 5 microns, which agrees very well with the results of in situ measurements (4-5 microns). Knowing the retrieved cloud droplet sizes, the total reflected intensity of the POLDER measurements then can be used to retrieve cloud optical thickness. The close agreement between data analysis and model results during ASTEX also suggests the homogeneity of the cloud layer during that campaign.

  19. On statistical analysis of factors affecting anthocyanin extraction from Ixora siamensis

    NASA Astrophysics Data System (ADS)

    Mat Nor, N. A.; Arof, A. K.

    2016-10-01

    This study focused on designing an experimental model to evaluate the influence of the operative extraction parameters employed for anthocyanin extraction from Ixora siamensis on CIE color measurements (a*, b* and color saturation). Extractions were conducted at temperatures of 30, 55 and 80°C and soaking times of 60, 120 and 180 min, using acidified methanol solvent with trifluoroacetic acid (TFA) contents of 0.5, 1.75 and 3% (v/v). The statistical evaluation was performed by running analysis of variance (ANOVA) and regression calculations to investigate the significance of the generated model. Results show that the generated regression models adequately explain the data variation and significantly represent the actual relationship between the independent variables and the responses. The ANOVA showed high coefficient of determination values (R²) of 0.9687 for a*, 0.9621 for b* and 0.9758 for color saturation, ensuring a satisfactory fit of the developed models to the experimental data. The interaction between TFA content and extraction temperature exhibited the most significant influence on the CIE color parameters.
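
    The regression-plus-ANOVA step can be sketched as below with statsmodels; the factor levels follow the design described above, but the a* response values are invented, so the output only illustrates the workflow:

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(2)
      grid = [(T, t, f) for T in (30, 55, 80)          # temperature (C)
                        for t in (60, 120, 180)        # soaking time (min)
                        for f in (0.5, 1.75, 3.0)]     # TFA content (% v/v)
      df = pd.DataFrame(grid, columns=["temp", "time", "tfa"])
      # Invented a* response with a temp x TFA interaction plus noise
      df["a_star"] = (0.05 * df.temp + 0.01 * df.time + 2.0 * df.tfa
                      + 0.03 * df.temp * df.tfa + rng.normal(0, 0.5, len(df)))

      m = smf.ols("a_star ~ temp * tfa + time", data=df).fit()
      print(sm.stats.anova_lm(m, typ=2))               # significance of terms
      print("R-squared: %.4f" % m.rsquared)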

  20. Characterization of the Dynamics of Climate Systems and Identification of Missing Mechanisms Impacting the Long Term Predictive Capabilities of Global Climate Models Utilizing Dynamical Systems Approaches to the Analysis of Observed and Modeled Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhatt, Uma S.; Wackerbauer, Renate; Polyakov, Igor V.

    The goal of this research was to apply fractional and non-linear analysis techniques in order to develop a more complete characterization of climate change and variability for the oceanic, sea ice and atmospheric components of the Earth System. This research applied two measures of dynamical characteristics of time series, the R/S method of calculating the Hurst exponent and Renyi entropy, to observational and modeled climate data in order to evaluate how well climate models capture the long-term dynamics evident in observations. Fractional diffusion analysis was applied to ARGO ocean buoy data to quantify ocean transport. Self-organized maps were applied to North Pacific sea level pressure and analyzed in ways to improve seasonal predictability for Alaska fire weather. This body of research shows that these methods can be used to evaluate climate models and shed light on climate mechanisms (i.e., understanding why something happens). With further research, these methods show promise for improving seasonal to longer time scale forecasts of climate.
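
    A minimal rescaled-range (R/S) estimate of the Hurst exponent, the first of the two dynamical measures named above, might look as follows; the input here is white noise, for which H should come out near 0.5:

      import numpy as np

      def hurst_rs(x, min_win=8):
          """Hurst exponent from the slope of log(R/S) vs log(window size)."""
          x = np.asarray(x, dtype=float)
          sizes, rs = [], []
          n = min_win
          while n <= len(x) // 2:
              vals = []
              for start in range(0, len(x) - n + 1, n):
                  w = x[start:start + n]
                  y = np.cumsum(w - w.mean())     # cumulative deviations
                  s = w.std()
                  if s > 0:
                      vals.append((y.max() - y.min()) / s)   # rescaled range
              sizes.append(n)
              rs.append(np.mean(vals))
              n *= 2
          slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
          return slope

      x = np.random.default_rng(3).standard_normal(4096)
      print("H = %.2f" % hurst_rs(x))           # white noise: H near 0.5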

  1. Comparative Analysis of VaR Estimation of Double Long-Memory GARCH Models: Empirical Analysis of China's Stock Market

    NASA Astrophysics Data System (ADS)

    Cao, Guangxi; Guo, Jianping; Xu, Lin

    GARCH models are widely used to model the volatility of financial assets and measure VaR. Based on the characteristics of long memory, leptokurtosis and fat tails in stock market return series, we compared the ability of double long-memory GARCH models with a skewed Student-t distribution to compute VaR, through an empirical analysis of the Shanghai Composite Index (SHCI) and Shenzhen Component Index (SZCI). The results show that the ARFIMA-HYGARCH model performs better than the others and, at VaR levels at or below 2.5 percent, double long-memory GARCH models have a stronger ability to evaluate in-sample VaRs for long positions than for short positions, while the conclusion is reversed for out-of-sample VaR forecasting.
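
    A sketch of a one-step VaR calculation from a GARCH-family fit using the Python `arch` package is shown below; long-memory variants (e.g., FIGARCH) and the skewed Student-t follow the same pattern. The returns are simulated, and the VaR is computed by filtered historical simulation rather than the paper's parametric quantile, so treat it as illustrative:

      import numpy as np
      from arch import arch_model

      rng = np.random.default_rng(4)
      returns = rng.standard_t(df=5, size=2000)        # toy fat-tailed returns

      am = arch_model(returns, vol="GARCH", p=1, q=1, dist="skewt")
      res = am.fit(disp="off")

      # Filtered historical simulation: empirical quantile of standardized
      # residuals, scaled by the one-step-ahead volatility forecast.
      z = res.resid / res.conditional_volatility
      sigma1 = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
      var_long = -(res.params["mu"] + sigma1 * np.quantile(z, 0.025))
      print("1-day 2.5%% VaR, long position: %.3f" % var_long)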

  2. [Public health conceptual models and paradigms].

    PubMed

    Hernández-Girón, Carlos; Orozco-Núñez, Emanuel; Arredondo-López, Armando

    2012-01-01

    The epidemiological transition model proposed by Omran at the beginning of the 1970s (decreased fecundity rates and increased life expectancy), together with modifications in lifestyles and diet, showed increased mortality due to chronic degenerative causes. This essay thus discusses and makes a comparative analysis of some currents of thought, taking as its common thread an analysis of the epidemiological changes identified in different eras or stages and their relationships with some public health models or conceptual frameworks. Discussing public health paradigms leads to a historical recapitulation of conceptual models, ranging from magical-religious conceptions to ecological and socio-medical models. In his address on the future of epidemiology, M. Susser proposed three eras in the discipline's evolution. The epidemiological changes analysed through different approaches constitute elements of analysis that all the models discussed in this essay include in order to delimit the contributions and the variables determining them.

  3. In vitro porcine blood-brain barrier model for permeability studies: pCEL-X software pKa(FLUX) method for aqueous boundary layer correction and detailed data analysis.

    PubMed

    Yusof, Siti R; Avdeef, Alex; Abbott, N Joan

    2014-12-18

    In vitro blood-brain barrier (BBB) models from primary brain endothelial cells can closely resemble the in vivo BBB, offering valuable models to assay BBB functions and to screen potential central nervous system drugs. We have recently developed an in vitro BBB model using primary porcine brain endothelial cells. The model shows expression of tight junction proteins and high transendothelial electrical resistance, evidence for a restrictive paracellular pathway. Validation studies using small drug-like compounds demonstrated functional uptake and efflux transporters, showing the suitability of the model to assay drug permeability. However, one limitation of in vitro model permeability measurement is the presence of the aqueous boundary layer (ABL) resulting from inefficient stirring during the permeability assay. The ABL can be a rate-limiting step in permeation, particularly for lipophilic compounds, causing underestimation of the permeability. If the ABL effect is ignored, the permeability measured in vitro will not reflect the permeability in vivo. To address the issue, we explored the combination of in vitro permeability measurement using our porcine model with the pKa(FLUX) method in pCEL-X software to correct for the ABL effect and allow a detailed analysis of in vitro (transendothelial) permeability data, Papp. Published Papp values generated with porcine models by our group and other groups are also analyzed. From the Papp, the intrinsic transcellular permeability (P0) is derived by simultaneous refinement using a weighted nonlinear regression, taking into account permeability through the ABL, paracellular permeability and filter restrictions on permeation. The in vitro P0 derived for 22 compounds (35 measurements) showed good correlation with P0 derived from in situ brain perfusion data (r² = 0.61). The analysis also gave evidence for carrier-mediated uptake of naloxone, propranolol and vinblastine. The combination of the in vitro porcine model and the software analysis provides a useful tool to better predict BBB permeability in vivo and gain better mechanistic information about BBB permeation. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
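
    In series-resistance form, the ABL correction underlying the pKa(FLUX) analysis can be summarized as follows (a sketch of the standard relations for a monoprotic compound, not the full pCEL-X weighted refinement):

      \frac{1}{P_{\mathrm{app}}} \;=\; \frac{1}{P_{\mathrm{ABL}}} + \frac{1}{P_m},
      \qquad
      P_m \;=\; \frac{P_0}{1 + 10^{\pm(\mathrm{pH} - \mathrm{p}K_a)}}

    with the + sign for acids and the - sign for bases, so that P_app saturates at P_ABL when the membrane is not rate-limiting; fitting log P_app versus pH then recovers the intrinsic transcellular permeability P_0.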

  4. A simulations approach for meta-analysis of genetic association studies based on additive genetic model.

    PubMed

    John, Majnu; Lencz, Todd; Malhotra, Anil K; Correll, Christoph U; Zhang, Jian-Ping

    2018-06-01

    Meta-analysis of genetic association studies is being increasingly used to assess phenotypic differences between genotype groups. When the underlying genetic model is assumed to be dominant or recessive, assessing the phenotype differences based on summary statistics, reported for individual studies in a meta-analysis, is a valid strategy. However, when the genetic model is additive, a similar strategy based on summary statistics will lead to biased results. This fact about the additive model is one of the things that we establish in this paper, using simulations. The main goal of this paper is to present an alternate strategy for the additive model based on simulating data for the individual studies. We show that the alternate strategy is far superior to the strategy based on summary statistics.

  5. Using Student Writing and Lexical Analysis to Reveal Student Thinking about the Role of Stop Codons in the Central Dogma

    PubMed Central

    Prevost, Luanna B.; Smith, Michelle K.; Knight, Jennifer K.

    2016-01-01

    Previous work has shown that students have persistent difficulties in understanding how central dogma processes can be affected by a stop codon mutation. To explore these difficulties, we modified two multiple-choice questions from the Genetics Concept Assessment into three open-ended questions that asked students to write about how a stop codon mutation potentially impacts replication, transcription, and translation. We then used computer-assisted lexical analysis combined with human scoring to categorize student responses. The lexical analysis models showed high agreement with human scoring, demonstrating that this approach can be successfully used to analyze large numbers of student written responses. The results of this analysis show that students’ ideas about one process in the central dogma can affect their thinking about subsequent and previous processes, leading to mixed models of conceptual understanding. PMID:27909016

  6. Four Bad Habits of Modern Psychologists

    PubMed Central

    Grice, James; Cota, Lisa; Taylor, Zachery; Garner, Samantha; Medellin, Eliwid; Vest, Adam

    2017-01-01

    Four data sets from studies included in the Reproducibility Project were re-analyzed to demonstrate a number of flawed research practices (i.e., “bad habits”) of modern psychology. Three of the four studies were successfully replicated, but re-analysis showed that in one study most of the participants responded in a manner inconsistent with the researchers’ theoretical model. In the second study, the replicated effect was shown to be an experimental confound, and in the third study the replicated statistical effect was shown to be entirely trivial. The fourth study was an unsuccessful replication, yet re-analysis of the data showed that questioning the common assumptions of modern psychological measurement can lead to novel techniques of data analysis and potentially interesting findings missed by traditional methods of analysis. Considered together, these new analyses show that while it is true replication is a key feature of science, causal inference, modeling, and measurement are equally important and perhaps more fundamental to obtaining truly scientific knowledge of the natural world. It would therefore be prudent for psychologists to confront the limitations and flaws in their current analytical methods and research practices. PMID:28805739

  7. Four Bad Habits of Modern Psychologists.

    PubMed

    Grice, James; Barrett, Paul; Cota, Lisa; Felix, Crystal; Taylor, Zachery; Garner, Samantha; Medellin, Eliwid; Vest, Adam

    2017-08-14

    Four data sets from studies included in the Reproducibility Project were re-analyzed to demonstrate a number of flawed research practices (i.e., "bad habits") of modern psychology. Three of the four studies were successfully replicated, but re-analysis showed that in one study most of the participants responded in a manner inconsistent with the researchers' theoretical model. In the second study, the replicated effect was shown to be an experimental confound, and in the third study the replicated statistical effect was shown to be entirely trivial. The fourth study was an unsuccessful replication, yet re-analysis of the data showed that questioning the common assumptions of modern psychological measurement can lead to novel techniques of data analysis and potentially interesting findings missed by traditional methods of analysis. Considered together, these new analyses show that while it is true replication is a key feature of science, causal inference, modeling, and measurement are equally important and perhaps more fundamental to obtaining truly scientific knowledge of the natural world. It would therefore be prudent for psychologists to confront the limitations and flaws in their current analytical methods and research practices.

  8. Structural modeling and analysis of an effluent treatment process for electroplating--a graph theoretic approach.

    PubMed

    Kumar, Abhishek; Clement, Shibu; Agrawal, V P

    2010-07-15

    An attempt is made to address a few ecological and environmental issues by developing different structural models for an effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. A hierarchical tree and a block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage retrieval and commercial off-the-shelf purchases of different subsystems. This is achieved by developing a graph theoretic model, matrix models and a variable permanent function model. Analysis is carried out by permanent function, hierarchical tree and block diagram methods. Storage and retrieval is done using matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end user are identified. Copyright © 2010 Elsevier B.V. All rights reserved.

  9. An efficient current-based logic cell model for crosstalk delay analysis

    NASA Astrophysics Data System (ADS)

    Nazarian, Shahin; Das, Debasish

    2013-04-01

    Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to the nonlinear behaviour of CMOS cells with respect to the voltage signal at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta delay analysis in CMOS VLSI circuits. Existing current source models are expensive and need a new set of Spice-based characterisations, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from typical cell libraries such as NLDM, with accuracy much higher than NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.

  10. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    PubMed

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
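
    A compact sketch of the GLS framing: with fluxes split into measured and unknown parts, the metabolite balances S_u v_u = -S_m v_m are solved with weights from the propagated measurement covariance, and the residual chi-square plays the role of the fit test. The toy network below (three metabolites, two measured and two unknown fluxes) is hypothetical:

      import numpy as np
      from scipy import stats

      # Toy network, metabolites A, B, C; columns are reactions.
      S_m = np.array([[ 1.0,  0.0],      # measured: uptake -> A, C -> out
                      [ 0.0,  0.0],
                      [ 0.0, -1.0]])
      S_u = np.array([[-1.0,  0.0],      # unknown: A -> B, B -> C
                      [ 1.0, -1.0],
                      [ 0.0,  1.0]])

      v_m = np.array([10.2, 9.9])        # measured fluxes (redundant pair)
      Sigma = np.diag([0.3**2, 0.3**2])  # measurement covariance

      # Balances S_u v_u = -S_m v_m; weight by the propagated covariance.
      b = -S_m @ v_m
      cov_b = S_m @ Sigma @ S_m.T
      # Rows untouched by measurement noise act as (numerically) hard
      # constraints through the variance floor below.
      W = np.diag(1.0 / np.maximum(np.diag(cov_b), 1e-12))
      v_u = np.linalg.solve(S_u.T @ W @ S_u, S_u.T @ W @ b)

      r = S_u @ v_u - b                  # balance residuals
      chi2 = float(r @ W @ r)
      dof = S_u.shape[0] - S_u.shape[1]  # one redundant measurement here
      print("v_u =", np.round(v_u, 3),
            " chi2 = %.3f, p = %.3f" % (chi2, 1 - stats.chi2.cdf(chi2, dof)))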

  11. [State Recognition of Solid Fermentation Process Based on Near Infrared Spectroscopy with Adaboost and Spectral Regression Discriminant Analysis].

    PubMed

    Yu, Shuang; Liu, Guo-hai; Xia, Rong-sheng; Jiang, Hui

    2016-01-01

    In order to achieve rapid monitoring of the process state of solid state fermentation (SSF), this study attempted qualitative identification of the process state of SSF of feed protein by use of the Fourier transform near infrared (FT-NIR) spectroscopy analysis technique. More specifically, FT-NIR spectroscopy combined with the Adaboost-SRDA-NN integrated learning algorithm was used as an analysis tool to accurately and rapidly monitor chemical and physical changes in SSF of feed protein without the need for chemical analysis. Firstly, the raw spectra of all 140 fermentation samples were collected with a Fourier transform near infrared spectrometer (Antaris II), and the raw spectra were preprocessed by use of the standard normal variate transformation (SNV) spectral preprocessing algorithm. Thereafter, the characteristic information of the preprocessed spectra was extracted by use of spectral regression discriminant analysis (SRDA). Finally, the nearest neighbors (NN) algorithm was selected as the basic classifier, and a state recognition model was built to identify different fermentation samples in the validation set. Experimental results showed that the SRDA-NN model revealed superior performance compared with two other NN models, which were developed by use of feature information from principal component analysis (PCA) and linear discriminant analysis (LDA), and the correct recognition rate of the SRDA-NN model reached 94.28% in the validation set. In this work, in order to further improve the recognition accuracy of the final model, the Adaboost-SRDA-NN ensemble learning algorithm was proposed by integrating the Adaboost and SRDA-NN methods, and the presented algorithm was used to construct an online monitoring model of the process state of SSF of feed protein. Experimental results showed that the prediction performance of the SRDA-NN model was further enhanced by the Adaboost lifting algorithm, and the correct recognition rate of the Adaboost-SRDA-NN model reached 100% in the validation set. The overall results demonstrate that the SRDA algorithm can effectively extract spectral feature information and reduce the spectral dimensionality in the model calibration process of qualitative NIR spectroscopy analysis. In addition, the Adaboost lifting algorithm can improve the classification accuracy of the final model. The results obtained in this work can provide a research foundation for developing online monitoring instruments for SSF processes.

  12. Credit scoring analysis using kernel discriminant

    NASA Astrophysics Data System (ADS)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, namely normal, Epanechnikov, biweight and triweight. The models' accuracy was compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, a normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
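
    A kernel discriminant can be sketched as kernel density estimates of the class-conditional densities combined with class priors; the sketch below uses scikit-learn's KernelDensity with the Epanechnikov kernel on synthetic stand-in data, not the Indonesian institution's records:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KernelDensity

      rng = np.random.default_rng(5)
      good = rng.normal([1.0, 0.5], 0.6, size=(300, 2))    # repaying applicants
      bad = rng.normal([-0.5, -0.5], 0.8, size=(150, 2))   # defaulting applicants
      X = np.vstack([good, bad])
      y = np.r_[np.ones(300), np.zeros(150)]
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      def fit_kde(data, kernel="epanechnikov", bw=0.4):
          return KernelDensity(kernel=kernel, bandwidth=bw).fit(data)

      kde1, kde0 = fit_kde(Xtr[ytr == 1]), fit_kde(Xtr[ytr == 0])
      prior1 = ytr.mean()

      # Classify by the larger log posterior: log p(x|class) + log prior.
      score1 = kde1.score_samples(Xte) + np.log(prior1)
      score0 = kde0.score_samples(Xte) + np.log(1 - prior1)
      pred = (score1 > score0).astype(float)

      sensitivity = pred[yte == 1].mean()
      specificity = 1 - pred[yte == 0].mean()
      print("sensitivity %.4f, specificity %.4f" % (sensitivity, specificity))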

  13. Confirmatory Factor Analysis of the Minnesota Nicotine Withdrawal Scale

    PubMed Central

    Toll, Benjamin A.; O’Malley, Stephanie S.; McKee, Sherry A.; Salovey, Peter; Krishnan-Sarin, Suchitra

    2008-01-01

    The authors examined the factor structure of the Minnesota Nicotine Withdrawal Scale (MNWS) using confirmatory factor analysis in clinical research samples of smokers trying to quit (n = 723). Three confirmatory factor analytic models, based on previous research, were tested with each of the 3 study samples at multiple points in time. A unidimensional model including all 8 MNWS items was found to be the best explanation of the data. This model produced fair to good internal consistency estimates. Additionally, these data revealed that craving should be included in the total score of the MNWS. Factor scores derived from this single-factor, 8-item model showed that increases in withdrawal were associated with poor smoking outcome for 2 of the clinical studies. Confirmatory factor analyses of change scores showed that the MNWS symptoms cohere as a syndrome over time. Future investigators should report a total score using all of the items from the MNWS. PMID:17563141

  14. Convex Arrhenius plots and their interpretation

    PubMed Central

    Truhlar, Donald G.; Kohen, Amnon

    2001-01-01

    This paper draws attention to selected experiments on enzyme-catalyzed reactions that show convex Arrhenius plots, which are very rare, and points out that Tolman's interpretation of the activation energy places a fundamental model-independent constraint on any detailed explanation of these reactions. The analysis presented here shows that in such systems, the rate coefficient as a function of energy is not just increasing more slowly than expected, it is actually decreasing. This interpretation of the data provides a constraint on proposed microscopic models, i.e., it requires that any successful model of a reaction with a convex Arrhenius plot should be consistent with the microcanonical rate coefficient being a decreasing function of energy. The implications and limitations of this analysis to interpreting enzyme mechanisms are discussed. This model-independent conclusion has broad applicability to all fields of kinetics, and we also draw attention to an analogy with diffusion in metastable fluids and glasses. PMID:11158559
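    For reference, Tolman's interpretation referred to above can be written (in notation of our choosing, not the abstract's) as

\[
E_a(T) \;\equiv\; -R\,\frac{d\ln k}{d(1/T)} \;=\; \langle E\rangle^{\ddagger} - \langle E\rangle ,
\]

    where \(\langle E\rangle^{\ddagger}\) is the average energy of the systems that react and \(\langle E\rangle\) is the average energy of all reactant systems. A convex Arrhenius plot has \(d^{2}(\ln k)/d(1/T)^{2} > 0\), so \(E_a\) falls as the temperature is lowered; the paper's argument is that this is only possible when the microcanonical rate coefficient \(k(E)\) decreases with energy over the relevant range.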

  15. Verification and Validation of EnergyPlus Phase Change Material Model for Opaque Wall Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tabares-Velasco, P. C.; Christensen, C.; Bianchi, M.

    2012-08-01

    Phase change materials (PCMs) represent a technology that may reduce peak loads and HVAC energy consumption in buildings. A few building energy simulation programs have the capability to simulate PCMs, but their accuracy has not been completely tested. This study shows the procedure used to verify and validate the PCM model in EnergyPlus using an approach similar to that dictated by ASHRAE Standard 140, which consists of analytical verification, comparative testing, and empirical validation. This process was valuable, as two bugs were identified and fixed in the PCM model, and version 7.1 of EnergyPlus will have a validated PCM model. Preliminary results using whole-building energy analysis show that careful analysis should be done when designing PCMs in homes, as their thermal performance depends on several variables such as PCM properties and location in the building envelope.

  16. Understanding Coupling of Global and Diffuse Solar Radiation with Climatic Variability

    NASA Astrophysics Data System (ADS)

    Hamdan, Lubna

    Global solar radiation data are important for a wide variety of applications and scientific studies. However, such data are not readily available because of the cost of measuring equipment and the tedious maintenance and calibration requirements. A wide variety of models have been introduced to estimate and/or predict global solar radiation and its components (direct and diffuse radiation) from other readily obtainable atmospheric parameters. The goal of this research is to understand the coupling of global and diffuse solar radiation with climatic variability by investigating the relationships between these radiations and atmospheric parameters. For this purpose, we applied multilinear regression analysis to the data of the National Solar Radiation Database 1991-2010 Update. The analysis showed that the main atmospheric parameters affecting the amount of global radiation received at the earth's surface are cloud cover and relative humidity; global radiation correlates negatively with both variables. Linear models are excellent approximations of the relationship between atmospheric parameters and global radiation: a linear model with the predictors total cloud cover, relative humidity, and extraterrestrial radiation is able to explain around 98% of the variability in global radiation. For diffuse radiation, the analysis showed that the main atmospheric parameters affecting the amount received at the earth's surface are cloud cover and aerosol optical depth; diffuse radiation correlates positively with both variables. Linear models are very good approximations here as well: a linear model with the predictors total cloud cover, aerosol optical depth, and extraterrestrial radiation is able to explain around 91% of the variability in diffuse radiation. Prediction analysis showed that the fitted linear models predict diffuse radiation with a test adjusted R2 of 0.93, using data on total cloud cover, aerosol optical depth, relative humidity, and extraterrestrial radiation. For prediction purposes, however, nonlinear terms or nonlinear models might further enhance the prediction of diffuse radiation.
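    A minimal sketch of the kind of multilinear regression described above, fit on synthetic data; the variable names, coefficients, and noise level are invented, not the study's values.

```python
# Toy multilinear regression: global radiation vs. cloud cover, humidity,
# and extraterrestrial radiation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
n = 5000
cloud = rng.uniform(0, 1, n)            # total cloud cover fraction
rh = rng.uniform(0.1, 1.0, n)           # relative humidity
extra = rng.uniform(200, 1400, n)       # extraterrestrial radiation, W/m^2

# Invented "global radiation": scales with extraterrestrial input,
# reduced by cloud cover and humidity, plus noise.
ghi = extra * (0.75 - 0.45 * cloud - 0.10 * rh) + rng.normal(0, 15, n)

X = np.column_stack([cloud, rh, extra])
model = LinearRegression().fit(X, ghi)
print("R^2:", r2_score(ghi, model.predict(X)))
print("coefficients:", dict(zip(["cloud", "rh", "extra"], model.coef_)))
```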

  17. Comparison of Recent Modeled and Observed Trends in Total Column Ozone

    NASA Technical Reports Server (NTRS)

    Andersen, S. B.; Weatherhead, E. C.; Stevermer, A.; Austin, J.; Bruehl, C.; Fleming, E. L.; deGrandpre, J.; Grewe, V.; Isaksen, I.; Pitari, G.; et al.

    2006-01-01

    We present a comparison of trends in total column ozone from 10 two-dimensional and 4 three-dimensional models and solar backscatter ultraviolet-2 (SBUV/2) satellite observations from the period 1979-2003. Trends for the past (1979-2000), the recent 7 years (1996-2003), and the future (2000-2050) are compared. We have analyzed the data using both simple linear trends and linear trends derived with a hockey stick method including a turnaround point in 1996. If the last 7 years, 1996-2003, are analyzed in isolation, the SBUV/2 observations show no increase in ozone, and most of the models predict continued depletion, although at a lesser rate. In sharp contrast to this, the recent data show positive trends for the Northern and the Southern Hemispheres if the hockey stick method with a turnaround point in 1996 is employed for the models and observations. The analysis shows that the observed positive trends in both hemispheres in the recent 7-year period are much larger than what is predicted by the models. The trends derived with the hockey stick method are very dependent on the values just before the turnaround point. The analysis of the recent data therefore depends greatly on these years being representative of the overall trend. Most models underestimate the past trends at middle and high latitudes. This is particularly pronounced in the Northern Hemisphere. Quantitatively, there is much disagreement among the models concerning future trends. However, the models agree that future trends are expected to be positive and less than half the magnitude of the past downward trends. Examination of the model projections shows that there is virtually no correlation between the past and future trends from the individual models.

  18. Comparison of recent modeled and observed trends in total column ozone

    NASA Astrophysics Data System (ADS)

    Andersen, S. B.; Weatherhead, E. C.; Stevermer, A.; Austin, J.; Brühl, C.; Fleming, E. L.; de Grandpré, J.; Grewe, V.; Isaksen, I.; Pitari, G.; Portmann, R. W.; Rognerud, B.; Rosenfield, J. E.; Smyshlyaev, S.; Nagashima, T.; Velders, G. J. M.; Weisenstein, D. K.; Xia, J.

    2006-01-01

    We present a comparison of trends in total column ozone from 10 two-dimensional and 4 three-dimensional models and solar backscatter ultraviolet-2 (SBUV/2) satellite observations from the period 1979-2003. Trends for the past (1979-2000), the recent 7 years (1996-2003), and the future (2000-2050) are compared. We have analyzed the data using both simple linear trends and linear trends derived with a hockey stick method including a turnaround point in 1996. If the last 7 years, 1996-2003, are analyzed in isolation, the SBUV/2 observations show no increase in ozone, and most of the models predict continued depletion, although at a lesser rate. In sharp contrast to this, the recent data show positive trends for the Northern and the Southern Hemispheres if the hockey stick method with a turnaround point in 1996 is employed for the models and observations. The analysis shows that the observed positive trends in both hemispheres in the recent 7-year period are much larger than what is predicted by the models. The trends derived with the hockey stick method are very dependent on the values just before the turnaround point. The analysis of the recent data therefore depends greatly on these years being representative of the overall trend. Most models underestimate the past trends at middle and high latitudes. This is particularly pronounced in the Northern Hemisphere. Quantitatively, there is much disagreement among the models concerning future trends. However, the models agree that future trends are expected to be positive and less than half the magnitude of the past downward trends. Examination of the model projections shows that there is virtually no correlation between the past and future trends from the individual models.
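    The "hockey stick" fit used in these two records is a continuous piecewise-linear trend with a fixed turnaround point. The sketch below fits such a model by least squares to an invented annual ozone series; only the fitting procedure, not the data, reflects the studies.

```python
# Hockey-stick trend: y = b0 + b1*(t - 1996) + b2*max(t - 1996, 0),
# so the pre-1996 slope is b1 and the post-1996 slope is b1 + b2.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1979, 2004)
ozone = (320 - 0.9 * (years - 1979)
         + 0.5 * np.clip(years - 1996, 0, None)
         + rng.normal(0, 1.5, years.size))      # synthetic column ozone, DU

t = years - 1996.0
hinge = np.clip(t, 0, None)                      # extra slope after turnaround
A = np.column_stack([np.ones_like(t), t, hinge])
(b0, slope_pre, delta), *_ = np.linalg.lstsq(A, ozone, rcond=None)
print(f"pre-1996 trend:  {slope_pre:+.2f} DU/yr")
print(f"post-1996 trend: {slope_pre + delta:+.2f} DU/yr")
```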

  19. On the numerical treatment of nonlinear source terms in reaction-convection equations

    NASA Technical Reports Server (NTRS)

    Lafon, A.; Yee, H. C.

    1992-01-01

    The objectives of this paper are to investigate how various numerical treatments of the nonlinear source term in a model reaction-convection equation can affect the stability of steady-state numerical solutions and to show under what conditions the conventional linearized analysis breaks down. The underlying goal is to provide part of the basic building blocks toward the ultimate goal of constructing suitable numerical schemes for hypersonic reacting flows, combustions and certain turbulence models in compressible Navier-Stokes computations. It can be shown that nonlinear analysis uncovers much of the nonlinear phenomena which linearized analysis is not capable of predicting in a model reaction-convection equation.
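    For orientation, a representative model problem of this type (the cubic source below follows related work by LeVeque and Yee; the paper's exact source term may differ) is

\[
\frac{\partial u}{\partial t} + a\,\frac{\partial u}{\partial x} = S(u), \qquad
S(u) = -\mu\, u\,(u-1)\!\left(u-\tfrac{1}{2}\right),
\]

    where \(\mu\) controls the stiffness of the source. As \(\mu\) grows, the choice of discretization for \(S(u)\) (pointwise explicit, linearized implicit, or fully implicit) increasingly determines which steady states the scheme admits, which is where a purely linearized stability analysis can break down.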

  20. An effective convolutional neural network model for Chinese sentiment analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Chen, Mengdong; Liu, Lianzhong; Wang, Yadong

    2017-06-01

    Nowadays microblogging is becoming more and more popular, and people are increasingly accustomed to expressing their opinions on Twitter, Facebook and Sina Weibo. Sentiment analysis of microblogs has received significant attention, both in academia and in industry, yet Chinese microblog exploration still needs much further work. In recent years CNNs have also been used for NLP tasks and have achieved good results. However, these methods ignore the effective use of a large number of existing sentiment resources. For this purpose, we propose a Lexicon-based Sentiment Convolutional Neural Network (LSCNN) model focused on Weibo sentiment analysis, which combines two CNNs, trained individually on sentiment features and on word embeddings, at the fully connected hidden layer. The experimental results show that our model outperforms a CNN model using only word-embedding features on the microblog sentiment analysis task.
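    A minimal sketch of a two-branch text CNN that fuses word-embedding features with lexicon-derived sentiment features at the fully connected layer, in the spirit of the LSCNN described above. The layer sizes, kernel widths, and fusion scheme are guesses; the paper's exact architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class TwoBranchCNN(nn.Module):
    def __init__(self, vocab=5000, emb=128, lex=8, filters=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv_w = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.conv_l = nn.Conv1d(lex, filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(2 * filters, 2)         # positive / negative

    def forward(self, tokens, lex_feats):
        # tokens: (batch, seq) ids; lex_feats: (batch, seq, lex) per-token scores
        w = self.embed(tokens).transpose(1, 2)      # (batch, emb, seq)
        l = lex_feats.transpose(1, 2)               # (batch, lex, seq)
        w = torch.relu(self.conv_w(w)).max(dim=2).values   # global max pool
        l = torch.relu(self.conv_l(l)).max(dim=2).values
        return self.fc(torch.cat([w, l], dim=1))    # fuse at the FC layer

model = TwoBranchCNN()
logits = model(torch.randint(0, 5000, (4, 50)), torch.rand(4, 50, 8))
print(logits.shape)   # torch.Size([4, 2])
```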

  1. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  2. PTPN22 1858C > T polymorphism and susceptibility to systemic lupus erythematosus: a meta-analysis update.

    PubMed

    de Lima, Suelen Cristina; Adelino, José Eduardo; Crovella, Sergio; de Azevedo Silva, Jaqueline; Sandrin-Garcia, Paula

    2017-11-01

    Studies performed in past years have shown the PTPN22 1858 C > T (rs2476601) polymorphism to be associated with systemic lupus erythematosus (SLE) susceptibility, although conflicting findings are still reported. In this context, a powerful statistical study, such as a meta-analysis, is necessary to establish a consensus. The aim of this study was to evaluate association studies between the PTPN22 1858 C > T polymorphism and SLE in a meta-analysis update including three studies published in the last three years. A total of 3868 SLE patients and 7458 healthy individuals were considered, covering 19 studies from Asian, American, European and Latin ethnic groups. Odds ratios (OR) were computed for allelic, dominant and recessive genetic models. A statistically significant association was found between the PTPN22 1858 C > T polymorphism and susceptibility to SLE in all inheritance models. The allelic genetic model (OR = 1.54, 95% confidence interval (CI) = 1.38-1.72, p = .000) shows that the T allele confers increased SLE susceptibility, as does the recessive genetic model (OR = 2.04, 95% CI = 1.09-3.82, p = .030) for the T/T genotype. The dominant genetic model instead shows that the C/C genotype confers lower susceptibility to SLE development (OR = 0.62, 95% CI = 0.54-0.72, p = .000). In addition, we provided an ethnicity-stratified meta-analysis, which showed association in the Caucasian (OR = 1.47, p = .000) and Latin (OR = 2.41, p = .000) ethnic groups; the rs2476601 polymorphism was not associated with SLE in either the Asian (OR = 1.31; p = .54) or the African (OR = 2.04; p = .22) populations. In conclusion, the present meta-analysis update confirms that the T allele and T/T genotype of the PTPN22 1858 C > T polymorphism confer SLE susceptibility, particularly in the Caucasian and Latin groups, suggesting PTPN22 1858 C > T as a potential genetic marker of SLE susceptibility.
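    The pooled odds ratios in such meta-analyses are typically inverse-variance weighted combinations of per-study log odds ratios. The sketch below shows a fixed-effect version on invented 2x2 counts; it is illustrative only and does not reproduce the study's data or its random-effects checks.

```python
# Fixed-effect (inverse-variance) pooled odds ratio on invented study counts.
import numpy as np

# Each row: (cases_exposed, cases_unexposed, controls_exposed, controls_unexposed)
studies = np.array([
    [60, 140, 80, 320],
    [45, 155, 70, 330],
    [30,  90, 40, 210],
], dtype=float)

a, b, c, d = studies.T
log_or = np.log((a * d) / (b * c))
var = 1 / a + 1 / b + 1 / c + 1 / d          # Woolf's variance of log OR
w = 1 / var
pooled = (w * log_or).sum() / w.sum()
se = np.sqrt(1 / w.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```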

  3. Parametric Model Based On Imputations Techniques for Partly Interval Censored Data

    NASA Astrophysics Data System (ADS)

    Zyoud, Abdallah; Elfaki, F. A. M.; Hrairi, Meftah

    2017-12-01

    The term ‘survival analysis’ is used in a broad sense to describe a collection of statistical procedures for data analysis in which the outcome variable of interest is the time until an event occurs, where the time to failure of a specific experimental unit may be censored: right, left, interval, or partly interval censored (PIC). In this paper, the model was analysed using a parametric Cox model on PIC data. Moreover, several imputation techniques were used: midpoint, left and right point, random, mean, and median. Maximum likelihood estimation was used to obtain the estimated survival function. These estimates were then compared with existing models, such as the Turnbull and Cox models, on clinical trial data (breast cancer data), which demonstrated the validity of the proposed model. The results indicated that the parametric Cox model is superior in terms of estimation of survival functions, likelihood ratio tests, and their p-values. Moreover, among the imputation techniques, midpoint, random, mean, and median imputation gave better results with respect to estimation of the survival function.
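    To illustrate the imputation step, the sketch below applies midpoint imputation to invented interval-censored times and then fits a standard survival model. lifelines' CoxPHFitter is used as a stand-in for the paper's parametric model; the intervals, censoring scheme, and covariate are all assumptions.

```python
# Midpoint imputation for interval-censored times, then a Cox fit (stand-in).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 200
left = rng.exponential(10, n)                 # last time seen event-free
right = left + rng.exponential(4, n)          # first time seen with event
observed = rng.random(n) < 0.7                # the rest are right-censored

df = pd.DataFrame({
    # Midpoint imputation: replace each interval by its midpoint; for
    # right-censored rows keep the left endpoint as the censoring time.
    "T": np.where(observed, (left + right) / 2, left),
    "E": observed.astype(int),
    "x": rng.normal(size=n),                  # an arbitrary covariate
})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()
```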

  4. Spectral stability of unitary network models

    NASA Astrophysics Data System (ADS)

    Asch, Joachim; Bourget, Olivier; Joye, Alain

    2015-08-01

    We review various unitary network models used in quantum computing, spectral analysis or condensed matter physics and establish relationships between them. We show that symmetric one-dimensional quantum walks are universal, as are CMV matrices. We prove spectral stability and propagation properties for general asymptotically uniform models by means of unitary Mourre theory.

  5. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  6. Cluster analysis of multiple planetary flow regimes

    NASA Technical Reports Server (NTRS)

    Mo, Kingtse; Ghil, Michael

    1987-01-01

    A modified cluster analysis method was developed to identify spatial patterns of planetary flow regimes and to study transitions between them. This method was applied first to a simple deterministic model and second to Northern Hemisphere (NH) 500 mb data. The dynamical model is governed by the fully nonlinear, equivalent-barotropic vorticity equation on the sphere. Clusters of points in the model's phase space are associated either with a few persistent events or with many transient ones. Two stationary clusters have patterns similar to the unstable stationary model solutions, zonal or blocked. Transient clusters of wave trains serve as way stations between the stationary ones. For the NH data, cluster analysis was performed in the subspace of the first seven empirical orthogonal functions (EOFs). Stationary clusters are found in the low-frequency band of more than 10 days, and transient clusters in the bandpass frequency window between 2.5 and 6 days. In the low-frequency band, three pairs of clusters determine, respectively, EOFs 1, 2, and 3. They exhibit well-known regional features, such as blocking, the Pacific/North American (PNA) pattern, and wave trains. Both the model and the low-pass data show strong bimodality. Clusters in the bandpass window show wave-train patterns in the two jet exit regions. They are related, as in the model, to transitions between stationary clusters.
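    An illustrative version of the "cluster in a leading-EOF subspace" step: project the field onto its first few principal components, then cluster the projections. The data below are synthetic with two built-in regimes; in the study the EOFs come from NH 500 mb height fields and the clustering method is the authors' modified one, not k-means.

```python
# PCA projection (analog of leading EOFs) followed by clustering.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# 1000 "daily maps" on 40 grid points, with two preferred regimes built in
base = rng.normal(size=(1000, 40))
regime = rng.integers(0, 2, 1000)
base[regime == 1, :10] += 2.0                 # shift part of the domain

pcs = PCA(n_components=7).fit_transform(base)   # analog of the first 7 EOFs
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(labels))                      # cluster populations
```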

  7. Parametric design and analysis on the landing gear of a planet lander using the response surface method

    NASA Astrophysics Data System (ADS)

    Zheng, Guang; Nie, Hong; Luo, Min; Chen, Jinbao; Man, Jianfeng; Chen, Chuanzhi; Lee, Heow Pueh

    2018-07-01

    The purpose of this paper is to obtain the design parameter-landing response relation for quickly designing the configuration of the landing gear of a planet lander. To achieve this, parametric studies on the landing gear are carried out using the response surface method (RSM), based on a single-landing-gear landing model validated by experimental results. From the design of experiment (DOE) results of the landing model, the RS (response surface) functions of the three crucial landing responses are obtained, and a sensitivity analysis (SA) of the corresponding parameters is performed. Also, two multi-objective optimization designs of the landing gear are carried out. The analysis results show that the RS model performs well for the landing response design process, with a minimum fitting accuracy of 98.99%. The most sensitive parameters for the three landing responses are the design size of the buffers, strut friction, and the diameter of the bending beam. Moreover, good agreement between the simulated-model and RS-model results is obtained in the two optimized designs, which shows that the RS model coupled with the FE (finite element) method is an efficient way to obtain the design configuration of the landing gear.

  8. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, is studied. The tests show that correlation has a very important impact on the results of sensitivity analysis; the effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
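    A minimal sketch of a correlation-ratio style sensitivity estimate, the reference method (CRM) mentioned above: eta_i^2 = Var(E[Y|X_i]) / Var(Y), estimated by binning X_i. The toy model and its correlated inputs are invented; the JCA model itself is not reproduced here.

```python
# Correlation-ratio sensitivity on a toy model with correlated Gaussian inputs.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
cov = [[1.0, 0.6], [0.6, 1.0]]               # correlated inputs
x1, x2 = rng.multivariate_normal([0, 0], cov, size=n).T
y = x1 + 0.5 * x2**2 + 0.1 * rng.normal(size=n)

def correlation_ratio(x, y, bins=50):
    """Var(E[Y|X]) / Var(Y), estimated with equal-count bins of x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    which = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    cond_means = np.array([y[which == b].mean() for b in range(bins)])
    counts = np.bincount(which, minlength=bins)
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()

print("eta^2(x1) =", round(correlation_ratio(x1, y), 3))
print("eta^2(x2) =", round(correlation_ratio(x2, y), 3))
```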

  9. Static Aeroelastic Analysis with an Inviscid Cartesian Method

    NASA Technical Reports Server (NTRS)

    Rodriguez, David L.; Aftosmis, Michael J.; Nemec, Marian; Smith, Stephen C.

    2014-01-01

    An embedded-boundary, Cartesian-mesh flow solver is coupled with a three degree-of-freedom structural model to perform static, aeroelastic analysis of complex aircraft geometries. The approach solves a nonlinear, aerostructural system of equations using a loosely-coupled strategy. An open-source, 3-D discrete-geometry engine is utilized to deform a triangulated surface geometry according to the shape predicted by the structural model under the computed aerodynamic loads. The deformation scheme is capable of modeling large deflections and is applicable to the design of modern, very-flexible transport wings. The coupling interface is modular so that aerodynamic or structural analysis methods can be easily swapped or enhanced. After verifying the structural model with comparisons to Euler beam theory, two applications of the analysis method are presented as validation. The first is a relatively stiff, transport wing model which was a subject of a recent workshop on aeroelasticity. The second is a very flexible model recently tested in a low speed wind tunnel. Both cases show that the aeroelastic analysis method produces results in excellent agreement with experimental data.

  10. Optimum element density studies for finite-element thermal analysis of hypersonic aircraft structures

    NASA Technical Reports Server (NTRS)

    Ko, William L.; Olona, Timothy; Muramoto, Kyle M.

    1990-01-01

    Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space-shuttle-orbiter-type large hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models, with different element densities, set up for one cell of the orbiter wing. A method for optimizing the transient thermal analysis computer central processing unit (CPU) time is also discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled to examine thermal analysis solution accuracy and the extent of the computation CPU time requirements. The results showed that the distributions of structural temperatures and thermal stresses obtained from this wing segment model were satisfactory and that the computation CPU time was at an acceptable level. The studies suggest that modeling large hypersonic aircraft structures with high-density elements for transient thermal analysis is feasible if a CPU optimization technique is used.

  11. Do Items that Measure Self-Perceived Physical Appearance Function Differentially across Gender Groups? An Application of the MACS Model

    ERIC Educational Resources Information Center

    Gonzalez-Roma, Vicente; Tomas, Ines; Ferreres, Doris; Hernandez, Ana

    2005-01-01

    The aims of this study were to investigate whether the 6 items of the Physical Appearance Scale (Marsh, Richards, Johnson, Roche, & Tremayne, 1994) show differential item functioning (DIF) across gender groups of adolescents, and to show how this can be done using the multigroup mean and covariance structure (MG-MACS) analysis model. Two samples…

  12. Application Research of Fault Tree Analysis in Grid Communication System Corrective Maintenance

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Yang, Zhenwei; Kang, Mei

    2018-01-01

    This paper attempts to apply the fault tree analysis method to the corrective maintenance of grid communication systems. Through the establishment of a fault tree model of a typical system, combined with engineering experience, fault tree analysis theory is used to analyse the model, covering structural functions, probability importance, and related measures. The results show that fault tree analysis enables fast fault location and effective repair of the system. The analysis method is also found to offer guidance for researching and upgrading the reliability of the system.
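    For concreteness, the sketch below evaluates a toy fault tree of the kind such an analysis relies on: top-event probability from AND/OR gates over independent basic events, plus Birnbaum probability importance by conditioning each event on occurring or not. The gate structure, event names, and rates are invented.

```python
# Toy fault tree: top-event probability and Birnbaum importance.
basic = {"power_fail": 0.01, "fiber_cut": 0.005, "switch_fault": 0.02}

def p_or(*ps):   # at least one input event occurs (independent events)
    out = 1.0
    for p in ps:
        out *= (1 - p)
    return 1 - out

def p_and(*ps):  # all input events occur
    out = 1.0
    for p in ps:
        out *= p
    return out

def top(p):      # TOP = power_fail OR (fiber_cut AND switch_fault)
    return p_or(p["power_fail"], p_and(p["fiber_cut"], p["switch_fault"]))

print("P(top) =", top(basic))
for name in basic:   # Birnbaum importance: P(top | event) - P(top | no event)
    hi = dict(basic, **{name: 1.0})
    lo = dict(basic, **{name: 0.0})
    print(name, "importance:", round(top(hi) - top(lo), 6))
```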

  13. An exact arithmetic toolbox for a consistent and reproducible structural analysis of metabolic network models

    PubMed Central

    Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie

    2014-01-01

    Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
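    The sketch below illustrates the flux balance analysis and blocked-reaction test underlying this work, on an invented three-reaction network. Note that MONGOOSE itself works in exact arithmetic; the floating-point linprog call here only demonstrates the formulation, not the toolbox.

```python
# Flux balance analysis: maximize a biomass flux subject to S v = 0 and bounds.
import numpy as np
from scipy.optimize import linprog

# Metabolites x1, x2; reactions: uptake -> x1, x1 -> x2, x2 -> biomass
S = np.array([
    [1, -1,  0],
    [0,  1, -1],
])
bounds = [(0, 10), (0, 10), (0, 10)]
c = np.array([0, 0, -1.0])                 # maximize v3 (linprog minimizes)

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal biomass flux:", res.x[2])

# A reaction is "blocked" if its flux is zero under both minimization and
# maximization at steady state:
for j in range(3):
    cj = np.zeros(3)
    cj[j] = 1.0
    vmin = linprog(cj, A_eq=S, b_eq=np.zeros(2), bounds=bounds).fun
    vmax = -linprog(-cj, A_eq=S, b_eq=np.zeros(2), bounds=bounds).fun
    blocked = abs(vmin) < 1e-9 and abs(vmax) < 1e-9
    print(f"reaction {j}: flux range [{vmin:.2f}, {vmax:.2f}]",
          "(blocked)" if blocked else "")
```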

  14. The research on medical image classification algorithm based on PLSA-BOW model.

    PubMed

    Cao, C H; Cao, H L

    2016-04-29

    With the rapid development of modern medical imaging technology, medical image classification has become more important for medical diagnosis and treatment. To address the problems of polysemous words and synonyms, this study combines the bag-of-words model with probabilistic latent semantic analysis (PLSA) and proposes the PLSA-BOW (Probabilistic Latent Semantic Analysis-Bag of Words) model. We carry the bag-of-words model over from the text domain to the image domain and build a visual bag-of-words model, which allows the accuracy of bag-of-words-based classification to be further improved. The experimental results show that the PLSA-BOW model leads to more accurate medical image classification.
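    A hedged sketch of a PLSA-style topic decomposition over a visual bag-of-words matrix. scikit-learn has no PLSA implementation; NMF with KL divergence is a close stand-in (the two objectives are known to coincide up to normalization). The count matrix below is random, standing in for quantized local descriptor histograms.

```python
# PLSA-like decomposition via KL-divergence NMF on a toy visual-word matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
# rows: images, columns: visual words (e.g., quantized local descriptors)
counts = rng.poisson(1.0, size=(100, 300)).astype(float)

nmf = NMF(n_components=10, beta_loss="kullback-leibler", solver="mu",
          max_iter=500, random_state=0)
doc_topic = nmf.fit_transform(counts)     # ~ P(topic | image), unnormalized
topic_word = nmf.components_              # ~ P(word | topic), unnormalized

# The per-image topic mixtures can then feed any standard classifier.
print(doc_topic.shape, topic_word.shape)
```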

  15. Pitfalls in velocity analysis for strongly contrasting, layered media - Example from the Chalk Group, North Sea

    NASA Astrophysics Data System (ADS)

    Montazeri, Mahboubeh; Uldall, Anette; Moreau, Julien; Nielsen, Lars

    2018-02-01

    Knowledge about the velocity structure of the subsurface is critical in key seismic processing sequences, for instance, migration, depth conversion, and construction of initial P- and S-wave velocity models for full-waveform inversion. Therefore, the quality of subsurface imaging is highly dependent upon the quality of the seismic velocity analysis. Based on a case study from the Danish part of the North Sea, we show how interference caused by multiples, converted waves, and thin-layer effects may lead to incorrect velocity estimation if such effects are not accounted for. Seismic wave propagation inside finely layered reservoir rocks dominated by chalk is described by two-dimensional finite-difference wave field simulation. The rock physical properties used for the modeling are based on an exploration well from the Halfdan field in the Danish sector of the North Sea. The modeling results are compared to seismic data from the study area. The modeling shows that interference of primaries with multiples, converted waves and thin-bed effects can give rise to strong anomalies in standard velocity analysis plots. Consequently, root-mean-square (RMS) velocity profiles may be erroneously picked. In our study area, such mis-picking can introduce errors of 11% to 26% in, for example, the thickness estimation of the layers near the base of the studied sedimentary strata. Tests show that front muting and bandpass filtering cannot significantly improve the quality of velocity analysis in our study. However, we notice that spiking deconvolution applied before velocity analysis may to some extent reduce the impact of interference and, therefore, reduce the risk of erroneous picking of the velocity function.

  16. Highly Reproducible Label Free Quantitative Proteomic Analysis of RNA Polymerase Complexes*

    PubMed Central

    Mosley, Amber L.; Sardiu, Mihaela E.; Pattenden, Samantha G.; Workman, Jerry L.; Florens, Laurence; Washburn, Michael P.

    2011-01-01

    The use of quantitative proteomics methods to study protein complexes has the potential to provide in-depth information on the abundance of different protein components as well as their modification state in various cellular conditions. To interrogate protein complex quantitation using shotgun proteomic methods, we have focused on the analysis of protein complexes using label-free multidimensional protein identification technology and studied the reproducibility of biological replicates. For these studies, we focused on three highly related and essential multi-protein enzymes, RNA polymerase I, II, and III from Saccharomyces cerevisiae. We found that label-free quantitation using spectral counting is highly reproducible at the protein and peptide level when analyzing RNA polymerase I, II, and III. In addition, we show that peptide sampling does not follow a random sampling model, and we show the need for advanced computational models to predict peptide detection probabilities. In order to address these issues, we used the APEX protocol to model the expected peptide detectability based on whole cell lysate acquired using the same multidimensional protein identification technology analysis used for the protein complexes. Neither method was able to predict the peptide sampling levels that we observed using replicate multidimensional protein identification technology analyses. In addition to the analysis of the RNA polymerase complexes, our analysis provides quantitative information about several RNAP associated proteins including the RNAPII elongation factor complexes DSIF and TFIIF. Our data show that DSIF and TFIIF are the most highly enriched RNAP accessory factors in Rpb3-TAP purifications and demonstrate our ability to measure low level associated protein abundance across biological replicates. In addition, our quantitative data support a model in which DSIF and TFIIF interact with RNAPII in a dynamic fashion, in agreement with previously published reports. PMID:21048197

  17. Development of a Feedstock-to-Product Chain Model for Densified Biomass Pellets

    NASA Astrophysics Data System (ADS)

    McPherrin, Daniel

    The Q’Pellet is a spherical, torrefied biomass pellet currently under development. It aims to improve on the shortcomings of commercially available cylindrical white and torrefied pellets. A spreadsheet-based model was developed to allow for techno-economic analysis and simplified life cycle analysis of Q’Pellets, torrefied pellets and white pellets. A case study was developed to compare the production of white, torrefied and Q’Pellet production based on their internal rates of return and life cycle greenhouse gas emissions. The case study was based on a commercial scale plant built in Williams Lake BC with product delivery in Rotterdam, Netherlands. Q’Pellets had the highest modelled internal rate of return, at 12.7%, with white pellets at 11.1% and torrefied pellets at 8.0%. The simplified life cycle analysis showed that Q’Pellets had the lowest life cycle greenhouse gas emissions of the three products, 6.96 kgCO2eq/GJ, compared to 21.50 kgCO2eq/GJ for white pellets and 10.08 kgCO2eq/GJ for torrefied pellets. At these levels of life cycle greenhouse gas emissions, white pellets are above the maximum life cycle emissions to be considered sustainable under EU regulations. Sensitivity analysis was performed on the model by modifying input variables, and showed that white pellets are more sensitive to uncontrollable market variables, especially pellet sale prices, raw biomass prices and transportation costs. Monte Carlo analysis was also performed, which showed that white pellet production is less predictable and more likely to lead to a negative internal rate of return compared to Q’Pellet production.
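    The internal rate of return used to compare the three products is the discount rate at which the net present value of the cash-flow stream crosses zero. The sketch below computes it by bisection on an invented cash-flow profile; the model's actual flows come from its techno-economic spreadsheet and are not reproduced here.

```python
# IRR by bisection on the net present value (toy cash flows).
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-8):
    """Bisection for the rate where NPV crosses zero (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Year-0 capital outlay followed by 20 years of net revenue (invented numbers):
flows = [-30e6] + [4.0e6] * 20
print(f"IRR = {irr(flows):.1%}")
```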

  18. Multiobjective Collaborative Optimization of Systems of Systems

    DTIC Science & Technology

    2005-06-01

    (Abstract unavailable; the record text consists of table-of-contents and figure residue: Appendix K, HSC Model and Optimization Description; Appendix L, HSC Optimization Code; Table 6, System Variables of FPF Data Set Showing Minimal HSC Impact; ITS Model (App. I, J); HSC Model (App. K, L); Future Work.)

  19. Dynamical Analysis in the Mathematical Modelling of Human Blood Glucose

    ERIC Educational Resources Information Center

    Bae, Saebyok; Kang, Byungmin

    2012-01-01

    We want to apply the geometrical method to a dynamical system of human blood glucose. Due to the educational importance of model building, we show a relatively general modelling process using observational facts. Next, two models of some concrete forms are analysed in the phase plane by means of linear stability, phase portrait and vector…
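    In outline, the geometrical method referred to here rests on linear stability analysis of a planar system (the notation below is ours, not the paper's):

\[
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}), \qquad
J = \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\mathbf{x}^{*}}, \qquad
\lambda_{1,2} = \frac{\operatorname{tr} J \pm \sqrt{(\operatorname{tr} J)^{2} - 4\det J}}{2},
\]

    where an equilibrium \(\mathbf{x}^{*}\) is asymptotically stable when both eigenvalues have negative real part, i.e. \(\operatorname{tr} J < 0\) and \(\det J > 0\), and the local phase portrait (node, spiral, or saddle) follows from the sign of the discriminant.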

  20. Comparing GWAS Results of Complex Traits Using Full Genetic Model and Additive Models for Revealing Genetic Architecture

    PubMed Central

    Monir, Md. Mamun; Zhu, Jun

    2017-01-01

    Most of the genome-wide association studies (GWASs) for human complex diseases have ignored dominance, epistasis and ethnic interactions. We conducted comparative GWASs for total cholesterol using full model and additive models, which illustrate the impacts of the ignoring genetic variants on analysis results and demonstrate how genetic effects of multiple loci could differ across different ethnic groups. There were 15 quantitative trait loci with 13 individual loci and 3 pairs of epistasis loci identified by full model, whereas only 14 loci (9 common loci and 5 different loci) identified by multi-loci additive model. Again, 4 full model detected loci were not detected using multi-loci additive model. PLINK-analysis identified two loci and GCTA-analysis detected only one locus with genome-wide significance. Full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed some new genes are related with cholesterol related chemicals and/or diseases. Analyses of cholesterol data and simulation studies revealed that the full model performs were better than the additive-model performs in terms of detecting power and unbiased estimations of genetic variants of complex traits. PMID:28079101

  1. Influence of Micro Threads Alteration on Osseointegration and Primary Stability of Implants: An FEA and In Vivo Analysis in Rabbits.

    PubMed

    Chowdhary, Ramesh; Halldin, Anders; Jimbo, Ryo; Wennerberg, Ann

    2015-06-01

    To describe the early bone tissue response to implants with and without micro threads designed along the full length of an oxidized titanium implant. A pair of two-dimensional finite element models was designed using computer-aided three-dimensional interactive application (CATIA) files of an implant model with micro threads between the macro threads and one without micro threads. Oxidized titanium implants with (test implants, n = 20) and without (control implants, n = 20) micro threads were prepared. A total of 12 rabbits were used, and each received four implants. Insertion torque during implant placement and removal torque analysis after 4 weeks were performed in nine rabbits, and histomorphometric analysis in three rabbits. Finite element analysis showed less stress accumulation in the test implant model, 31 MPa, compared with 62.2 MPa in the control implant model. Insertion and removal torque analysis did not show any statistically significant difference between the two implant designs. At 4 weeks, there was a significant difference between the two groups in the percentage of new bone volume and bone-to-implant contact in the femur (p < .05), but not in the tibia. The effect of micro threads was prominent in the femur, suggesting that micro threads promote bone formation. The stress distribution supported by the micro threads was especially effective in the cancellous bone. © 2013 Wiley Periodicals, Inc.

  2. Stability Analysis of the Slowed-Rotor Compound Helicopter Configuration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne; Floros, Matthew W.

    2004-01-01

    The stability and control of rotors at high advance ratio are considered. Teetering, articulated, gimbaled, and rigid hub types are considered for a compound helicopter (rotor and fixed wing). Stability predictions obtained using an analytical rigid flapping blade analysis, a rigid blade CAMRAD II model, and an elastic blade CAMRAD II model are compared. For the flapping blade analysis, the teetering rotor is the most stable, showing no instabilities up to an advance ratio of 3 and a Lock number of 18. With an elastic blade model, the teetering rotor is unstable at an advance ratio of 1.5. Analysis of the trim controls and blade flapping shows that for small positive collective pitch, trim can be maintained without excessive control input or flapping angles.

  3. Factors predicting early postpartum glucose intolerance in Japanese women with gestational diabetes mellitus: decision-curve analysis.

    PubMed

    Kondo, M; Nagao, Y; Mahbub, M H; Tanabe, T; Tanizawa, Y

    2018-04-29

    To identify factors predicting early postpartum glucose intolerance in Japanese women with gestational diabetes mellitus, using decision-curve analysis. A retrospective cohort study was performed. The participants were 123 Japanese women with gestational diabetes who underwent 75-g oral glucose tolerance tests at 8-12 weeks after delivery. They were divided into a glucose intolerance and a normal glucose tolerance group based on postpartum oral glucose tolerance test results. Analysis of the pregnancy oral glucose tolerance test results showed predictive factors for postpartum glucose intolerance. We also evaluated the clinical usefulness of the prediction model based on decision-curve analysis. Of 123 women, 78 (63.4%) had normoglycaemia and 45 (36.6%) had glucose intolerance. Multivariable logistic regression analysis showed insulinogenic index/fasting immunoreactive insulin and summation of glucose levels, assessed during pregnancy oral glucose tolerance tests (total glucose), to be independent risk factors for postpartum glucose intolerance. Evaluating the regression models, the best discrimination (area under the curve 0.725) was obtained using the basic model (i.e. age, family history of diabetes, BMI ≥25 kg/m 2 and use of insulin during pregnancy) plus insulinogenic index/fasting immunoreactive insulin <1.1. Decision-curve analysis showed that combining insulinogenic index/fasting immunoreactive insulin <1.1 with basic clinical information resulted in superior net benefits for prediction of postpartum glucose intolerance. Insulinogenic index/fasting immunoreactive insulin calculated using oral glucose tolerance test results during pregnancy is potentially useful for predicting early postpartum glucose intolerance in Japanese women with gestational diabetes. © 2018 Diabetes UK.
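    The decision-curve step above rests on the net benefit of acting on a model's predicted risk at a threshold probability p_t, NB(p_t) = TP/N − FP/N · p_t/(1 − p_t). The sketch below evaluates this on simulated predictions and outcomes; it does not use the study's data or its fitted model.

```python
# Decision curve analysis: net benefit of a risk model vs. "treat everyone".
import numpy as np

rng = np.random.default_rng(8)
n = 500
risk = rng.beta(2, 4, n)                       # model-predicted risk
outcome = rng.random(n) < risk                 # simulated glucose intolerance

def net_benefit(risk, outcome, pt):
    treat = risk >= pt
    tp = np.sum(treat & outcome) / len(risk)
    fp = np.sum(treat & ~outcome) / len(risk)
    return tp - fp * pt / (1 - pt)

for pt in (0.1, 0.2, 0.3, 0.4):
    nb_model = net_benefit(risk, outcome, pt)
    nb_all = net_benefit(np.ones(n), outcome, pt)   # treat-all strategy
    print(f"p_t={pt:.1f}  model NB={nb_model:+.3f}  treat-all NB={nb_all:+.3f}")
```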

  4. Evaluating Measurement of Dynamic Constructs: Defining a Measurement Model of Derivatives

    PubMed Central

    Estabrook, Ryne

    2015-01-01

    While measurement evaluation has been embraced as an important step in psychological research, evaluating measurement structures with longitudinal data is fraught with limitations. This paper defines and tests a measurement model of derivatives (MMOD), which is designed to assess the measurement structure of latent constructs both for analyses of between-person differences and for the analysis of change. Simulation results indicate that MMOD outperforms existing models for multivariate analysis and provides equivalent fit to data generation models. Additional simulations show MMOD capable of detecting differences in between-person and within-person factor structures. Model features, applications and future directions are discussed. PMID:24364383

  5. The establishment and external validation of NIR qualitative analysis model for waste polyester-cotton blend fabrics.

    PubMed

    Li, Feng; Li, Wen-Xia; Zhao, Guo-Liang; Tang, Shi-Jun; Li, Xue-Jiao; Wu, Hong-Mei

    2014-10-01

    A series of 354 polyester-cotton blend fabrics was studied with near-infrared spectroscopy (NIRS), and a NIR qualitative analysis model for different spectral characteristics was established by the partial least squares (PLS) method combined with a qualitative identification coefficient. Dyed polyester-cotton blend fabrics yield two types of spectrum: normal spectra and slash spectra. Slash spectra lose their spectral characteristics, being affected by the samples' dyes, pigments, matting agents, and other chemical additives. A model established on the total sample set therefore had a low recognition rate, so the samples were divided into two sets, a normal-spectrum sample set and a slash-spectrum sample set, and two NIR qualitative analysis models were established separately. After the models were established, each model's spectral region, pretreatment methods, and factors were optimized on the basis of the validation results, improving the robustness and reliability of the models. The results showed that the recognition rate improved greatly when the models were established separately, reaching 99% when the two models were verified by internal validation. RC (relation coefficient of calibration) values of the normal-spectrum model and slash-spectrum model were 0.991 and 0.991 respectively, RP (relation coefficient of prediction) values were 0.983 and 0.984 respectively, SEC (standard error of calibration) values were 0.887 and 0.453 respectively, and SEP (standard error of prediction) values were 1.131 and 0.573 respectively. A series of 150 external samples was used to verify the normal-spectrum and slash-spectrum models, giving recognition rates of 91.33% and 88.00% respectively. This shows that the NIR qualitative analysis model can be used for on-site identification of recycled polyester-cotton blend fabrics.

  6. Structural dynamics of shroudless, hollow fan blades with composite in-lays

    NASA Technical Reports Server (NTRS)

    Aiello, R. A.; Hirschbein, M. S.; Chamis, C. C.

    1982-01-01

    Structural and dynamic analyses are presented for a shroudless, hollow titanium fan blade proposed for future use in aircraft turbine engines. The blade was modeled and analyzed using the composite blade structural analysis computer program (COBSTRAN); an integrated program consisting of mesh generators, composite mechanics codes, NASTRAN, and pre- and post-processors. Vibration and impact analyses are presented. The vibration analysis was conducted with COBSTRAN. Results show the effect of the centrifugal force field on frequencies, twist, and blade camber. Bird impact analysis was performed with the multi-mode blade impact computer program. This program uses the geometric model and modal analysis from the COBSTRAN vibration analysis to determine the gross impact response of the fan blades to bird strikes. The structural performance of this blade is also compared to a blade of similar design but with composite in-lays on the outer surface. Results show that the composite in-lays can be selected (designed) to substantially modify the mechanical performance of the shroudless, hollow fan blade.

  7. Sensitivity analysis of a new dual-porosity hydrological model coupled with the SOSlope model for the numerical simulation of rainfall-triggered shallow landslides.

    NASA Astrophysics Data System (ADS)

    Schwarz, Massimiliano; Cohen, Denis

    2017-04-01

    The morphology and extent of hydrological pathways, in combination with the spatio-temporal variability of rainfall events and the heterogeneities of the hydro-mechanical properties of soils, have a major impact on the hydrological conditions that locally determine the triggering of shallow landslides. The coupling of these processes at different spatial scales is an enormous challenge for slope stability modeling at the catchment scale. In this work we present a sensitivity analysis of a new dual-porosity hydrological model implemented in the hydro-mechanical model SOSlope for the modeling of shallow landslides on vegetated hillslopes. The proposed model links the calculation of the saturation dynamics of preferential flow paths, based on hydrological and topographical characteristics of the landscape, to the hydro-mechanical behavior of the soil along a potential failure surface due to changes in soil matrix saturation. Furthermore, the hydro-mechanical changes in soil conditions are linked to the local stress-strain properties of the (rooted) soil that ultimately determine the force redistribution and related deformations at the hillslope scale. The model considers forces to be redistributed through three types of solicitation: tension, compression, and shearing. The present analysis shows how the conditions of deformation due to the passive earth pressure mobilized at the toe of the landslide are particularly important in defining the timing and extent of shallow landslides. The model also shows that, in densely rooted hillslopes, lateral force redistribution under tension through the root network may substantially contribute to stabilizing slopes, avoiding crack formation and large deformations. The results of the sensitivity analysis are discussed in the context of protection forest management and bioengineering techniques.

  8. Vertical structure and physical processes of the Madden-Julian Oscillation: Biases and uncertainties at short range

    DOE PAGES

    Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; ...

    2015-05-26

    We present an analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models as part of the “Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)” project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. In conclusion, the wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. Additionally, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.

  9. Lytic to temperate switching of viral communities

    NASA Astrophysics Data System (ADS)

    Knowles, B.; Silveira, C. B.; Bailey, B. A.; Barott, K.; Cantu, V. A.; Cobián-Güemes, A. G.; Coutinho, F. H.; Dinsdale, E. A.; Felts, B.; Furby, K. A.; George, E. E.; Green, K. T.; Gregoracci, G. B.; Haas, A. F.; Haggerty, J. M.; Hester, E. R.; Hisakawa, N.; Kelly, L. W.; Lim, Y. W.; Little, M.; Luque, A.; McDole-Somera, T.; McNair, K.; de Oliveira, L. S.; Quistad, S. D.; Robinett, N. L.; Sala, E.; Salamon, P.; Sanchez, S. E.; Sandin, S.; Silva, G. G. Z.; Smith, J.; Sullivan, C.; Thompson, C.; Vermeij, M. J. A.; Youle, M.; Young, C.; Zgliczynski, B.; Brainard, R.; Edwards, R. A.; Nulton, J.; Thompson, F.; Rohwer, F.

    2016-03-01

    Microbial viruses can control host abundances via density-dependent lytic predator-prey dynamics. Less clear is how temperate viruses, which coexist and replicate with their host, influence microbial communities. Here we show that virus-like particles are relatively less abundant at high host densities. This suggests suppressed lysis where established models predict lytic dynamics are favoured. Meta-analysis of published viral and microbial densities showed that this trend was widespread in diverse ecosystems ranging from soil to freshwater to human lungs. Experimental manipulations showed viral densities more consistent with temperate than lytic life cycles at increasing microbial abundance. An analysis of 24 coral reef viromes showed a relative increase in the abundance of hallmark genes encoded by temperate viruses with increased microbial abundance. Based on these four lines of evidence, we propose the Piggyback-the-Winner model wherein temperate dynamics become increasingly important in ecosystems with high microbial densities; thus ‘more microbes, fewer viruses’.

  10. Modeling of human operator dynamics in simple manual control utilizing time series analysis. [tracking (position)

    NASA Technical Reports Server (NTRS)

    Agarwal, G. C.; Osafo-Charles, F.; Oneill, W. D.; Gottlieb, G. L.

    1982-01-01

    Time series analysis is applied to model human operator dynamics in pursuit and compensatory tracking modes. The normalized residual criterion is used as a one-step analytical tool to encompass the processes of identification, estimation, and diagnostic checking. A parameter constraining technique is introduced to develop more reliable models of human operator dynamics. The human operator is adequately modeled by a second order dynamic system both in pursuit and compensatory tracking modes. In comparing the data sampling rates, 100 msec between samples is adequate and is shown to provide better results than 200 msec sampling. The residual power spectrum and eigenvalue analysis show that the human operator is not a generator of periodic characteristics.
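    A hedged sketch of this kind of time-series identification: fitting a second-order autoregressive model to synthetic tracking-error data sampled every 100 ms. The coefficients, sample count, and noise level below are invented, not the paper's estimates.

```python
# Fit an AR(2) model to synthetic operator tracking data with statsmodels.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
n = 600                                     # one minute at 100 ms per sample
e = rng.normal(0, 0.1, n)
y = np.zeros(n)
for t in range(2, n):                       # stationary AR(2) dynamics + noise
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + e[t]

model = ARIMA(y, order=(2, 0, 0)).fit()     # second-order operator model
print(model.summary().tables[1])            # estimated AR coefficients
```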

  11. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Because the traditional entropy value method still has low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, based on compatible-matrix analysis from the analytic hierarchy process (AHP) combined with the entropy value method: when the compatible-matrix analysis achieves the consistency requirement but the subjective and objective weights differ, both proportions are moderately adjusted, and a fuzzy evaluation matrix is then built on this basis for the performance evaluation. The simulation experiments show that, compared with the traditional entropy value method and the compatible-matrix analysis method, the proposed performance evaluation model based on the improved entropy value method has higher assessment accuracy.
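    For reference, the baseline entropy value method assigns each criterion an objective weight from the dispersion of its normalized scores: the lower a criterion's entropy, the more it discriminates between alternatives and the more weight it receives. A minimal sketch on an invented decision matrix:

```python
# Entropy value method for objective criterion weights (toy decision matrix).
import numpy as np

# rows: mining projects, columns: benefit-type evaluation criteria
X = np.array([
    [0.82, 0.40, 95.0],
    [0.61, 0.55, 80.0],
    [0.90, 0.30, 70.0],
    [0.45, 0.70, 88.0],
])

P = X / X.sum(axis=0)                          # normalize each criterion
k = 1.0 / np.log(len(X))
plogp = np.where(P > 0, P * np.log(P), 0.0)    # guard the log at zero
entropy = -k * plogp.sum(axis=0)               # entropy per criterion, in [0, 1]
weights = (1 - entropy) / (1 - entropy).sum()  # higher dispersion -> more weight
print("criterion weights:", np.round(weights, 3))
```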

  12. Noise Reduction Design of the Volute for a Centrifugal Compressor

    NASA Astrophysics Data System (ADS)

    Song, Zhen; Wen, Huabing; Hong, Liangxing; Jin, Yudong

    2017-08-01

    In order to effectively control the aerodynamic noise of a compressor, this paper takes a marine exhaust turbocharger compressor as its research object. Based on different design concepts for the volute section, tongue, and exit cone, six volute models were established. The finite volume method is used to calculate the flow field, while the finite element method is used for the acoustic calculation. The different structural designs are compared and analysed from three aspects: noise level, isentropic efficiency, and static pressure recovery coefficient. The results show that model 1 yielded the best result for the volute section concept, model 3 for the tongue concept, and model 6 for the exit cone concept.

  13. Landsat test of diffuse reflectance models for aquatic suspended solids measurement

    NASA Technical Reports Server (NTRS)

    Munday, J. C., Jr.; Alfoldi, T. T.

    1979-01-01

    Landsat radiance data were used to test mathematical models relating diffuse reflectance to aquatic suspended solids concentration. Digital CCT data for Landsat passes over the Bay of Fundy, Nova Scotia were analyzed on a General Electric Co. Image 100 multispectral analysis system. Three data sets were studied separately and together in all combinations with and without solar angle correction. Statistical analysis and chromaticity analysis show that a nonlinear relationship between Landsat radiance and suspended solids concentration is better at curve-fitting than a linear relationship. In particular, the quasi-single-scattering diffuse reflectance model developed by Gordon and coworkers is corroborated. The Gordon model applied to 33 points of MSS 5 data combined from three dates produced r = 0.98.

  14. Sparse Group Penalized Integrative Analysis of Multiple Cancer Prognosis Datasets

    PubMed Central

    Liu, Jin; Huang, Jian; Xie, Yang; Ma, Shuangge

    2014-01-01

    In cancer research, high-throughput profiling studies have been extensively conducted, searching for markers associated with prognosis. Because of the “large d, small n” characteristic, results generated from the analysis of a single dataset can be unsatisfactory. Recent studies have shown that integrative analysis, which simultaneously analyzes multiple datasets, can be more effective than single-dataset analysis and classic meta-analysis. In most existing integrative analyses, the homogeneity model has been assumed, which postulates that different datasets share the same set of markers, and several approaches have been designed to reinforce this assumption. In practice, different datasets may differ in terms of patient selection criteria, profiling techniques, and many other aspects; such differences may make the homogeneity model too restrictive. In this study, we assume the heterogeneity model, under which different datasets are allowed to have different sets of markers. With multiple cancer prognosis datasets, we adopt the AFT (accelerated failure time) model to describe survival. This model may have the lowest computational cost among popular semiparametric survival models. For marker selection, we adopt a sparse group MCP (minimax concave penalty) approach. This approach has an intuitive formulation and can be computed using an effective group coordinate descent algorithm. A simulation study shows that it outperforms the existing approaches under both the homogeneity and heterogeneity models. Data analysis further demonstrates the merit of the heterogeneity model and the proposed approach. PMID:23938111

  15. Performance analysis of mini-propellers based on FlightGear

    NASA Astrophysics Data System (ADS)

    Vogeltanz, Tomáš

    2016-06-01

    This paper presents a performance analysis of three mini-propellers based on the FlightGear flight simulator. Although a basic propeller analysis has to be performed before FlightGear can be used, for a complex and more practical performance analysis it is advantageous to use the propeller model in combination with a particular aircraft model. This approach can determine whether the propeller is of sufficient quality with respect to the aircraft requirements. The first section describes the software used for the analysis. Then, the parameters of the analyzed mini-propellers and the tested UAV are described. Finally, the main section presents and discusses the results of the performance analysis of the mini-propellers.

  16. Crossfit analysis: a novel method to characterize the dynamics of induced plant responses.

    PubMed

    Jansen, Jeroen J; van Dam, Nicole M; Hoefsloot, Huub C J; Smilde, Age K

    2009-12-16

    Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex and may therefore not reveal all the properties of the response in any single model. This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach, based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time-point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time-point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time-points to each other. Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples.

  17. Crossfit analysis: a novel method to characterize the dynamics of induced plant responses

    PubMed Central

    2009-01-01

    Background Many plant species show induced responses that protect them against exogenous attacks. These responses involve the production of many different bioactive compounds. Plant species belonging to the Brassicaceae family produce defensive glucosinolates, which may greatly influence their favorable nutritional properties for humans. Each responding compound may have its own dynamic profile and metabolic relationships with other compounds. The chemical background of the induced response is therefore highly complex and may therefore not reveal all the properties of the response in any single model. Results This study therefore aims to describe the dynamics of the glucosinolate response, measured at three time points after induction in a feral Brassica, by a three-faceted approach, based on Principal Component Analysis. First the large-scale aspects of the response are described in a 'global model' and then each time-point in the experiment is individually described in 'local models' that focus on phenomena that occur at specific moments in time. Although each local model describes the variation among the plants at one time-point as well as possible, the response dynamics are lost. Therefore a novel method called the 'Crossfit' is described that links the local models of different time-points to each other. Conclusions Each element of the described analysis approach reveals different aspects of the response. The crossfit shows that smaller dynamic changes may occur in the response that are overlooked by global models, as illustrated by the analysis of a metabolic profiling dataset of the same samples. PMID:20015363

  18. Predicate Argument Structure Analysis for Use Case Description Modeling

    NASA Astrophysics Data System (ADS)

    Takeuchi, Hironori; Nakamura, Taiga; Yamaguchi, Takahira

    In a large software system development project, many documents are prepared and updated frequently. In such a situation, support is needed for looking through these documents easily to identify inconsistencies and to maintain traceability. In this research, we focus on the requirements documents such as use cases and consider how to create models from the use case descriptions in unformatted text. In the model construction, we propose a few semantic constraints based on the features of the use cases and use them for a predicate argument structure analysis to assign semantic labels to actors and actions. With this approach, we show that we can assign semantic labels without enhancing any existing general lexical resources such as case frame dictionaries and design a less language-dependent model construction architecture. By using the constructed model, we consider a system for quality analysis of the use cases and automated test case generation to keep the traceability between document sets. We evaluated the reuse of the existing use cases and generated test case steps automatically with the proposed prototype system from real-world use cases in the development of a system using a packaged application. Based on the evaluation, we show how to construct models with high precision from English and Japanese use case data. Also, we could generate good test cases for about 90% of the real use cases through the manual improvement of the descriptions based on the feedback from the quality analysis system.

  19. Phase transitions and thermodynamic properties of antiferromagnetic Ising model with next-nearest-neighbor interactions on the Kagomé lattice

    NASA Astrophysics Data System (ADS)

    Ramazanov, M. K.; Murtazaev, A. K.; Magomedov, M. A.; Badiev, M. K.

    2018-06-01

    We study phase transitions and thermodynamic properties of the two-dimensional antiferromagnetic Ising model with next-nearest-neighbor interactions on a Kagomé lattice by Monte Carlo simulations. A histogram data analysis shows that a second-order transition occurs in the model. From the analysis of the obtained data, we infer that next-nearest-neighbor ferromagnetic interactions in the two-dimensional antiferromagnetic Ising model on a Kagomé lattice induce the second-order transition and the unusual temperature dependence of the thermodynamic properties.
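
    A minimal Metropolis sketch of the kind of Monte Carlo simulation described is given below. For simplicity it uses a square lattice with antiferromagnetic nearest-neighbor and ferromagnetic next-nearest-neighbor couplings; the Kagomé geometry and the histogram analysis of the paper are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      # Under E = -J * sum(s_i s_j): J1 < 0 is antiferromagnetic (nearest),
      # J2 > 0 is ferromagnetic (next-nearest); values are illustrative.
      L, J1, J2, T = 16, -1.0, 0.5, 2.0
      s = rng.choice([-1, 1], size=(L, L))

      def local_field(s, i, j):
          # Nearest and next-nearest neighbours with periodic boundaries.
          nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
          nnn = (s[(i+1) % L, (j+1) % L] + s[(i+1) % L, (j-1) % L]
                 + s[(i-1) % L, (j+1) % L] + s[(i-1) % L, (j-1) % L])
          return J1 * nn + J2 * nnn

      for sweep in range(2000):
          for _ in range(L * L):
              i, j = rng.integers(L), rng.integers(L)
              dE = 2 * s[i, j] * local_field(s, i, j)   # energy cost of a flip
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  s[i, j] *= -1
      print("mean |magnetization| per spin:", abs(s.mean()))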

  20. Bifurcation analysis of dengue transmission model in Baguio City, Philippines

    NASA Astrophysics Data System (ADS)

    Libatique, Criselda P.; Pajimola, Aprimelle Kris J.; Addawe, Joel M.

    2017-11-01

    In this study, we formulate a deterministic model for the transmission dynamics of dengue fever in Baguio City, Philippines. We analyzed the existence of the equilibria of the dengue model and obtained conditions under which the equilibrium states exist. Stability analysis is carried out for the disease-free equilibrium, and we showed that the system becomes stable under certain conditions on the parameters. Taking a particular parameter and using centre manifold theory, we demonstrate that the proposed model exhibits a bifurcation phenomenon. We performed numerical simulations to verify the analytical results.
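
    A generic sketch of the disease-free stability check described: evaluate the Jacobian at the disease-free equilibrium (DFE) and inspect the signs of the eigenvalues' real parts. The toy SIR-type equations and rates below are illustrative assumptions, not the Baguio City model.

      import numpy as np

      beta, gamma, mu = 0.3, 0.1, 0.02   # illustrative transmission/recovery/death rates

      def jacobian_dfe(beta, gamma, mu):
          # Jacobian of the toy model dS = mu*(1-S) - beta*S*I,
          # dI = beta*S*I - (gamma+mu)*I, evaluated at the DFE (S, I) = (1, 0).
          return np.array([[-mu, -beta],
                           [0.0, beta - (gamma + mu)]])

      eig = np.linalg.eigvals(jacobian_dfe(beta, gamma, mu))
      print("eigenvalues:", eig)
      print("DFE locally stable:", np.all(eig.real < 0))
      # For this toy model R0 = beta/(gamma+mu); stability flips at R0 = 1.
      print("R0 =", beta / (gamma + mu))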

  1. Qualitative analysis of pure and adulterated canola oil via SIMCA

    NASA Astrophysics Data System (ADS)

    Basri, Katrul Nadia; Khir, Mohd Fared Abdul; Rani, Rozina Abdul; Sharif, Zaiton; Rusop, M.; Zoolfakar, Ahmad Sabirin

    2018-05-01

    This paper demonstrates the use of near infrared (NIR) spectroscopy to classify pure and adulterated samples of canola oil. The Soft Independent Modeling of Class Analogies (SIMCA) algorithm was implemented to discriminate the samples into their classes. The spectral data obtained were divided into training and validation datasets in a fixed 7:3 ratio using the Kennard-Stone algorithm. The accuracy of the model built is 0.99, whereas the sensitivity and precision are 0.92 and 1.00. The results showed that the classification model is robust enough to perform qualitative analysis of canola oil in future applications.
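
    A minimal sketch of the Kennard-Stone split described (7:3 ratio); the "spectra" below are random stand-ins for the NIR data.

      import numpy as np

      def kennard_stone(X, n_train):
          # Start from the two most distant samples, then repeatedly add the
          # sample whose minimum distance to the chosen set is largest.
          d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
          chosen = list(np.unravel_index(np.argmax(d), d.shape))
          while len(chosen) < n_train:
              rest = [i for i in range(len(X)) if i not in chosen]
              next_i = max(rest, key=lambda i: d[i, chosen].min())
              chosen.append(next_i)
          return chosen

      X = np.random.default_rng(1).random((20, 100))   # 20 hypothetical NIR spectra
      train = kennard_stone(X, int(0.7 * len(X)))
      test = [i for i in range(len(X)) if i not in train]
      print(len(train), "training /", len(test), "validation samples")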

  2. Project risk management in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Titarenko, Boris; Hasnaoui, Amir; Titarenko, Roman; Buzuk, Liliya

    2018-03-01

    This paper presents project risk management methods that allow risks in the construction of high-rise buildings to be better identified and managed throughout the life cycle of the project. One of the project risk management processes is quantitative risk analysis, which usually includes assessing the potential impact of project risks and their probabilities. This paper reviews the most popular methods of risk probability assessment and indicates the advantages of the robust approach over traditional methods. Within the framework of the project risk management model, P. Huber's robust approach is applied and extended to the tasks of regression analysis of project data. The suggested algorithms for assessing the parameters of the statistical models yield reliable estimates. A review of the theoretical problems in the development of robust models built on the methodology of minimax estimates was carried out, and an algorithm for the situation of asymmetric "contamination" was developed.
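
    A minimal sketch of Huber-type robust regression on one-sidedly contaminated data; scikit-learn's HuberRegressor stands in for the authors' own minimax-based algorithms.

      import numpy as np
      from sklearn.linear_model import HuberRegressor, LinearRegression

      rng = np.random.default_rng(42)
      x = rng.uniform(0, 10, 100).reshape(-1, 1)
      y = 2.0 * x.ravel() + 1.0 + rng.normal(0, 0.5, 100)
      y[x.ravel() > 8] += 15.0   # asymmetric "contamination": one-sided outliers

      ols = LinearRegression().fit(x, y)
      huber = HuberRegressor(epsilon=1.35).fit(x, y)
      print("OLS slope:  ", ols.coef_[0])    # dragged upward by the outliers
      print("Huber slope:", huber.coef_[0])  # stays near the true value 2.0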

  3. Links between physical fitness and cardiovascular reactivity and recovery to psychological stressors: A meta-analysis.

    PubMed

    Forcier, Kathleen; Stroud, Laura R; Papandonatos, George D; Hitsman, Brian; Reiches, Meredith; Krishnamoorthy, Jenelle; Niaura, Raymond

    2006-11-01

    A meta-analysis of published studies with adult human participants was conducted to evaluate whether physical fitness attenuates cardiovascular reactivity and improves recovery from acute psychological stressors. Thirty-three studies met selection criteria; 18 were included in recovery analyses. Effect sizes and moderator influences were calculated by using meta-analysis software. A fixed effects model was fit initially; however, between-studies heterogeneity could not be explained even after inclusion of moderators. Therefore, to account for residual heterogeneity, a random effects model was estimated. Under this model, fit individuals showed significantly attenuated heart rate and systolic blood pressure reactivity and a trend toward attenuated diastolic blood pressure reactivity. Fit individuals also showed faster heart rate recovery, but there were no significant differences in systolic blood pressure or diastolic blood pressure recovery. No significant moderators emerged. Results have important implications for elucidating mechanisms underlying effects of fitness on cardiovascular disease and suggest that fitness may be an important confound in studies of stress reactivity. Copyright 2006 APA, all rights reserved.
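
    A compact sketch of the fixed- versus random-effects distinction, using the common DerSimonian-Laird estimator of between-studies variance; the effect sizes below are invented, not the meta-analysis data.

      import numpy as np

      # Hypothetical per-study effect sizes and within-study variances
      y = np.array([-0.40, -0.15, -0.55, -0.05, -0.30])
      v = np.array([0.04, 0.02, 0.06, 0.03, 0.05])

      w = 1 / v                                   # fixed-effects weights
      fixed = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - fixed) ** 2)            # heterogeneity statistic
      c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (len(y) - 1)) / c)     # DerSimonian-Laird tau^2

      w_re = 1 / (v + tau2)                       # random-effects weights
      random_eff = np.sum(w_re * y) / np.sum(w_re)
      print(f"fixed: {fixed:.3f}  random: {random_eff:.3f}  tau^2: {tau2:.3f}")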

  4. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models

    PubMed Central

    Chen, Han; Wang, Chaolong; Conomos, Matthew P.; Stilp, Adrienne M.; Li, Zilin; Sofer, Tamar; Szpiro, Adam A.; Chen, Wei; Brehm, John M.; Celedón, Juan C.; Redline, Susan; Papanicolaou, George J.; Thornton, Timothy A.; Laurie, Cathy C.; Rice, Kenneth; Lin, Xihong

    2016-01-01

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM’s constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. PMID:27018471

  5. Operational oil spill trajectory modelling using HF radar currents: A northwest European continental shelf case study.

    PubMed

    Abascal, Ana J; Sanchez, Jorge; Chiri, Helios; Ferrer, María I; Cárdenas, Mar; Gallego, Alejandro; Castanedo, Sonia; Medina, Raúl; Alonso-Martirena, Andrés; Berx, Barbara; Turrell, William R; Hughes, Sarah L

    2017-06-15

    This paper presents a novel operational oil spill modelling system based on HF radar currents, implemented in a northwest European shelf sea. The system integrates Open Modal Analysis (OMA), Short Term Prediction algorithms (STPS) and an oil spill model to simulate oil spill trajectories. A set of 18 buoys was used to assess the accuracy of the system for trajectory forecast and to evaluate the benefits of HF radar data compared to the use of currents from a hydrodynamic model (HDM). The results showed that simulated trajectories using OMA currents were more accurate than those obtained using a HDM. After 48h the mean error was reduced by 40%. The forecast skill of the STPS method was valid up to 6h ahead. The analysis performed shows the benefits of HF radar data for operational oil spill modelling, which could be easily implemented in other regions with HF radar coverage. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.

  6. Mid-tropospheric Spectral Length-scale Analysis of Many Constituents from Aircraft, Satellite and Model Results During the 2013 SENEX Field Study.

    NASA Astrophysics Data System (ADS)

    McKeen, S. A.; Angevine, W. M.; Ahmadov, R.; Frost, G. J.; Kim, S. W.; Cui, Y.; McDonald, B.; Trainer, M.; Holloway, J. S.; Ryerson, T. B.; Peischl, J.; Gambacorta, A.; Barnet, C. D.; Smith, N.; Pierce, R. B.

    2016-12-01

    This study presents preliminary comparisons of satellite, aircraft, and model variance spectra for meteorological, thermodynamic and gas-phase species collected during the 2013 Southeastern Nexus Air Quality Experiment (SENEX). Fourier analysis of 8 constituents collected at 1 Hz by the NOAA W-P3 aircraft in the 25 to 200 km length-scale range exhibits properties consistent with previous scale dependence studies: when spectra are averaged over several 500 mb flight legs, very linear dependence is found on log-log plots of spectral density versus inverse length-scale. Derived slopes for wind speed, temperature, H2O, CO, CO2, CH4, NOy and O3 all fall within ±30% and close to the slope of -5/3 predicted from dimensional scaling theory of isotropic turbulence. Qualitative differences are seen when a similar analysis, without quality control, is applied to a preliminary set of NUCAPS satellite retrievals over the continental U.S. during SENEX. While 500 mb water vapor and column integrated water show slopes close to the -5/3 value in the 200 to 1000 km length-scale range, other quantities show significantly shallower slopes, suggesting the need for rigorous quality control. Results from WRF-Chem regional air quality model simulations at 500 mb show the model is unable to account for variance on length-scales less than 6ΔX, where ΔX is the model horizontal resolution (12 km). Comparisons with satellite data in the 200 to 1000 km range show slopes consistent with the -5/3 power law for species such as CO, CH4 and CO2 that do not undergo reinitialization, suggesting potential for future application.
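
    A minimal sketch of the spectral-slope estimation described: compute a variance spectrum from a uniformly sampled series and fit a log-log slope over a length-scale band. The synthetic signal below is a stand-in for a flight leg.

      import numpy as np

      rng = np.random.default_rng(7)
      n, dx = 4096, 0.2                  # samples and spacing (km), illustrative
      # Synthesize a signal with a k^(-5/3) power spectrum for demonstration.
      k = np.fft.rfftfreq(n, d=dx)[1:]   # wavenumbers (cycles per km)
      amp = k ** (-5/6) * np.exp(2j * np.pi * rng.random(k.size))
      x = np.fft.irfft(np.concatenate(([0], amp)), n=n)

      psd = np.abs(np.fft.rfft(x)[1:]) ** 2
      keep = (k > 1/200) & (k < 1/25)    # 25-200 km length-scale band
      slope = np.polyfit(np.log(k[keep]), np.log(psd[keep]), 1)[0]
      print(f"fitted spectral slope: {slope:.2f} (isotropic turbulence: -1.67)")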

  7. Population viability analysis: using a modeling tool to assess the viability of tapir populations in fragmented landscapes.

    PubMed

    Medici, Emília Patrícia; Desbiez, Arnaud Leonard Jean

    2012-12-01

    A population viability analysis (PVA) was conducted of the lowland tapir populations in the Atlantic Forest of the Pontal do Paranapanema region, Brazil, including Morro do Diabo State Park (MDSP) and surrounding forest fragments. Results from the model projected that the population of 126 tapirs in MDSP is likely to persist over the next 100 years; however, 200 tapirs would be required to maintain a viable population. Sensitivity analysis showed that sub-adult mortality and adult mortality have the strongest influence on the dynamics of lowland tapir populations. High road-kill has a major impact on the MDSP tapir population and can lead to population extinction. Metapopulation modeling showed that dispersal of tapirs from MDSP to the surrounding fragments can be detrimental to the overall metapopulation, as fragments act as sinks. Nevertheless, the model showed that under certain conditions the maintenance of the metapopulation dynamics might be determinant for the persistence of tapirs in the region, particularly in the smaller fragments. In the models, the establishment of corridors connecting MDSP to the forest fragments resulted in an increase in the stochastic growth rate, making tapirs more resilient to threats and catastrophes, but only if rates of mortality were not increased when using corridors. The PVA showed that the conservation of tapirs in the Pontal region depends on: the effective protection of MDSP; maintenance and, whenever possible, enhancement of the functional connectivity of the landscape, reducing mortality during dispersal and threats in the unprotected forest fragments; and neutralization of all threats affecting tapirs in the smaller forest fragments. © 2012 Wiley Publishing Asia Pty Ltd, ISZS and IOZ/CAS.

  8. Stormwater quality modelling in combined sewers: calibration and uncertainty analysis.

    PubMed

    Kanso, A; Chebbo, G; Tassin, B

    2005-01-01

    Estimating the level of uncertainty in urban stormwater quality models is vital for their utilization. This paper presents the results of the application of a Markov chain Monte Carlo method based on Bayesian theory for the calibration and uncertainty analysis of a stormwater quality model commonly used in available software. The tested model uses a hydrologic/hydrodynamic scheme to estimate the accumulation, erosion and transport of pollutants on surfaces and in sewers. It was calibrated for four different initial conditions of in-sewer deposits. Calibration results showed large variability in the model's responses as a function of the initial conditions and demonstrated that the model's predictive capacity is very low.
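
    A bare-bones random-walk Metropolis sketch of the kind of Bayesian calibration described; the one-parameter "model" and data are toy stand-ins for the stormwater quality model.

      import numpy as np

      rng = np.random.default_rng(3)
      obs = rng.normal(2.0, 0.3, 30)           # synthetic "measured" pollutant loads

      def log_post(theta):
          # Flat prior on [0, 10]; Gaussian likelihood around the model output.
          if not 0 < theta < 10:
              return -np.inf
          return -0.5 * np.sum((obs - theta) ** 2) / 0.3 ** 2

      chain, theta = [], 5.0
      lp = log_post(theta)
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.2)    # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:
              theta, lp = prop, lp_prop        # accept
          chain.append(theta)
      post = np.array(chain[1000:])            # drop burn-in
      print(f"posterior mean {post.mean():.2f} +/- {post.std():.2f}")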

  9. Potential of mean force analysis of the self-association of leucine-rich transmembrane α-helices: difference between atomistic and coarse-grained simulations.

    PubMed

    Nishizawa, Manami; Nishizawa, Kazuhisa

    2014-08-21

    Interaction of transmembrane (TM) proteins is important in many biological processes. Large-scale computational studies using coarse-grained (CG) simulations are becoming popular. However, most CG model parameters have not been fully calibrated with respect to lateral interactions of TM peptide segments. Here, we compare the potentials of mean force (PMFs) of dimerization of TM helices obtained using a MARTINI CG model and an atomistic (AT) Berger lipids-OPLS/AA model (AT(OPLS)). For helical, tryptophan-flanked, leucine-rich peptides (WL15 and WALP15) embedded in a parallel configuration in an octane slab, the AT(OPLS) PMF profiles showed a shallow minimum (with a depth of approximately 3 kJ/mol; i.e., a weak tendency to dimerize). A similar analysis using the CHARMM36 all-atom model (AT(CHARMM)) showed comparable results. In contrast, the CG analysis generally showed steep PMF curves with depths of approximately 16-22 kJ/mol, suggesting a stronger tendency to dimerize compared to the AT model. This CG > AT discrepancy in the propensity for dimerization was also seen for dilauroylphosphatidylcholine (DLPC)-embedded peptides. For a WL15 (and WALP15)/DLPC bilayer system, the AT(OPLS) PMF showed a repulsive mean force for a wide range of interhelical distances, in contrast to the attractive forces observed in the octane system. The change from the octane slab to the DLPC bilayer also mitigated the dimerization propensity in the CG system. The dimerization energies of CG (AALALAA)3 peptides in DLPC and dioleoylphosphatidylcholine bilayers were in good agreement with previous experimental data. The lipid headgroup, but not the length of the lipid tails, was a key causative factor contributing to the differences between octane and DLPC. Furthermore, the CG model, but not the AT model, showed high sensitivity to changes in amino acid residues located near the lipid-water interface and to hydrophobic mismatch between the peptides and the membrane. These findings may help interpret CG and AT simulation results on membrane proteins.

  10. Evaluating the predictive accuracy and the clinical benefit of a nomogram aimed to predict survival in node-positive prostate cancer patients: External validation on a multi-institutional database.

    PubMed

    Bianchi, Lorenzo; Schiavina, Riccardo; Borghesi, Marco; Bianchi, Federico Mineo; Briganti, Alberto; Carini, Marco; Terrone, Carlo; Mottrie, Alex; Gacci, Mauro; Gontero, Paolo; Imbimbo, Ciro; Marchioro, Giansilvio; Milanese, Giulio; Mirone, Vincenzo; Montorsi, Francesco; Morgia, Giuseppe; Novara, Giacomo; Porreca, Angelo; Volpe, Alessandro; Brunocilla, Eugenio

    2018-04-06

    To assess the predictive accuracy and the clinical value of a recent nomogram predicting cancer-specific mortality-free survival after surgery in pN1 prostate cancer patients through an external validation. We evaluated 518 prostate cancer patients treated with radical prostatectomy and pelvic lymph node dissection with evidence of nodal metastases at final pathology, at 10 tertiary centers. External validation was carried out using the regression coefficients of the previously published nomogram. The performance characteristics of the model were assessed by quantifying predictive accuracy, according to the area under the curve in the receiver operating characteristic curve, and model calibration. Furthermore, we systematically analyzed the specificity, sensitivity, positive predictive value and negative predictive value for each nomogram-derived probability cut-off. Finally, we implemented decision curve analysis in order to quantify the nomogram's clinical value in routine practice. External validation showed lower predictive accuracy than the internal validation (65.8% vs 83.3%). The discrimination (area under the curve) of the multivariable model was 66.7% (95% CI 60.1-73.0%) when tested with receiver operating characteristic curve analysis. The calibration plot showed an overestimation throughout the range of predicted cancer-specific mortality-free survival probabilities. However, in decision curve analysis, the nomogram's use showed a net benefit when compared with the scenarios of treating all patients or none. In an external setting, the nomogram showed inferior predictive accuracy and suboptimal calibration characteristics as compared to those reported in the original population. However, decision curve analysis showed a clinical net benefit, suggesting its clinical utility for correctly managing pN1 prostate cancer patients after surgery. © 2018 The Japanese Urological Association.
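
    The net-benefit computation behind decision curve analysis is compact enough to sketch; the predicted risks and outcomes below are simulated, not the validation cohort.

      import numpy as np

      rng = np.random.default_rng(5)
      risk = rng.uniform(0, 1, 500)            # model-predicted risks
      event = rng.random(500) < risk           # simulated outcomes

      def net_benefit(risk, event, pt):
          # Net benefit at threshold pt: TP/n - FP/n * pt/(1-pt).
          treat = risk >= pt
          tp = np.sum(treat & event) / len(event)
          fp = np.sum(treat & ~event) / len(event)
          return tp - fp * pt / (1 - pt)

      for pt in (0.1, 0.3, 0.5):
          nb_model = net_benefit(risk, event, pt)
          nb_all = event.mean() - (1 - event.mean()) * pt / (1 - pt)   # treat-all
          print(f"pt={pt:.1f}  model={nb_model:+.3f}  treat-all={nb_all:+.3f}")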

  11. An operational wave forecasting system for the east coast of India

    NASA Astrophysics Data System (ADS)

    Sandhya, K. G.; Murty, P. L. N.; Deshmukh, Aditya N.; Balakrishnan Nair, T. M.; Shenoi, S. S. C.

    2018-03-01

    Demand for operational ocean state forecasting is increasing, owing to ever-increasing marine activities in the context of the blue economy. In the present study, an operational wave forecasting system for the east coast of India is proposed using the unstructured Simulating WAves Nearshore model (UNSWAN). This modelling system uses a very high resolution mesh near the Indian east coast and a coarse resolution offshore, and thus avoids the need for nesting within a global wave model. The model is forced with European Centre for Medium-Range Weather Forecasts (ECMWF) winds and simulates wave parameters and wave spectra for the next 3 days. Spatial plots of satellite data overlaid on the simulated wave heights show that the model is capable of simulating the significant wave heights and their gradients realistically. Spectral validation has been carried out using the available data to establish the reliability of the model. To further evaluate the model performance, the wave forecast for the entire year 2014 was evaluated against buoy measurements at 4 waverider buoy locations in the region. Seasonal analysis of significant wave height (Hs) at the four locations showed that the correlation between the modelled and observed values was highest (in the range 0.78-0.96) during the post-monsoon season. The variability of Hs was also highest during this season at all locations. The error statistics showed clear seasonal and geographical dependence. The root mean square error at Visakhapatnam was the same (0.25 m) for all seasons, but was smallest in the pre-monsoon season for Puducherry and Gopalpur (0.12 m and 0.17 m, respectively). The wind sea component showed higher variability than the corresponding swell component at all locations and in all seasons, and this variability was captured by the model to a reasonable degree in most cases. The results of the statistical analysis show that the modelling system is suitable for use in an operational scenario.

  12. Modelling the association of dengue fever cases with temperature and relative humidity in Jeddah, Saudi Arabia-A generalised linear model with break-point analysis.

    PubMed

    Alkhaldy, Ibrahim

    2017-04-01

    The aim of this study was to examine the role of environmental factors in the temporal distribution of dengue fever in Jeddah, Saudi Arabia. The relationship between dengue fever cases and climatic factors such as relative humidity and temperature was investigated during 2006-2009 to determine whether there is any relationship between dengue fever cases and climatic parameters in Jeddah City, Saudi Arabia. A generalised linear model (GLM) with a break-point was used to determine how different levels of temperature and relative humidity affected the distribution of the number of dengue fever cases. Break-point analysis was performed to model the effect before and after a break-point (change point) in the explanatory parameters under various scenarios. The Akaike information criterion (AIC) and cross validation (CV) were used to assess the performance of the models. The results showed that maximum temperature and mean relative humidity are most probably the better predictors of the number of dengue fever cases in Jeddah. In this study three scenarios were modelled: no time lag, a 1-week lag and a 2-week lag. Among these scenarios, the 1-week lag model using mean relative humidity as an explanatory variable showed the best performance. This study showed a clear relationship between the meteorological variables and the number of dengue fever cases in Jeddah. The results also demonstrated that meteorological variables can be successfully used to estimate the number of dengue fever cases for a given period of time. Break-point analysis provides further insight into the association between meteorological parameters and dengue fever cases by dividing the range of the meteorological parameters at certain break-points. Copyright © 2016 Elsevier B.V. All rights reserved.
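
    A minimal sketch of a Poisson GLM with a break-point (piecewise-linear) term, in the spirit of the analysis described; the data and break location below are fabricated, and statsmodels stands in for the authors' software.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(11)
      tmax = rng.uniform(28, 44, 200)                   # weekly maximum temperature
      brk = 36.0                                        # assumed break-point
      lam = np.exp(0.5 + 0.02 * tmax + 0.15 * np.maximum(tmax - brk, 0))
      cases = rng.poisson(lam)                          # synthetic weekly case counts

      # Piecewise-linear design: slope below the break, extra slope above it.
      X = sm.add_constant(np.column_stack([tmax, np.maximum(tmax - brk, 0)]))
      fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
      print(fit.params)          # intercept, slope below break, added slope above
      print("AIC:", fit.aic)     # compare across candidate break locations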

  13. Constitutive modeling of the passive inflation-extension behavior of the swine colon.

    PubMed

    Patel, Bhavesh; Chen, Huan; Ahuja, Aashish; Krieger, Joshua F; Noblet, Jillian; Chambers, Sean; Kassab, Ghassan S

    2018-01-01

    In the present work, we propose the first structural constitutive model of the passive mechanical behavior of the swine colon that is validated against physiological inflation-extension tests and accounts for residual strains. Sections from the spiral colon and the descending colon were considered to investigate potential regional variability. We found that the proposed constitutive model accurately captures the passive inflation-extension behavior of both regions of the swine colon (coefficient of determination R² = 0.94 ± 0.02). The model revealed that the circumferential muscle layer does not provide significant mechanical support under passive conditions and the circumferential load is actually carried by the submucosa layer. The stress analysis permitted by the model showed that the colon tissue can distend up to 30% radially without a significant increase in the wall stresses, suggesting a highly compliant behavior of the tissue. This is in line with the requirement for the tissue to easily accommodate variable quantities of fecal matter. The analysis also showed that the descending colon is significantly more compliant than the spiral colon, which is relevant to the storage function of the descending colon. Histological analysis showed that the swine colon possesses a four-layer structure similar to the human colon, where the longitudinal muscle layer is organized into bands called taeniae, a typical feature of the human colon. The model and the estimated parameters can be used in a Finite Element framework to conduct simulations with a realistic geometry of the swine colon. The resulting computational model will provide a foundation for virtual assessment of safe and effective devices for the treatment of colonic diseases. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Association of protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism and susceptibility to vitiligo: Systematic review and meta-analysis.

    PubMed

    Agarwal, Silky; Changotra, Harish

    2017-01-01

    The protein tyrosine phosphatase, non-receptor type 22 gene, which encodes lymphoid tyrosine phosphatase, is considered a susceptibility gene marker associated with several autoimmune diseases. Several studies have demonstrated an association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo; however, these studies showed conflicting results. Earlier meta-analyses of this question included fewer publications. We performed a meta-analysis of a total of seven studies comprising 2094 cases and 3613 controls to evaluate the possible association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo susceptibility. We conducted a literature search in PubMed, Google Scholar and Dogpile for all papers published on the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism and vitiligo risk until June 2016. Data analysis was performed with RevMan 5.3 and Comprehensive Meta-Analysis v3.0 software. The meta-analysis showed an overall significant association of the protein tyrosine phosphatase, non-receptor type 22 +1858C→T polymorphism with vitiligo in all models (allelic model [T vs. C]: odds ratio = 1.50, 95% confidence interval [1.32-1.71], P < 0.001; dominant model [TT + CT vs. CC]: odds ratio = 1.61, 95% confidence interval [1.16-2.24], P = 0.004; recessive model [TT vs. CT + CC]: odds ratio = 4.82, 95% confidence interval [1.11-20.92], P = 0.04; homozygous model [TT vs. CC]: odds ratio = 5.34, 95% confidence interval [1.23-23.24], P = 0.03; co-dominant model [CT vs. CC]: odds ratio = 1.52, 95% confidence interval [1.09-2.13], P = 0.01). No publication bias was detected in the funnel plot study. The limited number of ethnicity-based studies and the inability to stratify the data by gender or vitiligo type are limitations of the present meta-analysis. Stratifying the data by ethnicity showed an association of protein tyrosine phosphatase, non-receptor type 22 +1858C→T with vitiligo in the European population (odds ratio = 1.53, 95% confidence interval [1.34-1.75], P < 0.001) but not in the Asian population (odds ratio = 0.59, 95% confidence interval [0.26-1.32], P = 0.2). In conclusion, the protein tyrosine phosphatase, non-receptor type 22 +1858 T allele predisposes European individuals to vitiligo.

  15. Tertiary structure-based analysis of microRNA–target interactions

    PubMed Central

    Gan, Hin Hark; Gunsalus, Kristin C.

    2013-01-01

    Current computational analysis of microRNA interactions is based largely on primary and secondary structure analysis. Computationally efficient tertiary structure-based methods are needed to enable more realistic modeling of the molecular interactions underlying miRNA-mediated translational repression. We incorporate algorithms for predicting duplex RNA structures, ionic strength effects, duplex entropy and free energy, and docking of duplex–Argonaute protein complexes into a pipeline to model and predict miRNA–target duplex binding energies. To ensure modeling accuracy and computational efficiency, we use an all-atom description of RNA and a continuum description of ionic interactions using the Poisson–Boltzmann equation. Our method predicts the conformations of two constructs of Caenorhabditis elegans let-7 miRNA–target duplexes to an accuracy of ∼3.8 Å root mean square distance from their NMR structures. We also show that the computed duplex formation enthalpies, entropies, and free energies for eight miRNA–target duplexes agree with titration calorimetry data. Analysis of duplex–Argonaute docking shows that structural distortions arising from single-base-pair mismatches in the seed region influence the activity of the complex by destabilizing both duplex hybridization and its association with Argonaute. Collectively, these results demonstrate that tertiary structure-based modeling of miRNA interactions can reveal structural mechanisms not accessible with current secondary structure-based methods. PMID:23417009

  16. Model independent inference of the expansion history and implications for the growth of structure

    NASA Astrophysics Data System (ADS)

    Joudaki, Shahab; Kaplinghat, Manoj; Keeley, Ryan; Kirkby, David

    2018-06-01

    We model the expansion history of the Universe as a Gaussian process and find constraints on the dark energy density and its low-redshift evolution using distances inferred from the Luminous Red Galaxy and Lyman-alpha data sets of the Baryon Oscillation Spectroscopic Survey, supernova data from the Joint Light-Curve Analysis sample, cosmic microwave background data from the Planck satellite, and the local measurement of the Hubble parameter from the Hubble Space Telescope (H0). Our analysis shows that the cosmic microwave background, Luminous Red Galaxy, Lyman-alpha, and Joint Light-Curve Analysis data are consistent with each other and with a ΛCDM cosmology, but the H0 data are inconsistent at moderate significance. Including the presence of dark radiation does not alleviate the H0 tension in our analysis. While some of these results have been noted previously, the strength here lies in that we do not assume a particular cosmological model. We calculate the growth of the gravitational potential in General Relativity corresponding to these general expansion histories and show that it is well approximated by Ωm^0.55 given the current precision. We assess the prospects for upcoming surveys to measure deviations from ΛCDM using this model-independent approach.
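
    A toy sketch of modelling an expansion-history-like quantity as a Gaussian process; the kernel, mock data and scikit-learn pipeline are illustrative assumptions rather than the authors' analysis.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      rng = np.random.default_rng(2)
      z = np.sort(rng.uniform(0, 2.5, 25)).reshape(-1, 1)   # mock redshifts
      h_true = np.sqrt(0.3 * (1 + z.ravel()) ** 3 + 0.7)    # flat LCDM H(z)/H0
      h_obs = h_true + rng.normal(0, 0.03, z.size)          # mock measurements

      kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
      gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(z, h_obs)

      z_grid = np.linspace(0, 2.5, 50).reshape(-1, 1)
      mean, std = gp.predict(z_grid, return_std=True)       # model-free reconstruction
      idx = np.abs(z_grid.ravel() - 1.0).argmin()
      print(f"H(z)/H0 near z=1: {mean[idx]:.3f} +/- {std[idx]:.3f}")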

  17. Thermal Tides in the Martian Middle Atmosphere as Seen by the Mars Climate Sounder

    PubMed Central

    Lee, C.; Lawson, W. G.; Richardson, M. I.; Heavens, N. G.; Kleinböhl, A.; Banfield, D.; McCleese, D. J.; Zurek, R.; Kass, D.; Schofield, J. T.; Leovy, C. B.; Taylor, F. W.; Toigo, A. D.

    2016-01-01

    The first systematic observations of the middle atmosphere of Mars (35–80 km) with the Mars Climate Sounder (MCS) show dramatic patterns of diurnal thermal variation, evident in retrievals of temperature and water ice opacity. At the time of writing, the dataset of MCS limb retrievals is sufficient for spectral analysis within a limited range of latitudes and seasons. This analysis shows that these thermal variations are almost exclusively associated with a diurnal thermal tide. Using a Martian General Circulation Model to extend our analysis, we show that the diurnal thermal tide dominates these patterns for all latitudes and all seasons. PMID:27630378

  18. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of the output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of tensor products of univariate functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
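
    A brute-force sketch of the variance of conditional expectation (the first-order sensitivity index) for a toy function; here direct Monte Carlo replaces the MARS meta-model step that makes the paper's approach efficient.

      import numpy as np

      rng = np.random.default_rng(9)

      def model(x1, x2, x3):
          # Toy stand-in for an ODE output at a fixed time point.
          return np.sin(x1) + 2.0 * x2 ** 2 + 0.1 * x3

      n, m = 2000, 500
      total_var = np.var(model(*(rng.uniform(-1, 1, (3, n * 4)))))

      # VCE of x1: fix x1, average over the other inputs, then take the variance.
      x1_grid = rng.uniform(-1, 1, n)
      cond_mean = np.array([
          model(x1, rng.uniform(-1, 1, m), rng.uniform(-1, 1, m)).mean()
          for x1 in x1_grid
      ])
      print("first-order sensitivity index of x1:", np.var(cond_mean) / total_var)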

  19. Incorporating inductances in tissue-scale models of cardiac electrophysiology

    NASA Astrophysics Data System (ADS)

    Rossi, Simone; Griffith, Boyce E.

    2017-09-01

    In standard models of cardiac electrophysiology, including the bidomain and monodomain models, local perturbations can propagate at infinite speed. We address this unrealistic property by developing a hyperbolic bidomain model that is based on a generalization of Ohm's law with a Cattaneo-type model for the fluxes. Further, we obtain a hyperbolic monodomain model in the case that the intracellular and extracellular conductivity tensors have the same anisotropy ratio. In one spatial dimension, the hyperbolic monodomain model is equivalent to a cable model that includes axial inductances, and the relaxation times of the Cattaneo fluxes are strictly related to these inductances. A purely linear analysis shows that the inductances are negligible, but models of cardiac electrophysiology are highly nonlinear, and linear predictions may not capture the fully nonlinear dynamics. In fact, contrary to the linear analysis, we show that for simple nonlinear ionic models, an increase in conduction velocity is obtained for small and moderate values of the relaxation time. A similar behavior is also demonstrated with biophysically detailed ionic models. Using the Fenton-Karma model along with a low-order finite element spatial discretization, we numerically analyze differences between the standard monodomain model and the hyperbolic monodomain model. In a simple benchmark test, we show that the propagation of the action potential is strongly influenced by the alignment of the fibers with respect to the mesh in both the parabolic and hyperbolic models when using relatively coarse spatial discretizations. Accurate predictions of the conduction velocity require computational mesh spacings on the order of a single cardiac cell. We also compare the two formulations in the case of spiral break up and atrial fibrillation in an anatomically detailed model of the left atrium, and we examine the effect of intracellular and extracellular inductances on the virtual electrode phenomenon.
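
    In assumed notation (not quoted from the paper), the Cattaneo-type modification can be sketched for the monodomain case as

      \tau \frac{\partial \mathbf{q}}{\partial t} + \mathbf{q} = -\sigma \nabla V,
      \qquad
      \chi \Big( C_m \frac{\partial V}{\partial t} + I_{\mathrm{ion}}(V) \Big) + \nabla \cdot \mathbf{q} = 0,

    so eliminating the flux q yields a damped wave (hyperbolic) equation for the transmembrane potential V with a finite signal speed set by the relaxation time τ; letting τ → 0 recovers the standard parabolic monodomain model.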

  20. Sonographically guided intrasheath percutaneous release of the first annular pulley for trigger digits, part 2: randomized comparative study of the economic impact of 3 surgical models.

    PubMed

    Rojo-Manaute, Jose Manuel; Capa-Grasa, Alberto; Del Cerro-Gutiérrez, Miguel; Martínez, Manuel Villanueva; Chana-Rodríguez, Francisco; Martín, Javier Vaquero

    2012-03-01

    Trigger digit surgery can be performed by an open approach using classic open surgery, by a wide-awake approach, or by sonographically guided first annular pulley release in day surgery and office-based ambulatory settings. Our goal was to perform a turnover and economic analysis of 3 surgical models. Two studies were conducted. The first was a turnover analysis of 57 patients allocated 4:4:1 to the surgical models: sonographically guided office-based, classic open day surgery, and wide-awake office-based. Regression analysis of the turnover time was monitored to assess stability (R² < 0.26). Second, on the basis of turnover times and hospital tariff revenues, we calculated the total costs, income to cost ratio, opportunity cost, true cost, true net income (primary variable), break-even points for sonographically guided fixed costs, and a 1-way analysis to identify thresholds among alternatives. Thirteen sonographically guided office-based patients were withdrawn because of a learning curve influence. The wide-awake (n = 6) and classic (n = 26) models were compared to the last 25% of the sonographically guided group (n = 12), which showed significantly shorter mean turnover times, income to cost ratios 2.52 and 10.9 times larger, and true costs 75.48 and 20.92 times lower, respectively. The true net income break-even point occurred after 19.78 sonographically guided office-based procedures. Sensitivity analysis showed a threshold between wide-awake and last 25% sonographically guided true costs if the last 25% sonographically guided turnover times reached 65.23 and 27.81 minutes, respectively. This trial comparing surgical models was underpowered and is inconclusive on turnover times; nevertheless, the sonographically guided office-based approach showed shorter turnover times and better economic results, with a quick recoup of the costs of sonographically assisted surgery.

  1. Performance analysis: a study using data envelopment analysis in 26 Brazilian hospitals.

    PubMed

    Guerra, Mariana; de Souza, Antônio Artur; Moreira, Douglas Rafael

    2012-01-01

    This article describes a proposal for analyzing the performance of public Brazilian hospitals using financial and non-financial (i.e., operational) rates, and thereby highlights the effectiveness (or otherwise) of the financial management of the organizations in this study. A total of 72 hospitals in the Brazilian Unified Health Care System (in Portuguese, Sistema Único de Saúde - SUS) were selected based on the accessibility and completeness of their data. Twenty-six organizations made up the study sample, consisting of entities that had publicly disclosed financial statements for the period from 2008 (in particular, via the Internet) and whose operational data could be found in the SUS database. Our proposal, based on models using the method of Data Envelopment Analysis (DEA), was the construction of six initial models that were later compiled into a standard model. The relations between the rates that comprised the models were based on the variables and notes of Schuhmann; McCue and Nayar; Barnum and Kutzin; Younis, Younies, and Okojie; Marinho, Moreno, and Cavalini; and Ersoy, Kavuncubasi, Ozcan, and Harris II. We put forward an enhanced proposal applicable to Brazil aiming to (i) confirm or refute the rates that show the effectiveness or ineffectiveness of the financial management of national hospitals; and (ii) determine the best performances, which could be used as a reference for future studies. The results obtained show that: (i) of all the financial indicators considered, only one showed no significance in all models; and (ii) for the operational indicators, the results were not relevant when the number of occupied beds was considered. Though the analysis was restricted to services provided by SUS, we conclude that our study has great potential for analyzing the financial management performance of Brazilian hospitals in general, for the following reasons: (i) it shows the relationship of financial and operational rates that can be used to analyze the performance of these organizations; and (ii) it introduces ranges of these values that can be used as standards for the analysis of Brazilian hospitals.
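
    An input-oriented CCR DEA model reduces to one small linear program per hospital (decision-making unit); the sketch below uses SciPy and invented data for five hospitals with two inputs and one output.

      import numpy as np
      from scipy.optimize import linprog

      # Invented data: 5 hospitals, 2 inputs (beds, staff), 1 output (discharges)
      X = np.array([[100, 200], [120, 180], [80, 150], [150, 300], [90, 140]], float)
      Y = np.array([[500], [520], [450], [600], [480]], float)
      n, m, s = X.shape[0], X.shape[1], Y.shape[1]

      def ccr_efficiency(o):
          # Variables: [theta, lambda_1..lambda_n]; minimize theta subject to
          # sum_j lam_j * x_ij <= theta * x_io and sum_j lam_j * y_rj >= y_ro.
          c = np.r_[1.0, np.zeros(n)]
          A_in = np.c_[-X[o], X.T]                  # input constraints
          A_out = np.c_[np.zeros((s, 1)), -Y.T]     # output constraints (flipped sign)
          A_ub = np.vstack([A_in, A_out])
          b_ub = np.r_[np.zeros(m), -Y[o]]
          res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
          return res.fun

      for o in range(n):
          print(f"hospital {o + 1}: efficiency {ccr_efficiency(o):.3f}")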

  2. Review: To be or not to be an identifiable model. Is this a relevant question in animal science modelling?

    PubMed

    Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P

    2018-04-01

    What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.

  3. A Petri Net Approach Based Elementary Siphons Supervisor for Flexible Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Abdul-Hussin, Mowafak Hassan

    2015-05-01

    This paper presents an approach to constructing a class of S3PR nets for the modeling, simulation and control of processes occurring in flexible manufacturing systems (FMS), based on the elementary siphons of a Petri net. Siphons are central to the analysis and control of deadlocks in FMS, which makes their identification a significant objective. Petri net models support structural and efficiency analysis of FMS, and different control policies can be implemented that lead to deadlock prevention. We present an effective deadlock-free policy for a special class of Petri nets called S3PR. Structural analysis and reachability graph analysis are used for the analysis and control of the nets. Petri nets have been successfully established as one of the most powerful tools for modelling FMS; using structural analysis, we show that liveness of such systems can be attributed to the absence of undermarked siphons.

  4. Analysis of the psychometric properties of the American Orthopaedic Foot and Ankle Society Score (AOFAS) in rheumatoid arthritis patients: application of the Rasch model.

    PubMed

    Conceição, Cristiano Sena da; Neto, Mansueto Gomes; Neto, Anolino Costa; Mendes, Selena M D; Baptista, Abrahão Fontes; Sá, Kátia Nunes

    2016-01-01

    To test the reliability and validity of the AOFAS in a sample of rheumatoid arthritis patients. The scale was applied to rheumatoid arthritis patients, twice by interviewer 1 and once by interviewer 2. The AOFAS was subjected to test-retest reliability analysis (with 20 rheumatoid arthritis subjects). The psychometric properties were investigated using Rasch analysis on 33 rheumatoid arthritis patients. The intra-class correlation coefficients (ICC) were (0.90

  5. Using Set Covering with Item Sampling to Analyze the Infeasibility of Linear Programming Test Assembly Models

    ERIC Educational Resources Information Center

    Huitzing, Hiddo A.

    2004-01-01

    This article shows how set covering with item sampling (SCIS) methods can be used in the analysis and preanalysis of linear programming models for test assembly (LPTA). LPTA models can construct tests, fulfilling a set of constraints set by the test assembler. Sometimes, no solution to the LPTA model exists. The model is then said to be…

  6. Combat Simulation Using Breach Computer Language

    DTIC Science & Technology

    1979-09-01

    BREACH is a combat simulation and weapon system analysis computer language. Two types of models were constructed: a stochastic duel and a dynamic engagement model. The duel model validates the BREACH approach by comparing results with mathematical solutions. The dynamic model shows the capability of the BREACH language (abstract truncated). [Table-of-contents residue omitted: BREACH; Background; The Language; Static Duel; Background and Methodology; Validation; Results; Tank Duel Simulation; Dynamic Assault Model.]

  7. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
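
    In standard notation (assumed here, not quoted from the manuscript), the adjoint trick that decouples gradient cost from parameter count can be sketched as

      \dot{x} = f(x, \theta), \qquad
      J(\theta) = \sum_{k} \lVert y_k - h(x(t_k)) \rVert^{2},

      \dot{\lambda} = -\Big( \frac{\partial f}{\partial x} \Big)^{\top} \lambda
      \;\; \text{(integrated backward in time, with jumps at the measurement times } t_k \text{)},
      \qquad
      \frac{dJ}{d\theta} = \int_0^T \lambda^{\top} \frac{\partial f}{\partial \theta} \, dt,

    so one forward solve and one backward (adjoint) solve deliver the full gradient, which is why the complexity is effectively independent of the number of parameters.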

  8. Binding free energy analysis of protein-protein docking model structures by evERdock.

    PubMed

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-14

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on the binding free energy and found that consideration of the entropy term does not necessarily improve the evaluation of decoys when normal mode analysis is used for the entropy calculation.

  9. Binding free energy analysis of protein-protein docking model structures by evERdock

    NASA Astrophysics Data System (ADS)

    Takemura, Kazuhiro; Matubayasi, Nobuyuki; Kitao, Akio

    2018-03-01

    To aid the evaluation of protein-protein complex model structures generated by protein docking prediction (decoys), we previously developed a method to calculate the binding free energies for complexes. The method combines a short (2 ns) all-atom molecular dynamics simulation with explicit solvent and solution theory in the energy representation (ER). We showed that this method successfully selected structures similar to the native complex structure (near-native decoys) as the lowest binding free energy structures. In our current work, we applied this method (evERdock) to 100 or 300 model structures of four protein-protein complexes. The crystal structures and the near-native decoys showed the lowest binding free energy of all the examined structures, indicating that evERdock can successfully evaluate decoys. Several decoys that show low interface root-mean-square distance but relatively high binding free energy were also identified. Analysis of the fraction of native contacts, hydrogen bonds, and salt bridges at the protein-protein interface indicated that these decoys were insufficiently optimized at the interface. After optimizing the interactions around the interface by including interfacial water molecules, the binding free energies of these decoys were improved. We also investigated the effect of solute entropy on the binding free energy and found that consideration of the entropy term does not necessarily improve the evaluation of decoys when normal mode analysis is used for the entropy calculation.

  10. Growth, characterization and estimation of lattice strain and size in CdS nanoparticles: X-ray peak profile analysis

    NASA Astrophysics Data System (ADS)

    Solanki, Rekha Garg; Rajaram, Poolla; Bajpai, P. K.

    2018-05-01

    This work concerns the growth and characterization of CdS nanoparticles and the estimation of their lattice strain and crystallite size by X-ray peak profile analysis. The CdS nanoparticles were synthesized by a non-aqueous solvothermal method and were characterized by powder X-ray diffraction (XRD), transmission electron microscopy (TEM), Raman and UV-visible spectroscopy. XRD confirms that the CdS nanoparticles have the hexagonal structure. The Williamson-Hall (W-H) method was used for the X-ray peak profile analysis, and the strain-size plot (SSP) was used to separate the individual contributions of crystallite size and lattice strain to the X-ray peak broadening. The physical parameters such as strain, stress and energy density were calculated using various models, namely the isotropic strain model, the anisotropic strain model and the uniform deformation energy density model. The particle size was estimated from the TEM images to be in the range of 20-40 nm. The Raman spectrum shows the characteristic optical 1LO and 2LO vibrational modes of CdS. UV-visible absorption studies show that the band gap of the CdS nanoparticles is 2.48 eV. The results show that the crystallite size estimated from Scherrer's formula, the W-H plots and the SSP and the particle size calculated from the TEM images are in close agreement.
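
    A small sketch of the uniform-deformation Williamson-Hall fit described, with made-up corrected peak widths: β·cosθ is regressed on 4·sinθ, the slope giving the microstrain and the intercept Kλ/D.

      import numpy as np

      wavelength, K = 1.5406e-10, 0.9          # Cu K-alpha (m), Scherrer constant
      # Hypothetical peak positions (2-theta, deg) and corrected FWHM (deg)
      two_theta = np.array([24.8, 26.5, 28.2, 43.7, 47.8, 51.8])
      beta_deg = np.array([0.42, 0.40, 0.43, 0.50, 0.53, 0.55])

      theta = np.deg2rad(two_theta / 2)
      beta = np.deg2rad(beta_deg)

      # W-H (uniform strain): beta*cos(theta) = K*lambda/D + 4*strain*sin(theta)
      y = beta * np.cos(theta)
      x = 4 * np.sin(theta)
      strain, intercept = np.polyfit(x, y, 1)
      D = K * wavelength / intercept           # crystallite size (m)
      print(f"crystallite size: {D * 1e9:.1f} nm, microstrain: {strain:.2e}")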

  11. Validation of cone beam computed tomography-based tooth printing using different three-dimensional printing technologies.

    PubMed

    Khalil, Wael; EzEldeen, Mostafa; Van De Casteele, Elke; Shaheen, Eman; Sun, Yi; Shahbazian, Maryam; Olszewski, Raphael; Politis, Constantinus; Jacobs, Reinhilde

    2016-03-01

    Our aim was to determine the accuracy of 3-dimensional reconstructed models of teeth compared with natural teeth by using 4 different 3-dimensional printers. This in vitro study was carried out using 2 intact, dry adult human mandibles, which were scanned with cone beam computed tomography. Premolars were selected for this study. Dimensional differences between natural teeth and the printed models were evaluated directly by using volumetric differences and indirectly through optical scanning. Analysis of variance, Pearson correlation, and Bland-Altman plots were applied for statistical analysis. Volumetric measurements from natural teeth and fabricated models, either by the direct method (the Archimedes principle) or by the indirect method (optical scanning), showed no statistical differences. The mean volume difference ranged between 3.1 mm³ (0.7%) and 4.4 mm³ (1.9%) for the direct measurement, and between -1.3 mm³ (-0.6%) and 11.9 mm³ (+5.9%) for the optical scan. A surface part comparison analysis showed that 90% of the values revealed a distance deviation within the interval 0 to 0.25 mm. Current results showed a high accuracy of all printed models of teeth compared with natural teeth. This outcome opens perspectives for clinical use of cost-effective 3-dimensional printed teeth for surgical procedures, such as tooth autotransplantation. Copyright © 2016 Elsevier Inc. All rights reserved.
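
    For readers unfamiliar with the agreement statistics used here, Bland-Altman limits of agreement are simply the mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch with hypothetical volumes, not the study's measurements:

        import numpy as np

        # Hypothetical paired volumes (mm^3): natural teeth vs. printed models.
        natural = np.array([412.0, 455.3, 430.8, 398.6, 470.1, 441.9])
        printed = np.array([415.2, 452.1, 435.0, 401.3, 468.0, 446.5])

        diff = printed - natural
        bias = diff.mean()                          # systematic offset
        sd = diff.std(ddof=1)
        loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

        print(f"bias = {bias:.2f} mm^3, LoA = [{loa[0]:.2f}, {loa[1]:.2f}] mm^3")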

  12. Spatial pattern evolution of Aedes aegypti breeding sites in an Argentinean city without a dengue vector control programme.

    PubMed

    Espinosa, Manuel O; Polop, Francisco; Rotela, Camilo H; Abril, Marcelo; Scavuzzo, Carlos M

    2016-11-21

    The main objective of this study was to obtain and analyse the space-time dynamics of Aedes aegypti breeding sites in Clorinda City, Formosa Province, Argentina coupled with landscape analysis using the maximum entropy approach in order to generate a dengue vector niche model. In urban areas, without vector control activities, 12 entomologic (larval) samplings were performed during three years (October 2011 to October 2014). The entomologic surveillance area represented 16,511 houses. Predictive models for Aedes distribution were developed using vector breeding abundance data, density analysis, clustering and geoprocessing techniques coupled with Earth observation satellite data. The spatial analysis showed a vector spatial distribution pattern with clusters of high density in the central region of Clorinda with a well-defined high-risk area in the western part of the city. It also showed a differential temporal behaviour among different areas, which could have implications for risk models and control strategies at the urban scale. The niche model obtained for Ae. aegypti, based on only one year of field data, showed that 85.8% of the distribution of breeding sites is explained by the percentage of water supply (48.2%), urban distribution (33.2%), and the percentage of urban coverage (4.4%). The consequences for the development of control strategies are discussed with reference to the results obtained using distribution maps based on environmental variables.

  13. Development of esMOCA Biomechanic, Motion Capture Instrumentation for Biomechanics Analysis

    NASA Astrophysics Data System (ADS)

    Arendra, A.; Akhmad, S.

    2018-01-01

    This study aims to build a motion capture instrument using inertial measurement unit sensors to assist in the analysis of biomechanics. The sensors used are an accelerometer and a gyroscope. Orientation estimation is done by digital motion processing in each sensor node. There are nine sensor nodes attached to the upper limbs, connected to the PC via a wireless sensor network. The kinematic and inverse dynamic models of the upper limb were developed in Simulink SimMechanics. The kinematic model receives streaming data from the sensor nodes mounted on the limbs; its output is the pose of each limb, visualized on a display. The inverse dynamic model outputs the reaction force and reaction moment of each joint based on the limb motion input. Validation of the Simulink model against a mathematical model from mechanical analysis showed results that did not differ significantly.

  14. Design, analysis and verification of a knee joint oncological prosthesis finite element model.

    PubMed

    Zach, Lukáš; Kunčická, Lenka; Růžička, Pavel; Kocich, Radim

    2014-11-01

    The aim of this paper was to design a finite element model for a hinged PROSPON oncological knee endoprosthesis and to verify the model by comparison with ankle flexion angle using knee-bending experimental data obtained previously. Visible Human Project CT scans were used to create a general lower extremity bones model and to compose a 3D CAD knee joint model to which muscles and ligaments were added. Into the assembly the designed finite element PROSPON prosthesis model was integrated and an analysis focused on the PEEK-OPTIMA hinge pin bushing stress state was carried out. To confirm the stress state analysis results, contact pressure was investigated. The analysis was performed in the knee-bending position within 15.4-69.4° hip joint flexion range. The results showed that the maximum stress achieved during the analysis (46.6 MPa) did not exceed the yield strength of the material (90 MPa); the condition of plastic stability was therefore met. The stress state analysis results were confirmed by the distribution of contact pressure during knee-bending. The applicability of our designed finite element model for the real implant behaviour prediction was proven on the basis of good correlation of the analytical and experimental ankle flexion angle data. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. A Bayesian Multinomial Probit Model for the Analysis of Panel Choice Data.

    PubMed

    Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S

    2016-03-01

    A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov Chain Monte Carlo algorithm to compute our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset that reveals new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.

  16. A Front-End Analysis Of Rear-End Crashes

    DOT National Transportation Integrated Search

    1992-05-17

    This paper describes the application of a seven-step crash problem analysis methodology, as described in the preceding paper by Leasure (1), to rear-end crashes. The paper shows how modeling of rear-end crash scenarios and candidate countermeasure ac...

  17. Chromaticity effects on head-tail instabilities for broadband impedance using two particle model, Vlasov analysis, and simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.

    Effects of the chromaticity on head-tail instabilities for broadband impedances are comprehensively studied, using the two particle model, the Vlasov analysis and computer simulations. We show both in the two particle model and the Vlasov analysis with the trapezoidal (semiconstant) wake model that we can derive universal contour plots for the growth factor as a function of the two dimensionless parameters: the wakefield strength, Υ, and the difference of the betatron phase advances between the head and the tail, χ. They reveal how the chromaticity affects strong head-tail instabilities and excites head-tail instabilities. We also apply the LEP (Large Electron-Positron Collider) broadband resonator model to the Vlasov approach and find that the results are in very good agreement with those of the trapezoidal wake model. The theoretical findings are also reinforced by the simulation results. In conclusion, the trapezoidal wake model turns out to be a very useful tool since it significantly simplifies the time domain analysis and provides well-behaved impedance at the same time.

  18. Chromaticity effects on head-tail instabilities for broadband impedance using two particle model, Vlasov analysis, and simulations

    DOE PAGES

    Chin, Yong Ho; Chao, Alexander Wu; Blaskiewicz, Michael M.; ...

    2017-07-28

    Effects of the chromaticity on head-tail instabilities for broadband impedances are comprehensively studied, using the two particle model, the Vlasov analysis and computer simulations. We show both in the two particle model and the Vlasov analysis with the trapezoidal (semiconstant) wake model that we can derive universal contour plots for the growth factor as a function of the two dimensionless parameters: the wakefield strength, Υ, and the difference of the betatron phase advances between the head and the tail, χ. They reveal how the chromaticity affects strong head-tail instabilities and excites head-tail instabilities. We also apply the LEP (Large Electron-Positron Collider) broadband resonator model to the Vlasov approach and find that the results are in very good agreement with those of the trapezoidal wake model. The theoretical findings are also reinforced by the simulation results. In conclusion, the trapezoidal wake model turns out to be a very useful tool since it significantly simplifies the time domain analysis and provides well-behaved impedance at the same time.

  19. Orbital transfer rocket engine technology 7.5K-LB thrust rocket engine preliminary design

    NASA Technical Reports Server (NTRS)

    Harmon, T. J.; Roschak, E.

    1993-01-01

    A preliminary design of an advanced LOX/LH2 expander cycle rocket engine producing 7,500 lbf thrust for Orbital Transfer Vehicle missions was completed. Engine system, component, and turbomachinery analyses at both on-design and off-design conditions were completed. The preliminary design analysis results showed that engine requirements and performance goals were met. Computer models are described and model outputs are presented. Engine system assembly layouts, component layouts, and valve and control system analysis are presented. Major design technologies were identified, and remaining issues and concerns were listed.

  20. Power-law exponent of the Bouchaud-Mézard model on regular random networks

    NASA Astrophysics Data System (ADS)

    Ichinomiya, Takashi

    2013-07-01

    We study the Bouchaud-Mézard model on a regular random network. By assuming adiabaticity and independence, and utilizing the generalized central limit theorem and the Tauberian theorem, we derive an equation that determines the exponent of the probability distribution function of the wealth as x→∞. The analysis shows that the exponent can be smaller than 2, while a mean-field analysis always gives the exponent as being larger than 2. The results of our analysis are shown to be in good agreement with those of the numerical simulations.
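
    The Bouchaud-Mézard dynamics can also be explored numerically. Below is a minimal Euler-Maruyama sketch of dW_i = J Σ_{j∈∂i}(W_j − W_i) dt + √2 σ W_i dB_i on a random k-regular graph, with a crude Hill estimate of the tail exponent; all parameter values are hypothetical and this is an illustration of the model, not the authors' derivation:

        import numpy as np
        import networkx as nx

        N, k, J, sigma = 1000, 4, 0.1, 0.3   # agents, degree, exchange rate, volatility
        dt, steps = 0.01, 20000
        rng = np.random.default_rng(0)

        A = nx.to_numpy_array(nx.random_regular_graph(k, N, seed=0))
        W = np.ones(N)

        for _ in range(steps):
            exchange = J * (A @ W - k * W)   # wealth exchange with graph neighbors
            noise = sigma * W * np.sqrt(2 * dt) * rng.standard_normal(N)
            W = np.maximum(W + exchange * dt + noise, 1e-12)

        W /= W.mean()
        tail = np.sort(W)[-100:]             # largest normalized wealths
        # crude Hill estimate of the pdf exponent, p(w) ~ w^(-alpha) for large w
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / tail[0]))
        print(f"estimated tail exponent ~ {alpha:.2f}")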

  1. Pharmacokinetic and pharmacodynamic model for analysis of adalimumab administered for Crohn's disease.

    PubMed

    Kimura, Koji; Yoshida, Atsushi; Takayanagi, Risa; Yamada, Yasuhiko

    2018-05-23

    Adalimumab (ADA) is used as a therapeutic agent for Crohn's disease (CD). Although its dosage regimen has been established through clinical trial experience, it has not been analyzed theoretically. In the present study, we analyzed the sequential changes of the Crohn's disease activity index (CDAI) after repeated administrations of ADA using a pharmacokinetic and pharmacodynamic model. In addition, we analyzed the validity of the dosage regimen, and the potential efficacy gained by increasing the dose and reducing the interval of administration. The sequential changes in CDAI values obtained with our model were in good agreement with observed CDAI values, which was considered to show the validity of our analysis. We considered that our results showed the importance of the loading dose of ADA to obtain remission in an early stage of active CD. In addition, we showed that patients who have an incomplete response to ADA can obtain similar efficacy from increasing the dose and reducing the dose interval. In conclusion, our results showed that the present model may be applied to predict the CDAI values of ADA for CD. They indicated the validity of the dosage regimen, as well as the efficacy of increasing the dose and reducing the dose interval. This article is protected by copyright. All rights reserved.

  2. Divergent predictions of carbon storage between two global land models: attribution of the causes through traceability analysis

    NASA Astrophysics Data System (ADS)

    Rafique, Rashid; Xia, Jianyang; Hararuk, Oleksandra; Asrar, Ghassem R.; Leng, Guoyong; Wang, Yingping; Luo, Yiqi

    2016-07-01

    Representations of the terrestrial carbon cycle in land models are becoming increasingly complex. It is crucial to develop approaches for critical assessment of the complex model properties in order to understand key factors contributing to models' performance. In this study, we applied a traceability analysis, which decomposes carbon cycle models into traceable components, to two global land models (CABLE and CLM-CASA') to diagnose the causes of their differences in simulating ecosystem carbon storage capacity. Driven with similar forcing data, CLM-CASA' predicted ~31 % larger carbon storage capacity than CABLE. Since ecosystem carbon storage capacity is a product of net primary productivity (NPP) and ecosystem residence time (τE), the predicted difference in the storage capacity between the two models results from differences in either NPP or τE or both. Our analysis showed that CLM-CASA' simulated 37 % higher NPP than CABLE. On the other hand, τE, which was a function of the baseline carbon residence time (τ'E) and environmental effect on carbon residence time, was on average 11 years longer in CABLE than CLM-CASA'. This difference in τE was mainly caused by longer τ'E of woody biomass (23 vs. 14 years in CLM-CASA'), and higher proportion of NPP allocated to woody biomass (23 vs. 16 %). Differences in environmental effects on carbon residence times had smaller influences on differences in ecosystem carbon storage capacities compared to differences in NPP and τ'E. Overall, the traceability analysis showed that the major causes of the different carbon storage estimates were the parameter settings related to carbon input and baseline carbon residence times in the two models.
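
    The decomposition underlying the traceability analysis can be stated compactly. Writing the steady-state ecosystem carbon storage capacity as the product of carbon input and residence time (the ε notation for the environmental modifier is ours; the papers express the same dependence via an environmental scalar):

        X_{ss} = \mathrm{NPP} \times \tau_E, \qquad \tau_E = \tau'_E \cdot \varepsilon(T, W)

    so the 37 % NPP difference and the 11-year residence-time difference act in opposite directions and combine multiplicatively into the net ~31 % difference in storage capacity reported above.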

  3. Divergent predictions of carbon storage between two global land models: Attribution of the causes through traceability analysis

    DOE PAGES

    Rafique, Rashid; Xia, Jianyang; Hararuk, Oleksandra; ...

    2016-07-29

    Representations of the terrestrial carbon cycle in land models are becoming increasingly complex. It is crucial to develop approaches for critical assessment of the complex model properties in order to understand key factors contributing to models' performance. In this study, we applied a traceability analysis, which decomposes carbon cycle models into traceable components, to two global land models (CABLE and CLM-CASA') to diagnose the causes of their differences in simulating ecosystem carbon storage capacity. Driven with similar forcing data, CLM-CASA' predicted ~31 % larger carbon storage capacity than CABLE. Since ecosystem carbon storage capacity is a product of net primary productivity (NPP) and ecosystem residence time (τE), the predicted difference in the storage capacity between the two models results from differences in either NPP or τE or both. Our analysis showed that CLM-CASA' simulated 37 % higher NPP than CABLE. On the other hand, τE, which was a function of the baseline carbon residence time (τ'E) and environmental effect on carbon residence time, was on average 11 years longer in CABLE than CLM-CASA'. This difference in τE was mainly caused by longer τ'E of woody biomass (23 vs. 14 years in CLM-CASA'), and higher proportion of NPP allocated to woody biomass (23 vs. 16 %). Differences in environmental effects on carbon residence times had smaller influences on differences in ecosystem carbon storage capacities compared to differences in NPP and τ'E. Altogether, the traceability analysis showed that the major causes of the different carbon storage estimates were the parameter settings related to carbon input and baseline carbon residence times in the two models.

  4. Divergent predictions of carbon storage between two global land models: Attribution of the causes through traceability analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rafique, Rashid; Xia, Jianyang; Hararuk, Oleksandra

    Representations of the terrestrial carbon cycle in land models are becoming increasingly complex. It is crucial to develop approaches for critical assessment of the complex model properties in order to understand key factors contributing to models' performance. In this study, we applied a traceability analysis, which decomposes carbon cycle models into traceable components, to two global land models (CABLE and CLM-CASA') to diagnose the causes of their differences in simulating ecosystem carbon storage capacity. Driven with similar forcing data, CLM-CASA' predicted ~31 % larger carbon storage capacity than CABLE. Since ecosystem carbon storage capacity is a product of net primary productivity (NPP) and ecosystem residence time (τE), the predicted difference in the storage capacity between the two models results from differences in either NPP or τE or both. Our analysis showed that CLM-CASA' simulated 37 % higher NPP than CABLE. On the other hand, τE, which was a function of the baseline carbon residence time (τ'E) and environmental effect on carbon residence time, was on average 11 years longer in CABLE than CLM-CASA'. This difference in τE was mainly caused by longer τ'E of woody biomass (23 vs. 14 years in CLM-CASA'), and higher proportion of NPP allocated to woody biomass (23 vs. 16 %). Differences in environmental effects on carbon residence times had smaller influences on differences in ecosystem carbon storage capacities compared to differences in NPP and τ'E. Altogether, the traceability analysis showed that the major causes of the different carbon storage estimates were the parameter settings related to carbon input and baseline carbon residence times in the two models.

  5. Realism of Indian Summer Monsoon Simulation in a Quarter Degree Global Climate Model

    NASA Astrophysics Data System (ADS)

    Salunke, P.; Mishra, S. K.; Sahany, S.; Gupta, K.

    2017-12-01

    This study assesses the fidelity of Indian Summer Monsoon (ISM) simulations using a global model at an ultra-high horizontal resolution (UHR) of 0.25°. The model used was the atmospheric component of the Community Earth System Model version 1.2.0 (CESM 1.2.0) developed at the National Center for Atmospheric Research (NCAR). Precipitation and temperature over the Indian region were analyzed for a wide range of space and time scales to evaluate the fidelity of the model under UHR, with special emphasis on the ISM simulations during June through September (JJAS). Comparing the UHR simulations with observed data from the India Meteorological Department (IMD) over the Indian landmass, it was found that the 0.25° resolution significantly improved spatial rainfall patterns over many regions, including the Western Ghats and the South-Eastern peninsula, as compared to the standard model resolution. Convective and large-scale rainfall components were analyzed using the European Centre for Medium-Range Weather Forecasts (ECMWF) Re-Analysis (ERA)-Interim (ERA-I) data, and it was found that at 0.25° resolution there was an overall increase in the large-scale component and an associated decrease in the convective component of rainfall as compared to the standard model resolution. Analysis of the diurnal cycle of rainfall suggests a significant improvement in the phase characteristics simulated by the UHR model as compared to the standard model resolution. Analysis of the annual cycle of rainfall, however, failed to show any significant improvement in the UHR model as compared to the standard version. Surface temperature analysis showed small improvements in the UHR model simulations as compared to the standard version. Thus, one may conclude that there are some significant improvements in the ISM simulations using a 0.25° global model, although there is still plenty of scope for further improvement in certain aspects of the annual cycle of rainfall.

  6. Testing and Analysis of Sensor Ports

    NASA Technical Reports Server (NTRS)

    Zhang, M.; Frendi, A.; Thompson, W.; Casiano, M. J.

    2016-01-01

    This Technical Publication summarizes the work focused on the testing and analysis of sensor ports. The tasks under this contract were divided into three areas: (1) Development of an Analytical Model, (2) Conducting a Set of Experiments, and (3) Obtaining Computational Solutions. Results from the experiment using both short and long sensor ports were obtained using harmonic, random, and frequency sweep plane acoustic waves. An amplification factor of the pressure signal between the port inlet and the back of the port is obtained and compared to models. Comparisons of model and experimental results showed very good agreement.

  7. AR(p)-based detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Alvarez-Ramirez, J.; Rodriguez, E.

    2018-07-01

    Autoregressive models are commonly used for modeling time series from nature, economics and finance. This work explored simple autoregressive AR(p) models to remove long-term trends in detrended fluctuation analysis (DFA). Crude oil prices and the bitcoin exchange rate were considered, with the former corresponding to a mature market and the latter to an emergent market. Results showed that AR(p)-based DFA performs similarly to traditional DFA. However, the AR(p)-based variant provides information on the stability of long-term trends, which is valuable for understanding and quantifying the dynamics of complex time series from financial systems.
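
    A minimal sketch of one plausible reading of the construction, replacing the polynomial detrending in each DFA window with an AR(p) fit; it assumes statsmodels' AutoReg and is an illustration, not the authors' implementation:

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg

        def arp_dfa(x, p=2, scales=(16, 32, 64, 128, 256)):
            """DFA variant with AR(p) detrending of each window of the profile."""
            profile = np.cumsum(x - np.mean(x))
            F = []
            for s in scales:
                rms = []
                for w in range(len(profile) // s):
                    seg = profile[w * s:(w + 1) * s]
                    fit = AutoReg(seg, lags=p, trend="c").fit()
                    resid = seg[p:] - fit.fittedvalues   # in-sample residuals
                    rms.append(np.sqrt(np.mean(resid ** 2)))
                F.append(np.mean(rms))
            # scaling exponent = slope of log F(s) vs. log s
            return np.polyfit(np.log(scales), np.log(F), 1)[0]

        rng = np.random.default_rng(1)
        print(arp_dfa(rng.standard_normal(4096)))  # exponent for a white-noise input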

  8. Measuring the performance of Internet companies using a two-stage data envelopment analysis model

    NASA Astrophysics Data System (ADS)

    Cao, Xiongfei; Yang, Feng

    2011-05-01

    In exploring the business operation of Internet companies, few researchers have used data envelopment analysis (DEA) to evaluate their performance. Since Internet companies have a two-stage production process (marketability and profitability), this study employs a relational two-stage DEA model to assess the efficiency of 40 dot-com firms. The results show that our model performs better in measuring efficiency and is able to discriminate the causes of inefficiency, thus helping business management to be more effective by providing more guidance for business performance improvement.
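
    The building block of any DEA evaluation, including two-stage variants, is one small linear program per decision-making unit (DMU). Below is a sketch of the standard input-oriented CCR model in multiplier form using scipy, with hypothetical data; the paper's relational two-stage model links two such stages rather than solving a single one:

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            """Input-oriented CCR efficiency of DMU o.
            X: (m, n) inputs and Y: (s, n) outputs for n DMUs."""
            m, n = X.shape
            s = Y.shape[0]
            c = np.concatenate([-Y[:, o], np.zeros(m)])           # maximize u'y_o
            A_eq = np.concatenate([np.zeros(s), X[:, o]])[None]   # v'x_o = 1
            A_ub = np.hstack([Y.T, -X.T])                         # u'y_j - v'x_j <= 0
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
            return -res.fun

        # Hypothetical data: 2 inputs, 1 output, 4 firms.
        X = np.array([[2.0, 3.0, 4.0, 5.0],
                      [3.0, 1.0, 2.0, 4.0]])
        Y = np.array([[1.0, 1.0, 1.0, 1.0]])
        print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])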

  9. The Cognitive-Miser Response Model: Testing for Intuitive and Deliberate Reasoning

    ERIC Educational Resources Information Center

    Bockenholt, Ulf

    2012-01-01

    In a number of psychological studies, answers to reasoning vignettes have been shown to result from both intuitive and deliberate response processes. This paper utilizes a psychometric model to separate these two response tendencies. An experimental application shows that the proposed model facilitates the analysis of dual-process item responses…

  10. Variability of basin scale water resources indicators derived from global hydrological and land surface models

    NASA Astrophysics Data System (ADS)

    Werner, Micha; Blyth, Eleanor; Schellekens, Jaap

    2016-04-01

    Global hydrological and land-surface models are becoming increasingly available, and as their resolution improves, as well as how hydrological processes are represented, so does their potential. These offer consistent datasets at the global scale, which can be used to establish water balances and derive policy-relevant indicators in medium to large basins, including those that are poorly gauged. However, differences in model structure, model parameterisation, and model forcing may result in quite different indicator values being derived, depending on the model used. In this paper we explore indicators developed using four land surface models (LSMs) and five global hydrological models (GHMs). Results from these models have been made available through the Earth2Observe project, a recent research initiative funded by the European Union 7th Research Framework. All models have a resolution of 0.5 arc degrees, and are forced using the same WATCH-ERA-Interim (WFDEI) meteorological re-analysis data at a daily time step for the 32-year period from 1979 to 2012. We explore three water resources indicators: an aridity index; a simplified water exploitation index; and an indicator that calculates the frequency of occurrence of root zone stress. We compare indicators derived over selected areas/basins in Europe, Colombia, Southern Africa, the Indian Subcontinent and Australia/New Zealand. The hydrological fluxes calculated show quite significant differences between the nine models, despite the common forcing dataset, with these differences reflected in the indicators subsequently derived. The results show that the variability between models is related to the different climate types, with that variability quite logically depending largely on the availability of water. Patterns are also found in the type of models that dominate different parts of the distribution of the indicator values, with LSMs providing lower values and GHMs providing higher values in some climates, and vice versa in others. How important this variability is in supporting a policy decision depends largely on how decision thresholds are set. For example, in the case of the aridity index, with areas being denoted as arid with an index of 0.6 or above, we show that the variability is primarily of interest in transitional climates, such as the Mediterranean. The analysis shows that while both LSMs and GHMs provide useful data, indices derived to support water resources management planning may differ substantially, depending on the model used. The analysis also identifies in which climates improvements to the models are particularly relevant to support the confidence with which decisions can be taken based on derived indicators.
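
    As an illustration of how such an indicator is thresholded, the sketch below computes a toy aridity index on a small grid. It assumes a definition in which larger values are more arid, so that the 0.6 cut-off mentioned above marks arid cells; the project's exact definition may differ:

        import numpy as np

        # Hypothetical gridded annual means: precipitation P and potential
        # evapotranspiration PET (mm/yr) from one model run.
        P   = np.array([[300.0,  800.0], [1200.0,  450.0]])
        PET = np.array([[1500.0, 900.0], [1000.0, 1400.0]])

        aridity = 1.0 - np.clip(P / PET, 0.0, 1.0)  # larger = more arid (assumed form)
        arid_mask = aridity >= 0.6                  # threshold quoted in the text
        print(aridity.round(2))
        print(arid_mask)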

  11. Evaluation of a Theory of Instructional Sequences for Physics Instruction

    NASA Astrophysics Data System (ADS)

    Wackermann, Rainer; Trendel, Georg; Fischer, Hans E.

    2010-05-01

    The background of the study is the theory of basis models of teaching and learning, a comprehensive set of models of learning processes which includes, for example, learning through experience and problem-solving. The combined use of different models of learning processes has not been fully investigated, and it is frequently not clear under what circumstances a particular model should be used by teachers. In contrast, the theory under investigation here gives guidelines for choosing a particular model and provides instructional sequences for each model. The aim is to investigate the implementation of the theory applied to physics instruction and to show whether possible effects for the students may be attributed to the use of the theory. Therefore, a theory-oriented education programme for 18 physics teachers was developed and implemented in the 2005/06 school year. The main features of the intervention consisted of coaching physics lessons and video analysis according to the theory. The study follows a pre/post-treatment design with a non-equivalent control group. Findings of repeated-measures ANOVAs show large effects for teachers' subjective beliefs, large effects for classroom actions, and small to medium effects for student outcomes such as perceived instructional quality and student emotions. The teachers/classes that applied the theory especially well according to the video analysis showed the larger effects. The results showed that differentiating between different models of learning processes improves physics instruction. Effects can be followed through to student outcomes. The education programme effect was clearer for classroom actions and students' outcomes than for teachers' beliefs.

  12. Model-independent curvature determination with 21 cm intensity mapping experiments

    NASA Astrophysics Data System (ADS)

    Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda

    2018-06-01

    Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21 cm intensity mapping experiments such as Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.

  13. Interplanetary density models as inferred from solar Type III bursts

    NASA Astrophysics Data System (ADS)

    Oppeneiger, Lucas; Boudjada, Mohammed Y.; Lammer, Helmut; Lichtenegger, Herbert

    2016-04-01

    We report on the density models derived from spectral features of solar Type III bursts. These bursts are generated by beams of electrons travelling outward from the Sun along open magnetic field lines. The electrons generate Langmuir waves at the plasma frequency along their ray paths through the corona and the interplanetary medium. Type III bursts cover a large frequency band, from several MHz down to a few kHz. In this analysis, we consider the previous empirical density models proposed to describe the electron density in the interplanetary medium. We show that those models are mainly based on the analysis of Type III bursts generated in the interplanetary medium and observed by satellites (e.g. RAE, HELIOS, VOYAGER, ULYSSES, WIND). Those models are compared with stereoscopic observations of Type III bursts recorded by the WIND, ULYSSES and CASSINI spacecraft. We discuss the spatial evolution of the electron beam along the interplanetary medium, where the trajectory is an Archimedean spiral. We show that the derived electron beam trajectories and source locations depend on the choice of empirical density model.

  14. Dust Storm Monitoring Using Satellite Observatory and Numerical Modeling Analysis

    NASA Astrophysics Data System (ADS)

    Taghavi, Farahnaz

    In recent years, the frequency of dust pollution events in southwestern Iran has increased, causing huge damage and imposing negative impacts on air quality, airport traffic and people's daily lives in local areas. Dust storms in this area usually start with the formation of a low-pressure center over the Arabian Peninsula. The main objective of this study is to assess and monitor the movement of aerosols and pollution from the source of origin to local areas using satellite imagery and numerical modeling analysis. Observational analyses from NCEP, such as synoptic data (U-wind, V-wind, vorticity and divergence fields), upper-air radiosondes, measured visibility distributions and land cover data, are also used in model comparisons to show differences in the occurrence of dust events. The evolution and dynamics of this phenomenon are studied based on a method that modifies the initial state of NWP output using discrepancies between dynamic fields and WV imagery on a grid. Results show that satellite images offer a means to check the behavior of numerical models, and that using land cover data improves wind-blown dust modeling.

  15. Three Proposed Data Collection Models for Annual Inventories

    Treesearch

    Bill Bechtold; Ron McRoberts; Frank Spirek; Chuck Liff

    2005-01-01

    Three competing data collection models for the U.S. Department of Agriculture Forest Service Forest Inventory and Analysis (FIA) program's annual inventories are presented. We show that in the presence of panel creep, the model now in place does not meet requirements of an annual inventory system mandated by the 1998 Farm Bill. Two data-collection models that use...

  16. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    NASA Astrophysics Data System (ADS)

    Sabotinov, Luben; Chevrier, Patrick

    The best-estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for a post-test analysis of the experiment "11% upper plenum break", conducted at the large-scale PSB-VVER test facility in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model was developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separate loops, the pressurizer, horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive to the fuel cladding temperature, which shows periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up were reasonably predicted.

  17. Underwater Stirling engine design with modified one-dimensional model

    NASA Astrophysics Data System (ADS)

    Li, Daijin; Qin, Kan; Luo, Kai

    2015-09-01

    Stirling engines are regarded as an efficient and promising power system for underwater devices. Currently, one-dimensional models are widely used to evaluate the thermodynamic performance of Stirling engines, but some aspects, such as mechanical loss and auxiliary power, still lack proper mathematical models. In this paper, a four-cylinder double-acting Stirling engine for Unmanned Underwater Vehicles (UUVs) is discussed, and a one-dimensional model incorporating empirical equations for mechanical loss and auxiliary power obtained from experiments is derived with reference to the Stirling engine computer model of the National Aeronautics and Space Administration (NASA). The P-40 Stirling engine, for which sufficient test results are available from NASA, is used to validate the accuracy of this one-dimensional model. The maximum error in output power of the theoretical analysis is less than 18% relative to the test results, and the maximum error in input power is no more than 9%. Finally, a Stirling engine for UUVs is designed using the Schmidt analysis method and the modified one-dimensional model, and the results indicate that the designed engine is capable of delivering the desired output power.

  18. Influence of standardization on the precision (reproducibility) of dental cast analysis with virtual 3-dimensional models.

    PubMed

    Hayashi, Kazuo; Chung, Onejune; Park, Seojung; Lee, Seung-Pyo; Sachdeva, Rohit C L; Mizoguchi, Itaru

    2015-03-01

    Virtual 3-dimensional (3D) models obtained by scanning of physical casts have become an alternative to conventional dental cast analysis in orthodontic treatment. If the precision (reproducibility) of virtual 3D model analysis can be further improved, digital orthodontics could be even more widely accepted. The purpose of this study was to clarify the influence of "standardization" of the target points for dental cast analysis using virtual 3D models. Physical plaster models were also measured to obtain additional information. Five sets of dental casts were used. The dental casts were scanned with R700 (3Shape, Copenhagen, Denmark) and REXCAN DS2 3D (Solutionix, Seoul, Korea) scanners. In this study, 3 systems and software packages were used: SureSmile (OraMetrix, Richardson, Tex), Rapidform (Inus, Seoul, Korea), and I-DEAS (SDRC, Milford, Conn). Without standardization, the maximum differences were observed between the SureSmile software and the Rapidform software (0.39 mm ± 0.07). With standardization, the maximum differences were observed between the SureSmile software and measurements with a digital caliper (0.099 mm ± 0.01), and this difference was significantly greater (P <0.05) than the 2 other mean difference values. Furthermore, the results of this study showed that the mean differences with standardization were significantly lower than those without standardization for all systems, software packages, and methods. The results showed that elimination of the influence of usability or habituation is important for improving the reproducibility of dental cast analysis. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  19. Shift work, night work, and the risk of prostate cancer: A meta-analysis based on 9 cohort studies.

    PubMed

    Du, Hong-Bing; Bin, Kai-Yun; Liu, Wen-Hong; Yang, Feng-Sheng

    2017-11-01

    Epidemiology studies have suggested that shift work or night work may be linked to prostate cancer (PCa); the relationship, however, remains controversial. The PubMed, ScienceDirect, and Embase (Ovid) databases were searched from their inception to February 4, 2017 for eligible cohort studies. We pooled the included evidence with a random- or fixed-effect model, according to the heterogeneity. A predefined subgroup analysis was conducted to examine potential discrepancies between groups. Sensitivity analysis was used to test whether our results were stable. Nine cohort studies with 2,570,790 male subjects were eligible for meta-analysis. Our meta-analysis showed that, under the fixed-effect model, the pooled relative risk (RR) of PCa was 1.05 (95% confidence interval [CI]: 1.00, 1.11; P = .06; I² = 24.00%) for men who had ever engaged in night shift work; under the random-effect model, the pooled RR was 1.08 (0.99, 1.17; P = .08; I² = 24.00%). Subgroup analysis showed the RR of PCa among males in Western countries was 1.05 (95% CI: 0.99, 1.11; P = .09; I² = 0.00%), while among Asian countries it was 2.45 (95% CI: 1.19, 5.04; P = .02; I² = 0.00%); and the RR was 1.04 (95% CI: 0.95, 1.14; P = .40; I² = 29.20%) for the high-quality group compared with 1.21 (95% CI: 1.03, 1.41; P = .02; I² = 0.00%) for the moderate/low-quality group. Sensitivity analysis showed robust results. Based on the current evidence from cohort studies, we found no obvious association between night shift work and PCa. However, our subgroup analysis suggests that night shift work may increase the risk of PCa in Asian men. Some evidence of a small-study effect was observed in this meta-analysis.
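
    The pooling reported above follows standard inverse-variance logic: each study's log-RR is weighted by 1/SE², and the random-effect model inflates the variances by the DerSimonian-Laird τ² estimate. A sketch with hypothetical study values, not the nine cohorts analyzed here:

        import numpy as np

        # Hypothetical per-study relative risks and 95% CIs.
        rr = np.array([1.02, 1.10, 0.95, 1.20, 1.07])
        lo = np.array([0.90, 0.98, 0.80, 1.01, 0.93])
        hi = np.array([1.16, 1.23, 1.13, 1.43, 1.23])

        y = np.log(rr)
        se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE of log RR from the CI
        w = 1.0 / se**2                               # fixed-effect weights

        mu_fe = np.sum(w * y) / np.sum(w)             # fixed-effect pooled log RR

        # DerSimonian-Laird tau^2, then random-effect weights
        Q = np.sum(w * (y - mu_fe)**2)
        tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
        w_re = 1.0 / (se**2 + tau2)
        mu_re = np.sum(w_re * y) / np.sum(w_re)

        print(f"fixed RR = {np.exp(mu_fe):.3f}, random RR = {np.exp(mu_re):.3f}")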

  20. Development of a comprehensive model for stakeholder management in mental healthcare.

    PubMed

    Bierbooms, Joyce; Van Oers, Hans; Rijkers, Jeroen; Bongers, Inge

    2016-06-20

    Purpose - Stakeholder management is not yet incorporated into the standard practice of most healthcare providers. The purpose of this paper is to assess the applicability of a comprehensive model for stakeholder management in a mental healthcare organization for more evidence-based (stakeholder) management. Design/methodology/approach - The assessment was performed in two research parts: the steps described in the model were executed in a single case study at a mental healthcare organization in the Netherlands; and a process and effect evaluation was done to find the supporting and impeding factors with regard to the applicability of the model. Interviews were held with managers and directors to evaluate the effectiveness of the model with a view to stakeholder management. Findings - The stakeholder analysis resulted in the identification of eight stakeholder groups. Different expectations were identified for each of these groups. The analysis of performance gaps revealed that stakeholders generally find the collaboration with a mental healthcare provider "sufficient." Finally, a prioritization showed that five stakeholder groups were seen as "definite" stakeholders by the organization. Practical implications - The assessment of the model showed that it generated useful knowledge for more evidence-based (stakeholder) management. Adaptation of the model is needed to increase its feasibility in practice. Originality/value - Provided that the model is properly adapted for the specific field, the analysis can provide more knowledge on stakeholders and can help integrate stakeholder management as a comprehensive process in policy planning.

  1. Studies of oceanic tectonics based on GEOS-3 satellite altimetry

    NASA Technical Reports Server (NTRS)

    Poehls, K. A.; Kaula, W. M.; Schubert, G.; Sandwell, D.

    1979-01-01

    Using statistical analysis, geoidal admittance (the relationship between the ocean geoid and seafloor topography) obtained from GEOS-3 altimetry was compared to various model admittances. Analysis of several altimetry tracks in the Pacific Ocean demonstrated a low coherence between altimetry and seafloor topography except where the track crosses active or recent tectonic features. However, global statistical studies using the much larger data base of all available gravimetry showed a positive correlation of oceanic gravity with topography. The oceanic lithosphere was modeled by simultaneously inverting surface wave dispersion, topography, and gravity data. Efforts to incorporate geoid data into the inversion showed that the base of the subchannel can be better resolved with geoid rather than gravity data. Thermomechanical models of seafloor spreading taking into account differing plate velocities, heat source distributions, and rock rheologies were discussed.

  2. A Cross-Cultural Analysis of Personality Structure Through the Lens of the HEXACO Model.

    PubMed

    Ion, Andrei; Iliescu, Dragos; Aldhafri, Said; Rana, Neeti; Ratanadilok, Kattiya; Widyanti, Ari; Nedelcea, Cătălin

    2017-01-01

    Across 5 different samples, totaling more than 1,600 participants from India, Indonesia, Oman, Romania, and Thailand, the authors address the question of cross-cultural replicability of a personality structure, while exploring the utility of exploratory structural equation modeling (ESEM) as a data analysis technique in cross-cultural personality research. Personality was measured with an alternative, non-Five-Factor Model (FFM) personality framework, provided by the HEXACO-PI (Lee & Ashton, 2004 ). The results show that the HEXACO framework was replicated in some of the investigated cultures. The ESEM data analysis technique proved to be especially useful in investigating the between-group measurement equivalence of broad personality measures across different cultures.

  3. Analysis of the Multi-Strategy Goal Programming for Micro-Grid Based on Dynamic Ant Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Qiu, J. P.; Niu, D. X.

    The micro-grid is one of the key technologies of future energy supply. Taking the economic planning, reliability, and environmental protection of the micro-grid as a basis, we analyze multi-strategy goal programming problems for a micro-grid which contains wind power, solar power, a battery and a micro gas turbine. We establish mathematical models of the generation characteristics and energy dissipation of each source, and convert the multi-objective micro-grid planning function under different operating strategies to a single-objective model based on the AHP method. Example analysis shows that a dynamic ant mixed genetic algorithm can obtain the optimal power output of this model.

  4. Detecting robust signals of interannual variability of gross primary productivity in Asia from multiple terrestrial carbon cycle models and long-term satellite-based vegetation data

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Kondo, M.; Ueyama, M.; Kato, T.; Ito, A.; Sasai, T.; Sato, H.; Kobayashi, H.; Saigusa, N.

    2014-12-01

    Long-term records of satellite-based terrestrial vegetation are important for evaluating terrestrial carbon cycle models. In this study, we demonstrate how multiple satellite observations can be used to evaluate past changes in gross primary productivity (GPP) and to detect robust anomalies in the terrestrial carbon cycle in Asia through our model-data synthesis analysis, Asia-MIP. We focused on two temporal coverages: long-term (30 years; 1982-2011) and decadal (10 years; 2001-2011; the data-intensive period) scales. We used the NOAA/AVHRR NDVI record for the long-term analysis, multiple satellite data and products (e.g. Terra-MODIS, SPOT-VEGETATION) as historical satellite data, and multiple terrestrial carbon cycle models (e.g. BEAMS, Biome-BGC, ORCHIDEE, SEIB-DGVM, and VISIT). In the long-term (30-year) trend analysis, the satellite-based time series showed that approximately 40% of the area has experienced a significant increase in the NDVI, while only a few areas have experienced a significant decreasing trend over the last 30 years. The increases in the NDVI were dominant in the sub-continental regions of Siberia, East Asia, and India. Simulations using the terrestrial biosphere models also showed significant increases in GPP, similar to the results for the NDVI, in boreal and temperate regions. A modeled sensitivity analysis showed that the increases in GPP are explained by increased temperature and precipitation in Siberia. Precipitation, solar radiation, CO2 fertilization and land cover changes are important factors in the tropical regions. However, the relative contributions of each factor to GPP changes differ among the models. Year-to-year variations of terrestrial GPP were overall consistently captured by the satellite data and terrestrial carbon cycle models when the anomalies were large (e.g. the 2003 summer GPP anomalies in East Asia and the 2002 spring GPP anomalies in mid to high latitudes). The underlying mechanisms can be consistently explained by the models when the anomalies arise in low-temperature regions (e.g. spring in Northern Asia). However, water-driven or radiation-driven GPP anomalies lack a consistent explanation among the models. Therefore, terrestrial carbon cycle models require improved representation of the sensitivity of the carbon cycle to climate anomalies.

  5. Modeling the Benchmark Active Control Technology Wind-Tunnel Model for Application to Flutter Suppression

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.

    1996-01-01

    This paper describes the formulation of a model of the dynamic behavior of the Benchmark Active Controls Technology (BACT) wind-tunnel model for application to design and analysis of flutter suppression controllers. The model is formed by combining the equations of motion for the BACT wind-tunnel model with actuator models and a model of wind-tunnel turbulence. The primary focus of this paper is the development of the equations of motion from first principles using Lagrange's equations and the principle of virtual work. A numerical form of the model is generated using values for parameters obtained from both experiment and analysis. A unique aspect of the BACT wind-tunnel model is that it has upper- and lower-surface spoilers for active control. Comparisons with experimental frequency responses and other data show excellent agreement and suggest that simple coefficient-based aerodynamics are sufficient to accurately characterize the aeroelastic response of the BACT wind-tunnel model. The equations of motion developed herein have been used to assist the design and analysis of a number of flutter suppression controllers that have been successfully implemented.

  6. Granger causality--statistical analysis under a configural perspective.

    PubMed

    von Eye, Alexander; Wiedermann, Wolfgang; Mun, Eun-Young

    2014-03-01

    The concept of Granger causality can be used to examine putative causal relations between two series of scores. Based on regression models, it is asked whether one series can be considered the cause of the second series. In this article, we propose extending the pool of methods available for testing hypotheses that are compatible with Granger causation by adopting a configural perspective. This perspective allows researchers to assume that effects exist for specific categories only, or for specific sectors of the data space, but not for other categories or sectors. Configural Frequency Analysis (CFA) is proposed as the method of analysis from a configural perspective. CFA base models are derived for the exploratory analysis of Granger causation. These models are specified so that they parallel the regression models used for variable-oriented analysis of hypotheses of Granger causation. An example from the development of aggression in adolescence is used. The example shows that only one pattern of change in aggressive impulses over time Granger-causes change in physical aggression against peers.
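
    For the regression half of the argument, the conventional lag-regression test is easy to reproduce; the configural (CFA) extension proposed in the article is not shown here. A sketch assuming statsmodels, with synthetic data in which x drives y by construction:

        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        n = 500
        x = rng.standard_normal(n)
        y = np.zeros(n)
        for t in range(1, n):                 # y depends on lagged x
            y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

        data = np.column_stack([y, x])        # tests whether column 2 causes column 1
        res = grangercausalitytests(data, maxlag=2, verbose=False)
        print(res[1][0]["ssr_ftest"])         # F statistic and p-value at lag 1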

  7. Global ozone and air quality: a multi-model assessment of risks to human health and crops

    NASA Astrophysics Data System (ADS)

    Ellingsen, K.; Gauss, M.; van Dingenen, R.; Dentener, F. J.; Emberson, L.; Fiore, A. M.; Schultz, M. G.; Stevenson, D. S.; Ashmore, M. R.; Atherton, C. S.; Bergmann, D. J.; Bey, I.; Butler, T.; Drevet, J.; Eskes, H.; Hauglustaine, D. A.; Isaksen, I. S. A.; Horowitz, L. W.; Krol, M.; Lamarque, J. F.; Lawrence, M. G.; van Noije, T.; Pyle, J.; Rast, S.; Rodriguez, J.; Savage, N.; Strahan, S.; Sudo, K.; Szopa, S.; Wild, O.

    2008-02-01

    Within ACCENT, a European Network of Excellence, eighteen atmospheric models from the U.S., Europe, and Japan calculated present (2000) and future (2030) concentrations of ozone at the Earth's surface with hourly temporal resolution. Comparison of model results with surface ozone measurements in 14 world regions indicates that levels and seasonality of surface ozone in North America and Europe are characterized well by global models, with annual average biases typically within 5-10 nmol/mol. However, comparison with rather sparse observations over some regions suggests that most models overestimate annual ozone by 15-20 nmol/mol in some locations. Two scenarios from the International Institute for Applied Systems Analysis (IIASA) and one from the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (IPCC SRES) have been implemented in the models. This study focuses on changes in near-surface ozone and their effects on human health and vegetation. Different indices and air quality standards are used to characterise air quality. We show that the calculated changes in the different indices are often closely inter-related. Indices using lower thresholds are more consistent between the models and are recommended for global model analysis. Our analysis indicates that currently about two-thirds of the regions considered do not meet health air quality standards, whereas only 2-4 regions remain below the threshold. Calculated air quality exceedances show moderate deterioration by 2030 if current emissions legislation is followed, and slight improvements if current emissions reduction technology is used optimally. For the "business as usual" scenario, severe air quality problems are predicted. We show that model simulations of air quality indices are particularly sensitive to how well ozone is represented, and improved accuracy is needed for future projections. Additional measurements are needed to allow a more quantitative assessment of the risks to human health and vegetation from changing levels of surface ozone.

  8. WebDISCO: a web service for distributed cox model learning without patient-level data sharing.

    PubMed

    Lu, Chia-Lun; Wang, Shuang; Ji, Zhanglong; Wu, Yuan; Xiong, Li; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2015-11-01

    The Cox proportional hazards model is a widely used method for analyzing survival data. To achieve sufficient statistical power in a survival analysis, it usually requires a large amount of data. Data sharing across institutions could be a potential workaround for providing this added power. The authors develop a web service for distributed Cox model learning (WebDISCO), which focuses on the proof of concept and algorithm development for federated survival analysis. The sensitive patient-level data can be processed locally and only the less-sensitive intermediate statistics are exchanged to build a global Cox model. Mathematical derivation shows that the proposed distributed algorithm is identical to the centralized Cox model. The authors evaluated the proposed framework at the University of California, San Diego (UCSD), Emory, and Duke. The experimental results show that both distributed and centralized models result in near-identical model coefficients, with differences in the range [Formula: see text] to [Formula: see text]. The results confirm the mathematical derivation and show that the implementation of the distributed model can achieve the same results as the centralized implementation. The proposed method serves as a proof of concept, in which a publicly available dataset was used to evaluate the performance. The authors do not intend to suggest that this method can resolve policy and engineering issues related to the federated use of institutional data, but it should serve as evidence of the technical feasibility of the proposed approach. Conclusions: WebDISCO (Web-based Distributed Cox Regression Model; https://webdisco.ucsd-dbmi.org:8443/cox/) provides a proof-of-concept web service that implements a distributed algorithm to conduct distributed survival analysis without sharing patient-level data. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
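
    The core idea, exchanging only aggregated risk-set statistics rather than patient records, can be sketched for the gradient step of a Newton solver. The function names and simplified flow below are ours; the actual WebDISCO service also exchanges Hessian terms and iterates to convergence:

        import numpy as np

        def site_summaries(X, T, E, beta, event_times):
            """What one site shares for a given beta: risk-set sums at the
            global event times, plus covariate sums over its own events."""
            r = np.exp(X @ beta)
            s0 = np.array([r[T >= t].sum() for t in event_times])
            s1 = np.array([(X[T >= t] * r[T >= t, None]).sum(axis=0)
                           for t in event_times])
            x_events = X[E == 1].sum(axis=0)
            d = np.array([((T == t) & (E == 1)).sum() for t in event_times])
            return s0, s1, x_events, d

        def global_gradient(summaries):
            """Server side: pool per-site sums into the gradient of the global
            partial log-likelihood (Breslow handling of ties)."""
            S0 = sum(s[0] for s in summaries)
            S1 = sum(s[1] for s in summaries)
            x_ev = sum(s[2] for s in summaries)
            d = sum(s[3] for s in summaries)
            return x_ev - (d[:, None] * S1 / S0[:, None]).sum(axis=0)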

  9. COBRA-SFS thermal-hydraulic analysis code for spent fuel storage and transportation casks: Models and methods

    DOE PAGES

    Michener, Thomas E.; Rector, David R.; Cuta, Judith M.

    2017-09-01

    COBRA-SFS, a thermal-hydraulics code developed for steady-state and transient analysis of multi-assembly spent-fuel storage and transportation systems, has been incorporated into the Used Nuclear Fuel-Storage, Transportation and Disposal Analysis Resource and Data System tool as a module devoted to spent fuel package thermal analysis. This paper summarizes the basic formulation of the equations and models used in the COBRA-SFS code, showing that COBRA-SFS fully captures the important physical behavior governing the thermal performance of spent fuel storage systems, with internal and external natural convection flow patterns, and heat transfer by convection, conduction, and thermal radiation. Of particular significance is the capability for detailed thermal radiation modeling within the fuel rod array.

  10. COBRA-SFS thermal-hydraulic analysis code for spent fuel storage and transportation casks: Models and methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michener, Thomas E.; Rector, David R.; Cuta, Judith M.

    COBRA-SFS, a thermal-hydraulics code developed for steady-state and transient analysis of multi-assembly spent-fuel storage and transportation systems, has been incorporated into the Used Nuclear Fuel-Storage, Transportation and Disposal Analysis Resource and Data System tool as a module devoted to spent fuel package thermal analysis. This paper summarizes the basic formulation of the equations and models used in the COBRA-SFS code, showing that COBRA-SFS fully captures the important physical behavior governing the thermal performance of spent fuel storage systems, with internal and external natural convection flow patterns, and heat transfer by convection, conduction, and thermal radiation. Of particular significance is the capability for detailed thermal radiation modeling within the fuel rod array.

  11. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    Global Sensitivity Analysis (GSA) helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the GSA methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
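
    A minimal sketch of computing Sobol' indices follows, using the SALib library (an assumption; the study's own tooling is not described) and a toy stand-in for the SAC-SMA model. SALib also provides fast_sampler/fast for the FAST method.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Illustrative SAC-SMA-style parameters and bounds (not the study's setup).
        problem = {
            'num_vars': 3,
            'names': ['UZTWM', 'LZTWM', 'PCTIM'],
            'bounds': [[10, 150], [10, 500], [0.0, 0.1]],
        }

        X = saltelli.sample(problem, 1024)          # Sobol' sampling design
        Y = X[:, 0] + 2 * X[:, 1] * X[:, 2]         # placeholder model output

        Si = sobol.analyze(problem, Y)
        print(Si['S1'], Si['ST'])                   # first-order and total indices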

  12. Two Paradoxes in Linear Regression Analysis.

    PubMed

    Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong

    2016-12-25

    Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.

  13. A hierarchical model for regional analysis of population change using Christmas Bird Count data, with application to the American Black Duck

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.; Niven, D.K.

    2006-01-01

    Analysis of Christmas Bird Count (CBC) data is complicated by the need to account for variation in effort on counts and to provide summaries over large geographic regions. We describe a hierarchical model for analysis of population change using CBC data that addresses these needs. The effect of effort is modeled parametrically, with parameter values varying among strata as identically distributed random effects. Year and site effects are modeled hierarchically, accommodating large regional variation in number of samples and precision of estimates. The resulting model is complex, but a Bayesian analysis can be conducted using Markov chain Monte Carlo techniques. We analyze CBC data for American Black Ducks (Anas rubripes), a species of considerable management interest that has historically been monitored using winter surveys. Over the interval 1966-2003, Black Duck populations showed distinct regional patterns of population change. The patterns shown by CBC data are similar to those shown by the Midwinter Waterfowl Inventory for the United States.
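
    A hierarchical count model of this general shape can be sketched with a probabilistic-programming library such as PyMC (an assumption; the authors used their own MCMC implementation). The toy data, the plain Poisson likelihood, and the log-linear effort term below are deliberate simplifications of the paper's model.

        import numpy as np
        import pymc as pm

        # Toy CBC-style data: counts, party-hours of effort, site and year indices.
        counts = np.array([12, 30, 8, 45])
        effort = np.array([4.0, 10.0, 2.5, 16.0])
        site = np.array([0, 0, 1, 1])
        year = np.array([0, 1, 0, 1])

        with pm.Model():
            trend = pm.Normal('trend', 0, 1)                 # regional change per year
            b_effort = pm.Normal('b_effort', 0, 1)           # parametric effort effect
            site_eff = pm.Normal('site_eff', 0, 1, shape=2)  # site random effects
            log_mu = (site_eff[site] + trend * year
                      + b_effort * np.log(effort))
            pm.Poisson('obs', mu=pm.math.exp(log_mu), observed=counts)
            idata = pm.sample(1000, tune=1000)               # MCMC posterior draws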

  14. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation structure for 3D cities, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. The proposed framework is also easily extensible and supports geo-indexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.

  15. An extended car-following model to describe connected traffic dynamics under cyberattacks

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Yu, Guizhen; Wu, Xinkai; Qin, Hongmao; Wang, Yunpeng

    2018-04-01

    In this paper, the impacts of potential cyberattacks on vehicles are modeled through an extended car-following model. To better understand the mechanism of traffic disturbance under cyberattacks, linear and nonlinear stability analyses are conducted. In particular, linear stability analysis is performed to obtain the neutral stability conditions for various parameters, and nonlinear stability analysis is carried out using the reductive perturbation method to derive the soliton solution of the modified Korteweg-de Vries (mKdV) equation near the critical point, which is used to draw coexisting stability lines. By applying the linear and nonlinear stability analyses, the traffic flow can be divided into three states (stable, metastable, and unstable), which are useful for describing shockwave dynamics and driving behaviors under cyberattacks. The theoretical results show that the proposed car-following model is capable of describing the car-following behavior of connected vehicles under cyberattacks. Finally, numerical simulation using real values confirms the validity of the theoretical analysis. The results further demonstrate that our model can be used to help avoid collisions and relieve traffic congestion under cybersecurity threats.
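
    The flavor of such an analysis can be illustrated with a generic optimal-velocity car-following simulation on a ring road, in which a cyberattack is represented as a bias injected into one vehicle's perceived headway. This is an illustrative stand-in, not the paper's extended model; the OV function, parameters, and attack term are assumptions.

        import numpy as np

        def V(h):                                  # optimal-velocity function
            return np.tanh(h - 2.0) + np.tanh(2.0)

        def simulate(n=20, L=60.0, a=1.0, attack=0.0, steps=5000, dt=0.05):
            """Ring-road OV model; `attack` is a hypothetical bias added to
            the headway perceived by vehicle 0 (falsified sensor/V2V data)."""
            rng = np.random.default_rng(3)
            x = np.linspace(0, L, n, endpoint=False) + 0.1 * rng.standard_normal(n)
            v = np.full(n, V(L / n))
            for _ in range(steps):
                h = (np.roll(x, -1) - x) % L       # headways on the ring
                h_perc = h.copy()
                h_perc[0] += attack                # vehicle 0 under attack
                v += a * (V(h_perc) - v) * dt      # OV acceleration law
                x = (x + v * dt) % L
            return v

        # Velocity spread as a crude disturbance measure, with and without attack.
        print(simulate(attack=0.0).std(), simulate(attack=1.5).std())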

  16. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
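
    The standard tool behind such rankings is the normalized forward sensitivity index, (dR0/dp)(p/R0). A small sketch follows; the R0 expression and parameter values are generic illustrations, not the paper's Lassa model.

        def sensitivity_index(R0, params, name, h=1e-6):
            """Normalized forward sensitivity index (dR0/dp)*(p/R0),
            estimated by a central difference on parameter `name`."""
            base = R0(params)
            p_hi = dict(params); p_hi[name] *= (1 + h)
            p_lo = dict(params); p_lo[name] *= (1 - h)
            dR0 = (R0(p_hi) - R0(p_lo)) / (2 * h * params[name])
            return dR0 * params[name] / base

        # Illustrative R0 for a generic SIR-type model (not the Lassa model).
        R0 = lambda p: p['beta'] / (p['mu'] + p['gamma'])
        params = {'beta': 0.3, 'mu': 0.01, 'gamma': 0.1}
        for k in params:
            print(k, round(sensitivity_index(R0, params, k), 3))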

  17. Nonlinear density wave investigation for an extended car-following model considering driver’s memory and jerk

    NASA Astrophysics Data System (ADS)

    Jin, Zhizhan; Li, Zhipeng; Cheng, Rongjun; Ge, Hongxia

    2018-01-01

    Based on the two velocity difference model (TVDM), an extended car-following model is developed to investigate the effect of driver’s memory and jerk on traffic flow in this paper. By using linear stability analysis, the stability conditions are derived. And through nonlinear analysis, the time-dependent Ginzburg-Landau (TDGL) equation and the modified Korteweg-de Vries (mKdV) equation are obtained, respectively. The mKdV equation is constructed to describe the traffic behavior near the critical point. The evolution of traffic congestion and the corresponding energy consumption are discussed. Numerical simulations show that the improved model is found not only to enhance the stability of traffic flow, but also to depress the energy consumption, which are consistent with the theoretical analysis.

  18. A Model for Simulating the Response of Aluminum Honeycomb Structure to Transverse Loading

    NASA Technical Reports Server (NTRS)

    Ratcliffe, James G.; Czabaj, Michael W.; Jackson, Wade C.

    2012-01-01

    A 1-dimensional material model was developed for simulating the transverse (thickness-direction) loading and unloading response of aluminum honeycomb structure. The model was implemented as a user-defined material subroutine (UMAT) in the commercial finite element analysis code, ABAQUS(Registered TradeMark)/Standard. The UMAT has been applied to analyses for simulating quasi-static indentation tests on aluminum honeycomb-based sandwich plates. Comparison of analysis results with data from these experiments shows overall good agreement. Specifically, analyses of quasi-static indentation tests yielded accurate global specimen responses. Predicted residual indentation was also in reasonable agreement with measured values. Overall, this simple model does not involve a significant computational burden, which makes it more tractable to simulate other damage mechanisms in the same analysis.

  19. Routes to failure: analysis of 41 civil aviation accidents from the Republic of China using the human factors analysis and classification system.

    PubMed

    Li, Wen-Chin; Harris, Don; Yu, Chung-San

    2008-03-01

    The human factors analysis and classification system (HFACS) is based upon Reason's organizational model of human error. HFACS was developed as an analytical framework for the investigation of the role of human error in aviation accidents; however, there is little empirical work formally describing the relationship between the components in the model. This research analyses 41 civil aviation accidents occurring to aircraft registered in the Republic of China (ROC) between 1999 and 2006 using the HFACS framework. The results show statistically significant relationships between errors at the operational level and organizational inadequacies at both the immediately adjacent level (preconditions for unsafe acts) and higher levels in the organization (unsafe supervision and organizational influences). The pattern of the 'routes to failure' observed in the data from this analysis of civil aircraft accidents shows great similarity to that observed in the analysis of military accidents. This research lends further support to Reason's model that suggests that active failures are promoted by latent conditions in the organization. Statistical relationships linking fallible decisions in upper management levels were found to directly affect supervisory practices, thereby creating the psychological preconditions for unsafe acts and hence indirectly impairing the performance of pilots, ultimately leading to accidents.
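
    The association tests behind such results are typically chi-square tests on contingency tables of co-occurring HFACS categories. A minimal sketch follows; the counts are hypothetical, not the paper's data.

        import numpy as np
        from scipy.stats import chi2_contingency

        # Hypothetical 2x2 table: presence/absence of an unsafe-supervision
        # factor versus presence/absence of an operator error across reports.
        table = np.array([[18, 5],
                          [7, 11]])
        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")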

  20. Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Amemiya, Yasuo

    2001-01-01

    Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)

  1. Dynamic fuzzy hierarchy analysis for evaluation of professionalization degree

    NASA Astrophysics Data System (ADS)

    Jin, Lin; Min, Luo; Ma, Jingxi

    2016-06-01

    This paper presents a dynamic fuzzy hierarchy analysis model for evaluating the degree of professionalization, combining dynamic fuzzy theory with the analytic hierarchy process (AHP); the model can show changes and trends in the value of each professionalization index.
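
    The AHP component of such a model reduces to extracting priority weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch with an invented comparison matrix follows; the fuzzy and dynamic extensions are not reproduced here.

        import numpy as np

        # Pairwise comparison matrix for three illustrative indices (Saaty scale).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)                 # principal eigenvalue
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                             # priority weights
        CI = (vals.real[k] - len(A)) / (len(A) - 1)   # consistency index
        print(w, CI)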

  2. Mission Analysis and Design for Space Based Inter-Satellite Laser Power Beaming

    DTIC Science & Technology

    2010-03-01

    (The indexed text for this record consists of table-of-contents fragments rather than an abstract; the recoverable section titles are "Darwin Results," "Obscuration Analysis," "Additional Multi-Dimensional Darwin Plots from ModelCenter," "STK Access Report," and "Darwin Data Explorer Window Showing Optimized Results in Tabular Form.")

  3. Evidence Regarding the Internal Structure: Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Lewis, Todd F.

    2017-01-01

    American Educational Research Association (AERA) standards stipulate that researchers show evidence of the internal structure of instruments. Confirmatory factor analysis (CFA) is one structural equation modeling procedure designed to assess construct validity of assessments that has broad applicability for counselors interested in instrument…

  4. Directional variance adjustment: bias reduction in covariance matrices based on factor analysis with an application to portfolio optimization.

    PubMed

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
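
    The factor-model covariance estimate under study has the form Sigma = B Bᵀ + diag(psi). A minimal sketch follows using scikit-learn's FactorAnalysis on placeholder returns; the DVA correction itself is not reproduced here.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)
        R = rng.standard_normal((500, 20))        # placeholder return matrix

        fa = FactorAnalysis(n_components=3).fit(R)
        Sigma_fa = fa.get_covariance()            # B B^T + diag(psi)
        Sigma_sample = np.cov(R, rowvar=False)

        # The paper's point: both estimators carry systematic spectral bias;
        # a DVA-style correction adjusts variances along factor directions.
        print(np.linalg.eigvalsh(Sigma_fa)[:3])
        print(np.linalg.eigvalsh(Sigma_sample)[:3])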

  5. Directional Variance Adjustment: Bias Reduction in Covariance Matrices Based on Factor Analysis with an Application to Portfolio Optimization

    PubMed Central

    Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W.; Müller, Klaus-Robert; Lemm, Steven

    2013-01-01

    Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation. PMID:23844016

  6. Simulation of raw water and treatment parameters in support of the disinfection by-products regulatory impact analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regli, S.; Cromwell, J.; Mosher, J.

    The U.S. EPA has undertaken an effort to model how the water supply industry may respond to possible rules and how those responses may affect human health risk. The model is referred to as the Disinfection By-Product Regulatory Analysis Model (DBPRAM). The paper is concerned primarily with presenting and discussing the methods, underlying data, assumptions, limitations and results for the first part of the model. This part of the model shows the creation of sets of simulated water supplies that are representative of the conditions currently encountered by public water supplies with respect to certain raw water quality and water treatment characteristics.

  7. TED analysis of the Si(113) surface structure

    NASA Astrophysics Data System (ADS)

    Suzuki, T.; Minoda, H.; Tanishiro, Y.; Yagi, K.

    1999-09-01

    We carried out a TED (transmission electron diffraction) analysis of the Si(113) surface structure. The TED patterns taken at room temperature showed reflections due to the 3×2 reconstructed structure. The TED pattern indicated that a glide plane parallel to the <332> direction suggested in some models is excluded. We calculated the R-factors (reliability factors) for six surface structure models proposed previously. All structure models with energy-optimized atomic positions have large R-factors. After revision of the atomic positions, the R-factors of all the structure models decreased below 0.3, and the revised version of Dabrowski's 3×2 model has the smallest R-factor of 0.17.

  8. Mathematical analysis of tuberculosis transmission model with delay

    NASA Astrophysics Data System (ADS)

    Lapaan, R. D.; Collera, J. A.; Addawe, J. M.

    2016-11-01

    In this paper, a delayed tuberculosis infection model is formulated and investigated. We show the existence of the disease-free and endemic equilibrium points. We use the LaSalle-Lyapunov invariance principle to show that if the reproduction number R0 < 1, the disease-free equilibrium of the model is globally asymptotically stable. Numerical simulations are then performed to illustrate the existence of the disease-free equilibrium and the endemic equilibrium point for a given value of R0. Thus, when R0 < 1, the disease dies out in the population.

  9. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    NASA Astrophysics Data System (ADS)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has elaborated spatial descriptions and hydrological behaviors. Meanwhile, this trend is accompanied by increasing model complexity and numbers of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), a Monte Carlo method coupled with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that use iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
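
    A minimal GLUE loop of the kind described, with the sampler left pluggable (uniform Monte Carlo, GA, DE, SCE-UA, ...), might look as follows. The toy model and the NSE-based likelihood with a fixed behavioral threshold are illustrative assumptions; a full implementation would weight the predictive bounds by likelihood.

        import numpy as np

        def glue_bounds(model, observed, sample_theta, n=2000, threshold=0.5):
            """Keep parameter sets whose Nash-Sutcliffe efficiency exceeds
            `threshold` as behavioral; their simulations give 5-95% bounds."""
            denom = np.sum((observed - observed.mean()) ** 2)
            behavioral = []
            for _ in range(n):
                sim = model(sample_theta())
                nse = 1.0 - np.sum((sim - observed) ** 2) / denom
                if nse > threshold:
                    behavioral.append(sim)
            sims = np.asarray(behavioral)
            return np.percentile(sims, 5, axis=0), np.percentile(sims, 95, axis=0)

        # Toy usage: a one-parameter sine model against noisy observations.
        t = np.linspace(0, 6, 50)
        obs = np.sin(t) + 0.1 * np.random.randn(50)
        lo, hi = glue_bounds(lambda th: th[0] * np.sin(t), obs,
                             lambda: np.random.uniform(0.5, 1.5, size=1))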

  10. The role of building models in the evaluation of heat-related risks

    NASA Astrophysics Data System (ADS)

    Buchin, Oliver; Jänicke, Britta; Meier, Fred; Scherer, Dieter; Ziegler, Felix

    2016-04-01

    Hazard-risk relationships in epidemiological studies are generally based on the outdoor climate, despite the fact that most of humans' lifetime is spent indoors. By coupling indoor and outdoor climates with a building model, the risk concept developed can still be based on the outdoor conditions but also includes exposure to the indoor climate. The influence of non-linear building physics and the impact of air conditioning on heat-related risks can be assessed in a plausible manner using this risk concept. For proof of concept, the proposed risk concept is compared to a traditional risk analysis. As an example, daily and city-wide mortality data of the age group 65 and older in Berlin, Germany, for the years 2001-2010 are used. Four building models with differing complexity are applied in a time-series regression analysis. This study shows that indoor hazard better explains the variability in the risk data compared to outdoor hazard, depending on the kind of building model. Simplified parameter models include the main non-linear effects and are proposed for the time-series analysis. The concept shows that the definitions of heat events, lag days, and acclimatization in a traditional hazard-risk relationship are influenced by the characteristics of the prevailing building stock.

  11. Numerical analysis of the formation process of aerosols in the alveoli

    NASA Astrophysics Data System (ADS)

    Haslbeck, Karsten; Seume, Jörg R.

    2008-11-01

    For a successful diagnosis of lung diseases through an analysis of non-volatile molecules in the exhaled breath, an exact understanding of the aerosol formation process is required. This process is modeled using Computational Fluid Dynamics (CFD). The model shows the interaction of the boundary surface between the streamed airway and the local epithelial liquid layer. A 2-D volume mesh of an alveolus is generated by taking into account the connection of the alveoli with the sacculi alveolares (SA). The Volume of Fluid (VOF) Method is used to model the interface between the gas and the liquid film. The non-Newtonian flow is modeled by the implementation of the Ostwald de Waele model. Surface tension is a function of the surfactant concentration. The VOF-Method allows the distribution of the concentration of the epithelial liquid layer at the surface to be traced in a transient manner. The simulations show the rupturing of the liquid film through the drop formation. Aerosol particles are ejected into the SA and do not collide with the walls. The quantity, the geometrical size as well as the velocity distributions of the generated aerosols are determined. The data presented in the paper provide the boundary conditions for future CFD analysis of the aerosol transport through the airways up to exhalation.

  12. An engineering and economic evaluation of quick germ-quick fiber process for dry-grind ethanol facilities: analysis.

    PubMed

    Rodríguez, Luis F; Li, Changying; Khanna, Madhu; Spaulding, Aslihan D; Lin, Tao; Eckhoff, Steven R

    2010-07-01

    An engineering economic model, which is mass balanced and compositionally driven, was developed to compare the conventional corn dry-grind process and the pre-fractionation process called quick germ-quick fiber (QQ). In this model, documented in a companion article, the distillers dried grains with solubles (DDGS) price was linked with its protein and fiber content as well as with the long-term average relationship with the corn price. The detailed economic analysis showed that a QQ plant retrofitted from a conventional dry-grind ethanol plant reduces the manufacturing cost of ethanol by 13.5 cents/gallon and has a net present value nearly $4 million greater than the conventional dry-grind plant at an interest rate of 4% over 15 years. Ethanol and feedstock price sensitivity analysis showed that the QQ plant gains more profit when the ethanol price increases than the conventional dry-grind ethanol plant. An optimistic analysis of the QQ process suggests that the greater value of the modified DDGS would provide greater resistance to fluctuations in corn price for QQ facilities. This model can be used to provide decision support for ethanol producers.
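
    The net-present-value comparison at the heart of such an analysis is straightforward discounted-cash-flow arithmetic, sketched below with invented numbers (not the paper's data).

        def npv(rate, cashflows):
            """Net present value of a cash-flow series; cashflows[0] is year 0."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

        # Hypothetical figures: a retrofit cost in year 0, then 15 years of
        # incremental annual profit relative to the conventional plant.
        qq_premium = npv(0.04, [-6e6] + [0.9e6] * 15)
        print(f"Incremental NPV of the QQ retrofit: ${qq_premium / 1e6:.2f} M")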

  13. Automated Vocal Analysis of Children with Hearing Loss and Their Typical and Atypical Peers

    PubMed Central

    VanDam, Mark; Oller, D. Kimbrough; Ambrose, Sophie E.; Gray, Sharmistha; Richards, Jeffrey A.; Xu, Dongxin; Gilkerson, Jill; Silbert, Noah H.; Moeller, Mary Pat

    2014-01-01

    Objectives: This study investigated automatic assessment of vocal development in children with hearing loss as compared with children who are typically developing, have language delays, and autism spectrum disorder. Statistical models are examined for performance in a classification model and to predict age within the four groups of children. Design: The vocal analysis system analyzed over 1900 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers comprising children who were typically developing, hard of hearing, language delayed, or autistic. Results: Samples from children who were hard-of-hearing patterned more similarly to those of typically-developing children than to the language-delayed or autistic samples. The statistical models were able to classify children from the four groups examined and estimate developmental age based on automated vocal analysis. Conclusions: This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention. PMID:25587667

  14. Complexity analysis based on generalized deviation for financial markets

    NASA Astrophysics Data System (ADS)

    Li, Chao; Shang, Pengjian

    2018-03-01

    In this paper, a new modified method, complexity analysis based on generalized deviation, is proposed as a measure of the correlation between past price and future volatility in financial time series. In comparison with the earlier retarded volatility model, the new approach is both simple and computationally efficient. The method, based on the generalized deviation function, provides an exhaustive way of quantifying the rules of the financial market. The robustness of the method is verified by numerical experiments with both artificial and financial time series. Results show that the generalized deviation complexity analysis method not only identifies the volatility of financial time series, but also provides a comprehensive way of distinguishing the characteristics of stock indices from those of individual stocks. Exponential functions can be used to fit the volatility curves and quantify the changes in complexity for stock market data. We then study the influence of the negative domain of the deviation coefficient and the differences between volatile and calm periods. From the analysis of the experimental model, we find that the generalized deviation model has definite advantages in exploring the relationship between historical returns and future volatility.

  15. Effects of exposure to malathion on blood glucose concentration: a meta-analysis.

    PubMed

    Ramirez-Vargas, Marco Antonio; Flores-Alfaro, Eugenia; Uriostegui-Acosta, Mayrut; Alvarez-Fitz, Patricia; Parra-Rojas, Isela; Moreno-Godinez, Ma Elena

    2018-02-01

    Exposure to malathion (an organophosphate pesticide widely used around the world) has been associated with alterations in blood glucose concentration in animal models. However, the results are inconsistent. The aim of this meta-analysis was to evaluate whether malathion exposure can disturb the concentrations of blood glucose in exposed rats. We performed a literature search of online databases including PubMed, EBSCO, and Google Scholar and reviewed original articles that analyzed the relation between malathion exposure and glucose levels in animal models. The selection of articles was based on inclusion and exclusion criteria. The database search identified thirty-five possible articles, but only eight fulfilled our inclusion criteria, and these studies were included in the meta-analysis. The effect of malathion on blood glucose concentration showed a non-monotonic dose-response curve. In addition, pooled analysis showed that blood glucose concentrations were 3.3-fold higher in exposed rats than in the control group (95% CI, 2-5; Z = 3.9; p < 0.0001) in a random-effect model. This result suggested that alteration of glucose homeostasis is a possible mechanism of toxicity associated with exposure to malathion.
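
    The pooling step in such a meta-analysis is commonly a DerSimonian-Laird random-effects model. A minimal sketch follows; the study effects and variances are invented for illustration, not the paper's data.

        import numpy as np

        def dersimonian_laird(effects, variances):
            """Random-effects pooling of study effect sizes (DerSimonian-Laird)."""
            y, v = np.asarray(effects, float), np.asarray(variances, float)
            w = 1 / v
            y_fixed = np.sum(w * y) / np.sum(w)
            Q = np.sum(w * (y - y_fixed) ** 2)            # heterogeneity statistic
            tau2 = max(0.0, (Q - (len(y) - 1)) /
                       (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
            w_star = 1 / (v + tau2)                       # random-effects weights
            pooled = np.sum(w_star * y) / np.sum(w_star)
            se = np.sqrt(1 / np.sum(w_star))
            return pooled, pooled - 1.96 * se, pooled + 1.96 * se

        # Illustrative effects and within-study variances from 8 studies.
        print(dersimonian_laird([1.2, 0.9, 1.5, 1.1, 0.7, 1.3, 1.0, 1.4],
                                [0.10, 0.15, 0.20, 0.08, 0.12, 0.18, 0.09, 0.25]))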

  16. [Analyzing and modeling methods of near infrared spectroscopy for in-situ prediction of oil yield from oil shale].

    PubMed

    Liu, Jie; Zhang, Fu-Dong; Teng, Fei; Li, Jun; Wang, Zhi-Hong

    2014-10-01

    In order to detect the oil yield of oil shale in situ using portable near infrared spectroscopy, modeling and analysis methods were investigated with 66 rock core samples from well No. 2 of the Fuyu oil shale base in Jilin. With the developed portable spectrometer, spectra in three data formats (reflectance, absorbance, and the K-M function) were acquired. Four modeling-data optimization methods were applied: principal component analysis-Mahalanobis distance (PCA-MD) for eliminating abnormal samples, uninformative variable elimination (UVE) for wavelength selection, and their combinations PCA-MD + UVE and UVE + PCA-MD. Two modeling methods, partial least squares (PLS) and back-propagation artificial neural network (BPANN), were compared under the same data pre-processing to determine the optimal analysis model and method. The results show that the data format, the modeling-data optimization method, and the modeling method all affect the analysis precision of the model. Whether or not an optimization method is used, reflectance or the K-M function is the proper spectrum format of the modeling database for both modeling methods. Using the two modeling methods and the four data optimization methods, the model precisions of the same modeling database differ. For the PLS modeling method, the PCA-MD and UVE + PCA-MD optimization methods can improve the modeling precision of a database using the K-M function spectrum format. For the BPANN modeling method, the UVE, UVE + PCA-MD, and PCA-MD + UVE optimization methods can improve the modeling precision of a database using any of the three spectrum formats. Apart from the case of reflectance spectra with the PCA-MD optimization method, modeling precision with BPANN is better than with PLS. Modeling with reflectance spectra, the UVE optimization method, and the BPANN modeling method yields the highest analysis precision, with a correlation coefficient (Rp) of 0.92 and a standard error of prediction (SEP) of 0.69%.
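
    The quantitative modeling step, PLS regression with an SEP computed on held-out samples, can be sketched as follows with placeholder spectra; the GA/UVE variable-selection step is omitted.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        X = rng.standard_normal((66, 256))                  # placeholder NIR spectra
        y = 3 * X[:, 40] + 0.5 * rng.standard_normal(66)    # placeholder oil yield

        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1)
        pls = PLSRegression(n_components=8).fit(Xtr, ytr)
        pred = pls.predict(Xte).ravel()
        sep = np.sqrt(np.mean((pred - yte) ** 2))           # standard error of prediction
        rp = np.corrcoef(pred, yte)[0, 1]                   # correlation coefficient
        print(f"Rp={rp:.2f}, SEP={sep:.3f}")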

  17. How to interpret the results of medical time series data analysis: Classical statistical approaches versus dynamic Bayesian network modeling.

    PubMed

    Onisko, Agnieszka; Druzdzel, Marek J; Austin, R Marshall

    2016-01-01

    Classical statistics is a well-established approach in the analysis of medical data. While the medical community seems to be familiar with the concept of a statistical analysis and its interpretation, the Bayesian approach, argued by many of its proponents to be superior to the classical frequentist approach, is still not well-recognized in the analysis of medical data. The goal of this study is to encourage data analysts to use the Bayesian approach, such as modeling with graphical probabilistic networks, as an insightful alternative to classical statistical analysis of medical data. This paper offers a comparison of two approaches to analysis of medical time series data: (1) classical statistical approaches, such as the Kaplan-Meier estimator and the Cox proportional hazards regression model, and (2) dynamic Bayesian network modeling. Our comparison is based on time series cervical cancer screening data collected at Magee-Womens Hospital, University of Pittsburgh Medical Center over 10 years. The main outcomes of our comparison are cervical cancer risk assessments produced by the three approaches. However, our analysis also discusses several aspects of the comparison, such as modeling assumptions, model building, dealing with incomplete data, individualized risk assessment, results interpretation, and model validation. Our study shows that the Bayesian approach (1) is much more flexible in terms of modeling effort, and (2) offers an individualized risk assessment, which is more cumbersome for classical statistical approaches.

  18. Can Optimism, Pessimism, Hope, Treatment Credibility and Treatment Expectancy Be Distinguished in Patients Undergoing Total Hip and Total Knee Arthroplasty?

    PubMed Central

    Haanstra, Tsjitske M.; Tilbury, Claire; Kamper, Steven J.; Tordoir, Rutger L.; Vliet Vlieland, Thea P. M.; Nelissen, Rob G. H. H.; Cuijpers, Pim; de Vet, Henrica C. W.; Dekker, Joost; Knol, Dirk L.; Ostelo, Raymond W.

    2015-01-01

    Objectives: The constructs optimism, pessimism, hope, treatment credibility and treatment expectancy are associated with outcomes of medical treatment. While these constructs are grounded in different theoretical models, they nonetheless show some conceptual overlap. The purpose of this study was to examine whether currently available measurement instruments for these constructs capture the conceptual differences between these constructs within a treatment setting. Methods: Patients undergoing Total Hip and Total Knee Arthroplasty (THA and TKA) (Total N = 361; 182 THA; 179 TKA) completed the Life Orientation Test-Revised for optimism and pessimism, the Hope Scale, and the Credibility Expectancy Questionnaire for treatment credibility and treatment expectancy. Confirmatory factor analysis was used to examine whether the instruments measure distinct constructs. Four theory-driven models with one, two, four and five latent factors were evaluated using multiple fit indices and Δχ2 tests, followed by some posthoc models. Results: The results of the theory-driven confirmatory factor analysis showed that a five-factor model in which all constructs loaded on separate factors yielded the optimal and satisfactory fit. Posthoc, a bifactor model in which (besides the 5 separate factors) a general factor is hypothesized accounting for the commonality of the items showed a significantly better fit than the five-factor model. All specific factors, except for the hope factor, were shown to explain a substantial amount of variance beyond the general factor. Conclusion: Based on our primary analyses we conclude that optimism, pessimism, hope, treatment credibility and treatment expectancy are distinguishable in THA and TKA patients. Posthoc, we determined that all constructs, except hope, showed substantial specific variance, while also sharing some general variance. PMID:26214176

  19. Can Optimism, Pessimism, Hope, Treatment Credibility and Treatment Expectancy Be Distinguished in Patients Undergoing Total Hip and Total Knee Arthroplasty?

    PubMed

    Haanstra, Tsjitske M; Tilbury, Claire; Kamper, Steven J; Tordoir, Rutger L; Vliet Vlieland, Thea P M; Nelissen, Rob G H H; Cuijpers, Pim; de Vet, Henrica C W; Dekker, Joost; Knol, Dirk L; Ostelo, Raymond W

    2015-01-01

    The constructs optimism, pessimism, hope, treatment credibility and treatment expectancy are associated with outcomes of medical treatment. While these constructs are grounded in different theoretical models, they nonetheless show some conceptual overlap. The purpose of this study was to examine whether currently available measurement instruments for these constructs capture the conceptual differences between these constructs within a treatment setting. Patients undergoing Total Hip and Total Knee Arthroplasty (THA and TKA) (Total N = 361; 182 THA; 179 TKA) completed the Life Orientation Test-Revised for optimism and pessimism, the Hope Scale, and the Credibility Expectancy Questionnaire for treatment credibility and treatment expectancy. Confirmatory factor analysis was used to examine whether the instruments measure distinct constructs. Four theory-driven models with one, two, four and five latent factors were evaluated using multiple fit indices and Δχ2 tests, followed by some posthoc models. The results of the theory-driven confirmatory factor analysis showed that a five-factor model in which all constructs loaded on separate factors yielded the optimal and satisfactory fit. Posthoc, a bifactor model in which (besides the 5 separate factors) a general factor is hypothesized accounting for the commonality of the items showed a significantly better fit than the five-factor model. All specific factors, except for the hope factor, were shown to explain a substantial amount of variance beyond the general factor. Based on our primary analyses we conclude that optimism, pessimism, hope, treatment credibility and treatment expectancy are distinguishable in THA and TKA patients. Posthoc, we determined that all constructs, except hope, showed substantial specific variance, while also sharing some general variance.

  20. Discrete time modeling and stability analysis of TCP Vegas

    NASA Astrophysics Data System (ADS)

    You, Byungyong; Koo, Kyungmo; Lee, Jin S.

    2007-12-01

    This paper presents an analysis method for a TCP Vegas network model with a single link and a single source. Previous papers showed global stability for several network models, but those models are not dual problems in which dynamics exist in both sources and links, as in TCP Vegas. Other papers studied TCP Vegas as a dual problem but did not fully derive an asymptotic stability region. We therefore analyze TCP Vegas with Jury's criterion, which provides a necessary and sufficient condition for discrete-time stability. Using a discrete-time state-space model and Jury's criterion, we find an asymptotic stability region of the TCP Vegas network model. This result is verified by ns-2 simulation, and comparison with other results shows that our method performs well.
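
    Jury's criterion tests, algebraically, whether all roots of a discrete-time characteristic polynomial lie strictly inside the unit circle. A numerical equivalent of that check is sketched below; the example polynomials are illustrative, not the paper's TCP Vegas model.

        import numpy as np

        def schur_stable(coeffs):
            """A discrete-time LTI system is asymptotically stable iff every
            root of its characteristic polynomial has modulus < 1, which is
            exactly the condition Jury's criterion verifies algebraically."""
            return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

        # Characteristic polynomials z^2 + a1*z + a0 of illustrative
        # linearized source-link models:
        print(schur_stable([1.0, -1.2, 0.5]))   # True: roots inside unit circle
        print(schur_stable([1.0, -2.1, 1.2]))   # False: a root outside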

  1. Portable Life Support Subsystem Thermal Hydraulic Performance Analysis

    NASA Technical Reports Server (NTRS)

    Barnes, Bruce; Pinckney, John; Conger, Bruce

    2010-01-01

    This paper presents the current state of the thermal hydraulic modeling efforts being conducted for the Constellation Space Suit Element (CSSE) Portable Life Support Subsystem (PLSS). The goal of these efforts is to provide realistic simulations of the PLSS under various modes of operation. The PLSS thermal hydraulic model simulates the thermal, pressure, flow characteristics, and human thermal comfort related to the PLSS performance. This paper presents modeling approaches and assumptions as well as component model descriptions. Results from the models are presented that show PLSS operations at steady-state and transient conditions. Finally, conclusions and recommendations are offered that summarize results, identify PLSS design weaknesses uncovered during review of the analysis results, and propose areas for improvement to increase model fidelity and accuracy.

  2. Modeling polyvinyl chloride Plasma Modification by Neural Networks

    NASA Astrophysics Data System (ADS)

    Wang, Changquan

    2018-03-01

    Neural network models were constructed to analyze the connection between dielectric barrier discharge parameters and the surface properties of the material. The experimental data were generated from polyvinyl chloride plasma modification using a uniform design. Discharge voltage, discharge gas gap and treatment time were used as the input-layer parameters of the neural network, and the measured values of contact angle were used as the output-layer parameter. A nonlinear mathematical model of the surface modification of polyvinyl chloride was developed based on the neural networks. The optimum model parameters were obtained by simulation evaluation and error analysis. The results of the optimal model show that the predicted values are very close to the actual test values. The prediction model obtained here is useful for the analysis of discharge plasma surface modification.
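
    A small feed-forward network mapping the three discharge parameters to contact angle can be sketched with scikit-learn (an assumption; the paper's network architecture is not specified here). The uniform-design data below are placeholders, not the experimental values.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Placeholder uniform-design table: [voltage (kV), gas gap (mm), time (s)].
        X = np.array([[8, 2, 30], [10, 2, 60], [12, 3, 30], [14, 3, 90],
                      [8, 4, 90], [10, 4, 120], [12, 2, 120], [14, 4, 60]], float)
        y = np.array([72, 65, 60, 48, 70, 58, 52, 50], float)  # contact angle (deg)

        net = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000,
                           random_state=0).fit(X, y)
        print(net.predict([[11, 3, 75]]))       # predicted contact angle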

  3. Multimodal Image Analysis in Alzheimer’s Disease via Statistical Modelling of Non-local Intensity Correlations

    NASA Astrophysics Data System (ADS)

    Lorenzi, Marco; Simpson, Ivor J.; Mendelson, Alex F.; Vos, Sjoerd B.; Cardoso, M. Jorge; Modat, Marc; Schott, Jonathan M.; Ourselin, Sebastien

    2016-04-01

    The joint analysis of brain atrophy measured with magnetic resonance imaging (MRI) and hypometabolism measured with positron emission tomography with fluorodeoxyglucose (FDG-PET) is of primary importance in developing models of pathological changes in Alzheimer’s disease (AD). Most of the current multimodal analyses in AD assume a local (spatially overlapping) relationship between MR and FDG-PET intensities. However, it is well known that atrophy and hypometabolism are prominent in different anatomical areas. The aim of this work is to describe the relationship between atrophy and hypometabolism by means of a data-driven statistical model of non-overlapping intensity correlations. For this purpose, FDG-PET and MRI signals are jointly analyzed through a computationally tractable formulation of partial least squares regression (PLSR). The PLSR model is estimated and validated on a large clinical cohort of 1049 individuals from the ADNI dataset. Results show that the proposed non-local analysis outperforms classical local approaches in terms of predictive accuracy while providing a plausible description of disease dynamics: early AD is characterised by non-overlapping temporal atrophy and temporo-parietal hypometabolism, while the later disease stages show overlapping brain atrophy and hypometabolism spread in temporal, parietal and cortical areas.

  4. Analysis and compensation for the effect of the catheter position on image intensities in intravascular optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Liu, Shengnan; Eggermont, Jeroen; Wolterbeek, Ron; Broersen, Alexander; Busk, Carol A. G. R.; Precht, Helle; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2016-12-01

    Intravascular optical coherence tomography (IVOCT) is an imaging technique that is used to analyze the underlying cause of cardiovascular disease. Because a catheter is used during imaging, the intensities can be affected by the catheter position. This work aims to analyze the effect of the catheter position on IVOCT image intensities and to propose a compensation method to minimize this effect in order to improve the visualization and the automatic analysis of IVOCT images. The effect of catheter position is modeled with respect to the distance between the catheter and the arterial wall (distance-dependent factor) and the incident angle onto the arterial wall (angle-dependent factor). A light transmission model incorporating both factors is introduced. On the basis of this model, the interaction effect of both factors is estimated with a hierarchical multivariant linear regression model. Statistical analysis shows that IVOCT intensities are significantly affected by both factors with p<0.001, as either aspect increases the intensity decreases. This effect differs for different pullbacks. The regression results were used to compensate for this effect. Experiments show that the proposed compensation method can improve the performance of the automatic bioresorbable vascular scaffold strut detection.

  5. Estimation of real-time runway surface contamination using flight data recorder parameters

    NASA Astrophysics Data System (ADS)

    Curry, Donovan

    Within this research effort, the development of an analytic process for friction coefficient estimation is presented. Under static equilibrium, the sum of forces and moments acting on the aircraft, in the aircraft body coordinate system, while on the ground at any instant is equal to zero. Under this premise the longitudinal, lateral and normal forces due to landing are calculated, along with the individual deceleration components present when an aircraft comes to rest during ground roll. In order to validate this hypothesis, a six-degree-of-freedom aircraft model was created and landing tests were simulated on different surfaces. The simulated aircraft model includes a high-fidelity aerodynamic model, thrust model, landing gear model, friction model and antiskid model. Three main surfaces were defined in the friction model: dry, wet and snow/ice. Only the parameters recorded by an FDR are used directly from the aircraft model; all others are estimated or known a priori. The estimation of unknown parameters is also presented in this research effort. With all needed parameters, a comparison and validation with simulated and estimated data, under different runway conditions, is performed. Finally, this report presents the results of a sensitivity analysis in order to provide a measure of the reliability of the analytic estimation process. Linear and non-linear sensitivity analyses were performed to quantify the level of uncertainty implicit in modeling estimated parameters and how they can affect the calculation of the instantaneous coefficient of friction. Using the approach of force and moment equilibrium about the CG at landing to reconstruct the instantaneous coefficient of friction appears to give a reasonably accurate estimate when compared to the simulated friction coefficient. This remains true when the FDR and estimated parameters are subjected to white noise and when crosswind is introduced to the simulation. The linear analysis shows that the minimum frequency at which the algorithm still provides moderately accurate data is 2 Hz. In addition, the linear analysis shows that, with estimated parameters increased and decreased by up to 25% at random, high-priority parameters must be accurate to within at least +/-5% to produce less than a 1% change in the average coefficient of friction. Non-linear analysis results show that the algorithm can be considered reasonably accurate for all simulated cases when inaccuracies in the estimated parameters vary randomly and simultaneously by up to +/-27%. In the worst case, the maximum percentage change in the average coefficient of friction is less than 10% for all surfaces.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faranda, Davide, E-mail: davide.faranda@cea.fr; Dubrulle, Bérengère; Daviaud, François

    We introduce a novel way to extract information from turbulent datasets by applying an Auto Regressive Moving Average (ARMA) statistical analysis. Such analysis goes well beyond the analysis of the mean flow and of the fluctuations and links the behavior of the recorded time series to a discrete version of a stochastic differential equation which is able to describe the correlation structure in the dataset. We introduce a new index Υ that measures the difference between the resulting analysis and the Obukhov model of turbulence, the simplest stochastic model reproducing both Richardson law and the Kolmogorov spectrum. We test the method on datasets measured in a von Kármán swirling flow experiment. We found that the ARMA analysis is well correlated with spatial structures of the flow, and can discriminate between two different flows with comparable mean velocities, obtained by changing the forcing. Moreover, we show that Υ is highest in regions where shear layer vortices are present, thereby establishing a link between deviations from the Kolmogorov model and coherent structures. These deviations are consistent with the ones observed by computing the Hurst exponents for the same time series. We show that some salient features of the analysis are preserved when considering global instead of local observables. Finally, we analyze flow configurations with multistability features where the ARMA technique is efficient in discriminating different stability branches of the system.
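
    Fitting an ARMA model to a recorded time series, the first step of the analysis before computing the proposed Υ index, can be sketched with statsmodels; the AR(1) signal below is a placeholder for a measured velocity record.

        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(2)
        u = np.zeros(2000)
        for t in range(1, 2000):                # placeholder AR(1) velocity signal
            u[t] = 0.7 * u[t - 1] + rng.standard_normal()

        res = ARIMA(u, order=(1, 0, 1)).fit()   # fit an ARMA(1,1) model
        print(res.params)                       # fitted AR/MA coefficients
        print(res.aic)                          # model-selection score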

  7. Idiopathic Pulmonary Fibrosis: Data-driven Textural Analysis of Extent of Fibrosis at Baseline and 15-Month Follow-up.

    PubMed

    Humphries, Stephen M; Yagihashi, Kunihiro; Huckleberry, Jason; Rho, Byung-Hak; Schroeder, Joyce D; Strand, Matthew; Schwarz, Marvin I; Flaherty, Kevin R; Kazerooni, Ella A; van Beek, Edwin J R; Lynch, David A

    2017-10-01

    Purpose: To evaluate associations between pulmonary function and both quantitative analysis and visual assessment of thin-section computed tomography (CT) images at baseline and at 15-month follow-up in subjects with idiopathic pulmonary fibrosis (IPF). Materials and Methods: This retrospective analysis of preexisting anonymized data, collected prospectively between 2007 and 2013 in a HIPAA-compliant study, was exempt from additional institutional review board approval. The extent of lung fibrosis at baseline inspiratory chest CT in 280 subjects enrolled in the IPF Network was evaluated. Visual analysis was performed by using a semiquantitative scoring system. Computer-based quantitative analysis included CT histogram-based measurements and a data-driven textural analysis (DTA). Follow-up CT images in 72 of these subjects were also analyzed. Univariate comparisons were performed by using Spearman rank correlation. Multivariate and longitudinal analyses were performed by using a linear mixed model approach, in which models were compared by using asymptotic χ2 tests. Results: At baseline, all CT-derived measures showed moderate significant correlation (P < .001) with pulmonary function. At follow-up CT, changes in DTA scores showed significant correlation with changes in both forced vital capacity percentage predicted (ρ = -0.41, P < .001) and diffusing capacity for carbon monoxide percentage predicted (ρ = -0.40, P < .001). Asymptotic χ2 tests showed that inclusion of DTA score significantly improved fit of both baseline and longitudinal linear mixed models in the prediction of pulmonary function (P < .001 for both). Conclusion: When compared with semiquantitative visual assessment and CT histogram-based measurements, DTA score provides additional information that can be used to predict diminished function. Automatic quantification of lung fibrosis at CT yields an index of severity that correlates with visual assessment and functional change in subjects with IPF.

  8. Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion

    PubMed Central

    Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.

    2016-01-01

    Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391
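
    The classical entropy-based separation measure for a fitted latent class model, which criteria of this kind build on, can be computed from the posterior class-membership matrix. A minimal sketch follows; the posterior values are invented for illustration.

        import numpy as np

        def relative_entropy(posteriors):
            """Entropy-based separation measure for a fitted latent class model.
            `posteriors` is an (n, K) matrix of class-membership probabilities;
            values near 1 indicate well-separated classes."""
            P = np.clip(posteriors, 1e-12, 1.0)
            n, K = P.shape
            E = -np.sum(P * np.log(P))           # total classification entropy
            return 1 - E / (n * np.log(K))

        P = np.array([[0.95, 0.05], [0.90, 0.10], [0.15, 0.85], [0.02, 0.98]])
        print(round(relative_entropy(P), 3))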

  9. Failure analysis of energy storage spring in automobile composite brake chamber

    NASA Astrophysics Data System (ADS)

    Luo, Zai; Wei, Qing; Hu, Xiaofeng

    2015-02-01

    This paper takes the energy storage spring of the parking brake cavity, part of an automobile composite brake chamber, as its research object. A fault tree model of energy-storage-spring-induced parking brake failure is constructed based on the fault tree analysis method. Next, the parking brake failure model of the energy storage spring is established by analyzing the working principle of the composite brake chamber. Finally, working-load and push-rod-stroke data measured on a comprehensive valve test bed are used to validate the failure model. The experimental results show that the failure model can distinguish whether the energy storage spring has failed.
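
    Once the fault tree is specified, the top-event probability follows from AND/OR gate algebra over the basic-event probabilities. A minimal sketch with hypothetical basic events follows; the event list and probabilities are not the paper's.

        def or_gate(probs):
            """Probability that at least one independent basic event occurs."""
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p

        def and_gate(probs):
            """Probability that all independent basic events occur."""
            p = 1.0
            for q in probs:
                p *= q
            return p

        # Hypothetical basic events for spring-related parking brake failure.
        fatigue, corrosion, overload = 1e-4, 5e-5, 2e-5
        top = or_gate([fatigue, and_gate([corrosion, overload])])
        print(f"Top-event probability: {top:.2e}")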

  10. Uncertainty analysis of hydrological modeling in a tropical area using different algorithms

    NASA Astrophysics Data System (ADS)

    Rafiei Emam, Ammar; Kappas, Martin; Fassnacht, Steven; Linh, Nguyen Hoang Khanh

    2018-01-01

    Hydrological modeling outputs are subject to uncertainty resulting from different sources of errors (e.g., error in input data, model structure, and model parameters), making quantification of uncertainty in hydrological modeling imperative and meant to improve reliability of modeling results. The uncertainty analysis must solve difficulties in calibration of hydrological models, which further increase in areas with data scarcity. The purpose of this study is to apply four uncertainty analysis algorithms to a semi-distributed hydrological model, quantifying different sources of uncertainty (especially parameter uncertainty) and evaluating their performance. In this study, the Soil and Water Assessment Tool (SWAT) eco-hydrological model was implemented for a watershed in the center of Vietnam. The sensitivity of parameters was analyzed, and the model was calibrated. The uncertainty analysis for the hydrological model was conducted based on four algorithms: Generalized Likelihood Uncertainty Estimation (GLUE), Sequential Uncertainty Fitting (SUFI), the Parameter Solution method (ParaSol) and Particle Swarm Optimization (PSO). The performance of the algorithms was compared using the P-factor and R-factor, coefficient of determination (R2), the Nash-Sutcliffe coefficient of efficiency (NSE) and Percent Bias (PBIAS). The results showed the high performance of SUFI and PSO, with P-factor > 0.83, R-factor < 0.56 and R2 > 0.91, NSE > 0.89, and 0.18
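
    The P-factor and R-factor used to compare the algorithms are simple functions of the predictive uncertainty band and the observations, sketched below with invented values.

        import numpy as np

        def p_and_r_factor(obs, lower, upper):
            """P-factor: fraction of observations inside the 95PPU band.
            R-factor: mean band width divided by the std. dev. of observations."""
            obs, lower, upper = map(np.asarray, (obs, lower, upper))
            p = np.mean((obs >= lower) & (obs <= upper))
            r = np.mean(upper - lower) / np.std(obs)
            return p, r

        # Invented observations and uncertainty bounds for illustration.
        obs = np.array([3.1, 4.0, 2.5, 5.2, 4.4])
        low = np.array([2.8, 3.5, 2.0, 4.6, 3.9])
        up  = np.array([3.6, 4.6, 3.1, 5.8, 5.1])
        print(p_and_r_factor(obs, low, up))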

  11. Study of a tri-trophic prey-dependent food chain model of interacting populations.

    PubMed

    Haque, Mainul; Ali, Nijamuddin; Chakravarty, Santabrata

    2013-11-01

    The current paper accounts for the influence of intra-specific competition among predators in a prey-dependent tri-trophic food chain model of interacting populations. We offer a detailed mathematical analysis of the proposed food chain model to illustrate some of the significant results arising from the interplay of deterministic ecological phenomena and processes. Biologically feasible equilibria of the system are identified and the behaviour of the system around each of them is described. In particular, persistence, stability (local and global) and bifurcation (saddle-node, transcritical, Hopf-Andronov) analyses of this model are presented. Relevant results from previous well-known food chain models are compared with the current findings. Global stability analysis is also carried out by constructing appropriate Lyapunov functions. Numerical simulations show that the present system is capable of producing chaotic dynamics when the rate of self-interaction is very low, and that such chaotic behaviour disappears for a certain value of this rate. In addition, numerical simulations with experimental parameter values confirm the analytical results and show that intra-specific competition plays a potential role in controlling the chaotic dynamics of the system; the role of self-interaction in food chain models is thus illustrated for the first time. Finally, a discussion of the ecological applications of the analytical and numerical findings concludes the paper.

  12. A novel visual saliency analysis model based on dynamic multiple feature combination strategy

    NASA Astrophysics Data System (ADS)

    Lv, Jing; Ye, Qi; Lv, Wen; Zhang, Libao

    2017-06-01

    The human visual system can quickly focus on a small number of salient objects. This process is known as visual saliency analysis, and these salient objects are called the focus of attention (FOA). The visual saliency analysis mechanism can be used to extract salient regions and analyze the saliency of objects in an image, which saves time and avoids unnecessary computing costs. In this paper, a novel visual saliency analysis model based on a dynamic multiple feature combination strategy is introduced. In the proposed model, we first generate multi-scale feature maps of intensity, color and orientation using Gaussian pyramids and the center-surround difference. Then, we evaluate the contribution of each feature map to the saliency map according to the area of its salient regions and their average intensity, attaching different weights to different features according to their importance. Finally, we choose the largest salient region generated by the region growing method to perform the evaluation. Experimental results show that the proposed model not only achieves higher accuracy in saliency map computation than other traditional saliency analysis models, but also extracts salient regions with arbitrary shapes, which is of great value for image analysis and understanding.
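
    As a sketch of the center-surround step, the following uses differences of Gaussian-blurred images as a simple stand-in for the pyramid arithmetic, and weights each map by the mean intensity of its above-threshold region, echoing the dynamic combination idea; the scales, threshold and weighting rule are illustrative assumptions, not the paper's values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(img, center_sigmas=(1, 2), surround_ratio=4):
    """Approximate multi-scale center-surround differences on an
    intensity image: |G(sigma_c) - G(sigma_c * ratio)| per scale."""
    img = np.asarray(img, float)
    return [np.abs(gaussian_filter(img, s) -
                   gaussian_filter(img, s * surround_ratio))
            for s in center_sigmas]

def weight_by_salient_area(maps, thresh=0.5):
    """Weight each feature map by the mean intensity of its salient
    (above-threshold) region before summing into a saliency map."""
    out = np.zeros_like(maps[0])
    for m in maps:
        m = m / (m.max() + 1e-12)
        mask = m > thresh
        w = m[mask].mean() if mask.any() else 0.0
        out += w * m
    return out / (out.max() + 1e-12)

img = np.random.rand(64, 64)            # stand-in for an intensity channel
saliency = weight_by_salient_area(center_surround(img))
```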

  13. Variable selection based near infrared spectroscopy quantitative and qualitative analysis on wheat wet gluten

    NASA Astrophysics Data System (ADS)

    Lü, Chengxu; Jiang, Xunpeng; Zhou, Xingfan; Zhang, Yinqiao; Zhang, Naiqian; Wei, Chongfeng; Mao, Wenhua

    2017-10-01

    Wet gluten is a useful quality indicator for wheat, and short-wave near infrared spectroscopy (NIRS) is a high-performance technique with the advantages of economical, rapid and nondestructive testing. To study the feasibility of short-wave NIRS for analyzing wet gluten directly from wheat seed, 54 representative wheat seed samples were collected and scanned by spectrometer. Eight spectral pretreatment methods and a genetic algorithm (GA) variable selection method were used to optimize the analysis. Both quantitative and qualitative models of wet gluten were built by partial least squares regression and discriminant analysis. For quantitative analysis, normalization is the optimal pretreatment method; 17 wet-gluten-sensitive variables were selected by GA, and the GA model performs better than the all-variable model, with R2V=0.88 and RMSEV=1.47. For qualitative analysis, the automatic weighted least squares baseline is the optimal pretreatment method, and the all-variable models outperform the GA models. The correct classification rates for the three classes (<24%, 24-30%, and >30% wet gluten content) are 95.45%, 84.52%, and 90.00%, respectively. The short-wave NIRS technique shows potential for both quantitative and qualitative analysis of wet gluten in wheat seed.
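
    A minimal scikit-learn sketch of the PLS calibration step on synthetic data; the GA wavelength selection is replaced here by a hypothetical boolean mask `selected`, so the sample counts and scores are purely illustrative, not the paper's.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(54, 200))                       # 54 samples x 200 spectral variables
y = 2.0 * X[:, 10] + rng.normal(scale=0.5, size=54)  # synthetic wet gluten values

# Row-wise normalization pretreatment (the optimal choice for the quantitative model)
Xn = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

# Stand-in for GA variable selection: suppose 17 informative columns survive
selected = np.zeros(200, dtype=bool)
selected[:17] = True

Xtr, Xte, ytr, yte = train_test_split(Xn[:, selected], y, random_state=0)
pls = PLSRegression(n_components=5).fit(Xtr, ytr)
pred = pls.predict(Xte).ravel()
rmsev = np.sqrt(np.mean((pred - yte) ** 2))
r2v = np.corrcoef(pred, yte)[0, 1] ** 2
print(rmsev, r2v)
```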

  14. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science.

    PubMed

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets.
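
    OSA itself is accessed through the browser; for readers who want a scriptable flavor of the standard analyses it offers (Cox regression, survival curves), here is a hedged sketch using the lifelines Python package, which is an assumption of this note and not part of OSA. The dataset is hypothetical.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical dataset: follow-up time, event indicator, two predictors
df = pd.DataFrame({
    "time":  [5, 8, 12, 3, 9, 14, 7, 11, 6, 10],
    "event": [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "age":   [62, 55, 70, 48, 66, 59, 73, 51, 64, 58],
    "stage": [2, 1, 3, 2, 1, 3, 3, 1, 2, 2],
})

# Cox proportional hazards regression on the predictor variables
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
cph.print_summary()                      # hazard ratios per predictor

# Standard Kaplan-Meier survival curve
km = KaplanMeierFitter().fit(df["time"], df["event"])
ax = km.plot_survival_function()
```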

  15. Advanced Online Survival Analysis Tool for Predictive Modelling in Clinical Data Science

    PubMed Central

    Montes-Torres, Julio; Subirats, José Luis; Ribelles, Nuria; Urda, Daniel; Franco, Leonardo; Alba, Emilio; Jerez, José Manuel

    2016-01-01

    One of the prevailing applications of machine learning is the use of predictive modelling in clinical survival analysis. In this work, we present our view of the current situation of computer tools for survival analysis, stressing the need of transferring the latest results in the field of machine learning to biomedical researchers. We propose a web based software for survival analysis called OSA (Online Survival Analysis), which has been developed as an open access and user friendly option to obtain discrete time, predictive survival models at individual level using machine learning techniques, and to perform standard survival analysis. OSA employs an Artificial Neural Network (ANN) based method to produce the predictive survival models. Additionally, the software can easily generate survival and hazard curves with multiple options to personalise the plots, obtain contingency tables from the uploaded data to perform different tests, and fit a Cox regression model from a number of predictor variables. In the Materials and Methods section, we depict the general architecture of the application and introduce the mathematical background of each of the implemented methods. The study concludes with examples of use showing the results obtained with public datasets. PMID:27532883

  16. Investigation of market efficiency and Financial Stability between S&P 500 and London Stock Exchange: Monthly and yearly Forecasting of Time Series Stock Returns using ARMA model

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Nassir Zadeh, Farzaneh

    2016-08-01

    We investigated the presence of, and changes in, long-memory features in the returns and volatility dynamics of the S&P 500 and the London Stock Exchange using an ARMA model. Recently, multifractal analysis has evolved as an important way to explain the complexity of financial markets, which can hardly be described by linear methods of efficient market theory. In financial markets, the weak form of the efficient market hypothesis implies that price returns are serially uncorrelated sequences; in other words, prices should follow a random walk. The random walk hypothesis is evaluated against alternatives accommodating either unifractality or multifractality. Several studies find that the return volatility of stocks tends to exhibit long-range dependence, heavy tails, and clustering. Because stochastic processes with self-similarity possess long-range dependence and heavy tails, it has been suggested that self-similar processes be employed to capture these characteristics in return volatility modeling. The present study applies monthly and yearly forecasting of time series stock returns in the S&P 500 and the London Stock Exchange using the ARMA model. The statistical analysis shows that the ARMA model for the S&P 500 outperforms that for the London Stock Exchange and is capable of predicting over medium and long horizons using real known values. For the London Stock Exchange, the ARMA model for monthly stock returns outperforms the yearly model. A comparison between the S&P 500 and the London Stock Exchange shows that both markets are efficient and exhibit financial stability during periods of boom and bust.
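
    A minimal statsmodels sketch of fitting an ARMA(1,1) to a return series and producing multi-step forecasts, using synthetic returns in place of the paper's S&P 500 and London Stock Exchange data; the order (1, 0, 1) is illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
# Synthetic stand-in for 20 years of monthly log returns
returns = rng.normal(0.005, 0.04, size=240)

# ARMA(p, q) is ARIMA(p, 0, q): no differencing on returns
model = ARIMA(returns, order=(1, 0, 1)).fit()
print(model.summary().tables[1])       # AR/MA coefficient estimates
print(model.forecast(steps=12))        # 12-month-ahead forecast
```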

  17. Evaluation of risk factors for perforated peptic ulcer.

    PubMed

    Yamamoto, Kazuki; Takahashi, Osamu; Arioka, Hiroko; Kobayashi, Daiki

    2018-02-15

    The aim of this study was to evaluate the prediction factors for perforated peptic ulcer (PPU). At St. Luke's International Hospital in Tokyo, Japan, a case control study was performed between August 2004 and March 2016. All patients diagnosed with PPU were included. As control subjects, patients with age, sex and date of CT scan corresponding to those of the PPU subjects were included in the study at a proportion of 2 controls for every PPU subject. All data such as past medical histories, physical findings, and laboratory data were collected through chart reviews. Univariate analyses and multivariate analyses with logistic regression were conducted, and receiver operating characteristic curves (ROCs) were calculated to show validity. Sensitivity analyses were performed to confirm results using a stepwise method and conditional logistic regression. A total of 408 patients were included in this study; 136 were a group of patients with PPU, and 272 were a control group. Univariate analysis showed statistical significance in many categories. Four different models of multivariate analyses were conducted, and significant differences were found for muscular defense and a history of peptic ulcer disease (PUD) in all models. The conditional forced-entry analysis of muscular defense showed an odds ratio (OR) of 23.8 (95% confidence interval [CI]: 5.70-100.0), and the analysis of PUD history showed an OR of 6.40 (95% CI: 1.13-36.2). The sensitivity analysis showed consistent results, with an OR of 23.8-366.2 for muscular defense and an OR of 3.67-7.81 for PUD history. The area under the curve (AUC) of all models was high enough to confirm the results. However, anticoagulants, known risk factors for PUD, did not increase the risk for PPU in our study. The conditional forced-entry analysis of anticoagulant use showed an OR of 0.85 (95% CI: 0.03-22.3). The evaluation of prediction factors and development of a prediction rule for PPU may help our decision making in performing a CT scan for patients with acute abdominal pain.
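
    Odds ratios like those reported come straight from the exponentiated coefficients of a logistic regression. A sketch on synthetic data follows; the predictor effects below are invented for illustration and are not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 408                                  # same order as the study's sample size
muscular_defense = rng.integers(0, 2, n)
pud_history = rng.integers(0, 2, n)

# Synthetic outcome generated from an assumed logistic model
logit_p = -2.0 + 2.5 * muscular_defense + 1.5 * pud_history
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(float)

X = sm.add_constant(np.column_stack([muscular_defense, pud_history]))
fit = sm.Logit(y, X).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])     # ORs for the two predictors
ci = np.exp(fit.conf_int()[1:])          # 95% CIs on the OR scale
print(odds_ratios, ci)
```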

  18. Service-oriented model-encapsulation strategy for sharing and integrating heterogeneous geo-analysis models in an open web environment

    NASA Astrophysics Data System (ADS)

    Yue, Songshan; Chen, Min; Wen, Yongning; Lu, Guonian

    2016-04-01

    Earth environment is extremely complicated and constantly changing; thus, it is widely accepted that the use of a single geo-analysis model cannot accurately represent all details when solving complex geo-problems. Over several years of research, numerous geo-analysis models have been developed. However, a collaborative barrier between model providers and model users still exists. The development of cloud computing has provided a new and promising approach for sharing and integrating geo-analysis models across an open web environment. To share and integrate these heterogeneous models, encapsulation studies should be conducted that are aimed at shielding original execution differences to create services which can be reused in the web environment. Although some model service standards (such as Web Processing Service (WPS) and Geo Processing Workflow (GPW)) have been designed and developed to help researchers construct model services, various problems regarding model encapsulation remain. (1) The descriptions of geo-analysis models are complicated and typically require rich-text descriptions and case-study illustrations, which are difficult to fully represent within a single web request (such as the GetCapabilities and DescribeProcess operations in the WPS standard). (2) Although Web Service technologies can be used to publish model services, model users who want to use a geo-analysis model and copy the model service into another computer still encounter problems (e.g., they cannot access the model deployment dependencies information). This study presents a strategy for encapsulating geo-analysis models to reduce problems encountered when sharing models between model providers and model users and supports the tasks with different web service standards (e.g., the WPS standard). A description method for heterogeneous geo-analysis models is studied. Based on the model description information, the methods for encapsulating the model-execution program to model services and for describing model-service deployment information are also included in the proposed strategy. Hence, the model-description interface, model-execution interface and model-deployment interface are studied to help model providers and model users more easily share, reuse and integrate geo-analysis models in an open web environment. Finally, a prototype system is established, and the WPS standard is employed as an example to verify the capability and practicability of the model-encapsulation strategy. The results show that it is more convenient for modellers to share and integrate heterogeneous geo-analysis models in cloud computing platforms.

  19. Statistical analysis of temperature data sampled at Station-M in the Norwegian Sea

    NASA Astrophysics Data System (ADS)

    Lorentzen, Torbjørn

    2014-02-01

    The paper analyzes sea temperature data sampled at Station-M in the Norwegian Sea. The data cover the period 1948-2010. The following questions are addressed: What type of stochastic process characterizes the temperature series? Are there any changes or patterns which indicate climate change? Are there any characteristics in the data which can be linked to the shrinking sea ice in the Arctic area? Can the series be modeled consistently and applied in forecasting future sea temperature? The paper applies the following methods: augmented Dickey-Fuller tests for unit roots and stationarity; ARIMA models for univariate modeling; cointegration and error-correction models for estimating the short- and long-term dynamics of the non-stationary series; Granger-causality tests for analyzing the interaction between the deep- and upper-layer temperatures; and simultaneous equation systems for forecasting future temperature. The paper shows that temperature at 2000 m Granger-causes temperature at 150 m, and that the 2000 m series can serve as an important information carrier for the long-term development of sea temperature in this geographical area. Descriptive statistics show that the temperature level has been on a positive trend since the beginning of the 1980s, as is also measured in most of the oceans in the North Atlantic. The analysis shows that the temperature series are cointegrated, which means they share the same long-term stochastic trend and do not diverge too far from each other. The measured long-term temperature increase is one of the factors that can explain the shrinking summer sea ice in the Arctic region, and the analysis shows a significant negative correlation between the shrinking sea ice and the sea temperature at Station-M. The paper shows that the temperature forecasts are conditioned on the properties of the stochastic processes, the causality pattern between the variables, and the specification of the model. The estimated models forecast that temperature at 150 m is expected to increase by 0.018 °C per year, while deep-water temperature at 2000 m is expected to increase by between 0.0022 and 0.0024 °C per year.
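
    The unit-root and causality machinery described is available in statsmodels; a minimal sketch on synthetic series follows, where the lag structure and noise levels are invented stand-ins for the Station-M data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(3)
n = 744                                    # ~62 years of monthly samples
deep = np.cumsum(rng.normal(0, 0.01, n))   # synthetic 2000 m series (random walk)
upper = np.zeros(n)
upper[3:] = 0.5 * deep[:-3]                # upper layer responds with a 3-month lag
upper += rng.normal(0, 0.05, n)

adf_stat, pvalue, *_ = adfuller(upper)
print(f"ADF p-value for the upper-layer series: {pvalue:.3f}")  # high p => unit root

# Column order matters: response first, candidate cause second
data = pd.DataFrame({"upper": upper, "deep": deep})
res = grangercausalitytests(data[["upper", "deep"]], maxlag=6)
```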

  20. Analyzing Multiple Outcomes in Clinical Research Using Multivariate Multilevel Models

    PubMed Central

    Baldwin, Scott A.; Imel, Zac E.; Braithwaite, Scott R.; Atkins, David C.

    2014-01-01

    Objective Multilevel models have become a standard data analysis approach in intervention research. Although the vast majority of intervention studies involve multiple outcome measures, few studies use multivariate analysis methods. The authors discuss multivariate extensions to the multilevel model that can be used by psychotherapy researchers. Method and Results Using simulated longitudinal treatment data, the authors show how multivariate models extend common univariate growth models and how the multivariate model can be used to examine multivariate hypotheses involving fixed effects (e.g., does the size of the treatment effect differ across outcomes?) and random effects (e.g., is change in one outcome related to change in the other?). An online supplemental appendix provides annotated computer code and simulated example data for implementing a multivariate model. Conclusions Multivariate multilevel models are flexible, powerful models that can enhance clinical research. PMID:24491071

  1. How can we reduce phosphorus export from lowland polders? Implications from a sensitivity analysis of a coupled model.

    PubMed

    Huang, Jiacong; Gao, Junfeng; Yan, Renhua

    2016-08-15

    Phosphorus (P) export from lowland polders has caused severe water pollution. Numerical models are an important resource that helps water managers control P export. This study coupled three models, i.e., the Phosphorus Dynamic model for Polders (PDP), the Integrated Catchments model of Phosphorus dynamics (INCA-P) and the Universal Soil Loss Equation (USLE), to describe P dynamics in polders. Based on the coupled models and a dataset collected from Polder Jian in China, sensitivity analyses were carried out to analyze the cause-effect relationships between environmental factors and P export from Polder Jian. The sensitivity analysis results showed that P export from Polder Jian was strongly affected by air temperature, precipitation and fertilization. Proper fertilization management should therefore be a strategic priority for reducing P export from Polder Jian. This study demonstrated the success of model coupling and its application in investigating potential strategies to support pollution control in polder systems. Copyright © 2016. Published by Elsevier B.V.

  2. A stochastic evolutionary model generating a mixture of exponential distributions

    NASA Astrophysics Data System (ADS)

    Fenner, Trevor; Levene, Mark; Loizou, George

    2016-02-01

    Recent interest in human dynamics has stimulated the investigation of the stochastic processes that explain human behaviour in various contexts, such as mobile phone networks and social media. In this paper, we extend the stochastic urn-based model proposed in [T. Fenner, M. Levene, G. Loizou, J. Stat. Mech. 2015, P08015 (2015)] so that it can generate mixture models, in particular, a mixture of exponential distributions. The model is designed to capture the dynamics of survival analysis, traditionally employed in clinical trials, reliability analysis in engineering, and more recently in the analysis of large data sets recording human dynamics. The mixture modelling approach, which is relatively simple and well understood, is very effective in capturing heterogeneity in data. We provide empirical evidence for the validity of the model, using a data set of popular search engine queries collected over a period of 114 months. We show that the survival function of these queries is closely matched by the exponential mixture solution for our model.
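
    The paper obtains the exponential mixture analytically from the urn process; purely as a complementary numerical sketch, here is a standard EM fit of a k-component exponential mixture to synthetic survival times. This estimator is not the paper's method, and the component weights and rates below are invented.

```python
import numpy as np

def fit_exp_mixture(x, k=2, iters=200):
    """EM for a k-component exponential mixture:
    f(x) = sum_j w_j * lam_j * exp(-lam_j * x)."""
    lam = 1.0 / np.quantile(x, np.linspace(0.25, 0.75, k))  # spread-out initial rates
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = w * lam * np.exp(-np.outer(x, lam))          # shape (n, k)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixture weights and rates
        w = resp.mean(axis=0)
        lam = resp.sum(axis=0) / (resp * x[:, None]).sum(axis=0)
    return w, lam

rng = np.random.default_rng(1)
x = np.concatenate([rng.exponential(1.0, 7000), rng.exponential(10.0, 3000)])
print(fit_exp_mixture(x))   # expect weights near (0.7, 0.3), rates near (1.0, 0.1)
```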

  3. Seismic slope-performance analysis: from hazard map to decision support system

    USGS Publications Warehouse

    Miles, Scott B.; Keefer, David K.; Ho, Carlton L.

    1999-01-01

    In response to the growing recognition by engineers and decision-makers of the regional effects of earthquake-induced landslides, this paper presents a general approach to seismic landslide zonation based on the popular Newmark sliding block analogy for modeling coherent landslides. Four existing models based on the sliding block analogy are compared. The comparison shows that the models forecast notably different levels of slope performance. Considering this discrepancy, along with the limitations of static maps as a decision tool, a spatial decision support system (SDSS) for seismic landslide analysis is proposed, which will support investigations over multiple scales for any number of earthquake scenarios and input conditions. Most importantly, the SDSS will allow use of any seismic landslide analysis model and zonation approach. Developments associated with the SDSS will produce an object-oriented model for encapsulating spatial data, an object-oriented specification to allow construction of models from modular objects, and a direct-manipulation, dynamic user interface that adapts to the particular seismic landslide model configuration.
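
    For readers unfamiliar with the sliding-block analogy, a minimal one-directional Newmark displacement routine is sketched below; the synthetic ground motion and the critical acceleration value are invented for illustration and are not from any of the four compared models.

```python
import numpy as np

def newmark_displacement(accel, dt, a_crit):
    """Rigid-block Newmark analysis: the block starts sliding when ground
    acceleration exceeds the critical acceleration a_crit, the relative
    velocity is integrated until it returns to zero, and displacement
    accumulates while sliding. One-directional (downslope) sliding only."""
    v, d = 0.0, 0.0
    for a in accel:
        if v > 0.0 or a > a_crit:
            v += (a - a_crit) * dt   # relative acceleration while sliding
            v = max(v, 0.0)          # the block cannot slide upslope here
            d += v * dt
    return d

# Synthetic decaying sinusoidal 'ground motion' (m/s^2) sampled at 100 Hz
dt = 0.01
t = np.arange(0, 10, dt)
accel = 3.0 * np.sin(2 * np.pi * 1.0 * t) * np.exp(-0.3 * t)
print(newmark_displacement(accel, dt, a_crit=0.8))  # permanent displacement, m
```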

  4. Beyond Nature versus Nurture: DF Analysis of Nonshared Influences on Problem Behaviors.

    ERIC Educational Resources Information Center

    Rodgers, Joseph Lee; And Others

    1994-01-01

    Two studies of twin, sibling, and cousin pairs among children attempted to extend a model, called DF analysis, which separates hereditary and shared environmental influences. First study supported the validity of DF analysis applied to kinship height and weight data. The second study showed that spanking, reading, and quality of home environment…

  5. Suitability of a three-dimensional model to measure empathy and its relationship with social and normative adjustment in Spanish adolescents: a cross-sectional study

    PubMed Central

    Gómez-Ortiz, Olga; Ortega-Ruiz, Rosario; Jolliffe, Darrick; Romera, Eva M.

    2017-01-01

    Objectives (1) To examine the psychometric properties of the Basic Empathy Scale (BES) with Spanish adolescents, comparing a two- and a three-dimensional structure; (2) to analyse the relationship between three-dimensional empathy and social and normative adjustment in school. Design Transversal, ex post facto retrospective study. Confirmatory factor analysis, multifactorial invariance analysis and structural equation models were used. Participants 747 students (51.3% girls) from Cordoba, Spain, aged 12–17 years (M=13.8; SD=1.21). Results The original two-dimensional structure was confirmed (cognitive empathy, affective empathy), but a three-dimensional structure showed better psychometric properties, highlighted by the good fit found in confirmatory factor analysis and adequate internal consistency values, measured with Cronbach's alpha and McDonald's omega. Composite reliability and average variance extracted also showed better indices for the three-factor model. The research likewise showed evidence of measurement invariance across gender. All the factors of the final three-dimensional BES model were directly and significantly associated with social and normative adjustment, the strongest association being with cognitive empathy. Conclusions This research supports advances in neuroscience, developmental psychology and psychopathology through a three-dimensional version of the BES, which represents an improvement on the original two-factor model. The organisation of empathy into three factors benefits the understanding of social and normative adjustment in adolescents, in which emotional disengagement favours adjusted peer relationships. Psychoeducational interventions aimed at improving the quality of social life in schools should target these components of empathy. PMID:28951400
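
    Of the reliability statistics mentioned, Cronbach's alpha is the simplest to reproduce; a short numpy sketch on simulated congeneric items follows (the sample size mirrors the study, but the data are synthetic).

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(747, 1))                       # one empathy factor
items = latent + rng.normal(scale=0.8, size=(747, 5))    # five noisy indicators
print(cronbach_alpha(items))
```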

  6. Multifractal analysis of a GCM climate

    NASA Astrophysics Data System (ADS)

    Carl, P.

    2003-04-01

    Multifractal analysis using the Wavelet Transform Modulus Maxima (WTMM) approach is being applied to the climate of a Mintz--Arakawa type, coarse resolution, two--layer AGCM. The model shows a backwards running period multiplication scenario throughout the northern summer, subsequent to a 'hard', subcritical Hopf bifurcation late in spring. This 'route out of chaos' (seen in cross sections of a toroidal phase space structure) is born in the planetary monsoon system, which inflates the seasonal 'cycle' into these higher order structures and is blamed for the pronounced intraseasonal--to--centennial model climate variability. Previous analyses of the latter using advanced modal decompositions showed regularity based patterns in the time--frequency plane which are qualitatively similar to those obtained from the real world. The closer look taken here at the singularity structures, as a fundamental diagnostic supplement, aims both at a more complete understanding (and quantification) of the model's qualitative dynamics and at the search for further tools of model intercomparison and verification in this respect. The analysing wavelet is the 10th derivative of the Gaussian, which might suffice to suppress regular patterns in the data. Intraseasonal attractors, studied in time series of model precipitation over Central India, show singularity spectra that shift and broaden towards both more violent extreme events (premonsoon--monsoon transition) and weaker events (late summer to postmonsoon transition). Hints at a fractal basin boundary are found close to the transition from period--2 to period--1 in the monsoon activity cycle. Interannual analyses are provided for runs with varied solar constants. To address the (in--)stationarity issue, first results are presented with a windowed multifractal analysis of longer--term runs ("singularity spectrogram").

  7. KIC 8164262: a heartbeat star showing tidally induced pulsations with resonant locking

    NASA Astrophysics Data System (ADS)

    Hambleton, K.; Fuller, J.; Thompson, S.; Prša, A.; Kurtz, D. W.; Shporer, A.; Isaacson, H.; Howard, A. W.; Endl, M.; Cochran, W.; Murphy, S. J.

    2018-02-01

    We present the analysis of KIC 8164262, a heartbeat star with a high-amplitude (∼1 mmag), tidally resonant pulsation (a mode in resonance with the orbit) at 229 times the orbital frequency and a plethora of tidally induced g-mode pulsations (modes excited by the orbit). The analysis combines Kepler light curves with follow-up spectroscopic data from the Keck telescope, KPNO (Kitt Peak National Observatory) 4-m Mayall telescope and the 2.7-m telescope at the McDonald observatory. We apply the binary modelling software, PHOEBE, to the Kepler light curve and radial velocity data to determine a detailed binary star model that includes the prominent pulsation and Doppler boosting, alongside the usual attributes of a binary star model (including tidal distortion and reflection). The results show that the system contains a slightly evolved F star with an M secondary companion in a highly eccentric orbit (e = 0.886). We use the results of the binary star model in a companion paper (Fuller) where we show that the prominent pulsation can be explained by a tidally excited oscillation mode held near resonance by a resonance locking mechanism.

  8. Spherical Cap Harmonic Modelling of 400 Years of Secular Variation in the South-west Pacific

    NASA Astrophysics Data System (ADS)

    Ingham, M.; Alfheid, M.; Ingham, E. M.; Turner, G. M.

    2014-12-01

    Historical magnetic data recorded in ship's logs on voyages of exploration and trade in the south-west Pacific have been used as a basis for constructing a model of secular variation in the region using spherical cap harmonic (SCH) analysis. The spherical cap used is centred on colatitude 115° and longitude 160° and has a radius of 50°, thus covering New Zealand, Australia and parts of Antarctica. Gaps in the observational data have been filled by an iterative procedure started by using IGRF field values to obtain SCH models for 2000, 1950 and 1900 and assuming that the spherical cap coefficients have a linear variation in time over the 400 year time period of the model, as is observed to a first approximation for Gauss coefficients calculated from a global spherical harmonic analysis. The resulting field models have generally smooth spatial and temporal variations in declination, inclination and intensity which show some differences from the variations calculated using the global spherical harmonic model gufm1. The technique clearly shows promise for producing more refined models of secular variation in the south-west Pacific when the historical data are supplemented by archeomagnetic and paleomagnetic data.

  9. A study of a diffusive model of asset returns and an empirical analysis of financial markets

    NASA Astrophysics Data System (ADS)

    Alejandro Quinones, Angel Luis

    A diffusive model for market dynamics is studied and the predictions of the model are compared to real financial markets. The model has a non-constant diffusion coefficient which depends both on the asset value and the time. A general solution for the distribution of returns is obtained and shown to match the results of computer simulations for two simple cases, piecewise linear and quadratic diffusion. The effects of discreteness in the market dynamics on the model are also studied. For the quadratic diffusion case, a type of phase transition leading to fat tails is observed as the discrete distribution approaches the continuum limit. It is also found that the model captures some of the empirical stylized facts observed in real markets, including fat-tails and scaling behavior in the distribution of returns. An analysis of empirical data for the EUR/USD currency exchange rate and the S&P 500 index is performed. Both markets show time scaling behavior consistent with a value of 1/2 for the Hurst exponent. Finally, the results show that the distribution of returns for the two markets is well fitted by the model, and the corresponding empirical diffusion coefficients are determined.
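
    A sketch of how such a model can be simulated: Euler-Maruyama integration of dx = sigma(x, t) dW with a non-constant diffusion coefficient. The quadratic form of sigma below is an illustrative stand-in for the paper's specification, and the fat tails show up as excess kurtosis of the simulated returns.

```python
import numpy as np
from scipy.stats import kurtosis

def simulate_returns(sigma, x0=0.0, T=1.0, n=1_000, paths=5_000, seed=0):
    """Euler-Maruyama for dx = sigma(x, t) dW with a state- and
    time-dependent diffusion coefficient."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.full(paths, x0)
    for i in range(n):
        x += sigma(x, i * dt) * np.sqrt(dt) * rng.normal(size=paths)
    return x

# Hypothetical quadratic diffusion: volatility grows with |x|
quad = lambda x, t: 0.2 * np.sqrt(1.0 + 2.0 * x ** 2)
returns = simulate_returns(quad)
print(kurtosis(returns, fisher=False))  # well above 3 signals fat tails
```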

  10. Altered functional brain connectivity in children and young people with opsoclonus-myoclonus syndrome.

    PubMed

    Chekroud, Adam M; Anand, Geetha; Yong, Jean; Pike, Michael; Bridge, Holly

    2017-01-01

    Opsoclonus-myoclonus syndrome (OMS) is a rare, poorly understood condition that can result in long-term cognitive, behavioural, and motor sequelae. Several studies have investigated structural brain changes associated with this condition, but little is known about changes in function. This study aimed to investigate changes in brain functional connectivity in patients with OMS. Seven patients with OMS and 10 age-matched comparison participants underwent 3T magnetic resonance imaging (MRI) to acquire resting-state functional MRI data (whole-brain echo-planar images; 2mm isotropic voxels; multiband factor ×2) for a cross-sectional study. A seed-based analysis identified brain regions in which signal changes over time correlated with the cerebellum. Model-free analysis was used to determine brain networks showing altered connectivity. In patients with OMS, the motor cortex showed significantly reduced connectivity, and the occipito-parietal region significantly increased connectivity with the cerebellum relative to the comparison group. A model-free analysis also showed extensive connectivity within a visual network, including the cerebellum and basal ganglia, not present in the comparison group. No other networks showed any differences between groups. Patients with OMS showed reduced connectivity between the cerebellum and motor cortex, but increased connectivity with occipito-parietal regions. This pattern of change supports widespread brain involvement in OMS. © 2016 Mac Keith Press.

  11. Image analysis and modeling in medical image computing. Recent developments and advances.

    PubMed

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice e.g. to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications and medical images like radiographic images, dual-energy CT images, MR images, diffusion tensor images as well as microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in future.

  12. Viscoplastic analysis of an experimental cylindrical thrust chamber liner

    NASA Technical Reports Server (NTRS)

    Arya, Vinod K.; Arnold, Steven M.

    1991-01-01

    A viscoplastic stress-strain analysis of an experimental cylindrical thrust chamber is presented. A viscoplastic constitutive model incorporating a single internal state variable that represents kinematic hardening was employed to investigate whether such a model could predict the experimentally observed behavior of the thrust chamber. Two types of loading cycles were considered: a short cycle of 3.5 sec duration that corresponded to the experiments, and an extended loading cycle of 485.1 sec duration that is typical of the Space Shuttle Main Engine (SSME) operating cycle. The analysis qualitatively replicated the deformation behavior of the component as observed in experiments designed to simulate SSME operating conditions. The analysis also showed that the failure mode and location in the component may depend on the loading cycle. The results indicate that using viscoplastic models for structural analysis can lead to a more realistic life assessment of thrust chambers.

  13. Meta-analysis in Stata using gllamm.

    PubMed

    Bagos, Pantelis G

    2015-12-01

    There are several user-written programs for performing meta-analysis in Stata (Stata Statistical Software: College Station, TX: Stata Corp LP). These include metan, metareg, mvmeta, and glst. However, there are several cases for which these programs do not suffice. For instance, there is no software for performing univariate meta-analysis with correlated estimates, for multilevel or hierarchical meta-analysis, or for meta-analysis of longitudinal data. In this work, we show with practical applications that many disparate models, including but not limited to the ones mentioned earlier, can be fitted using gllamm. The software is very versatile and can handle a wide variety of models with applications in a wide range of disciplines. The method presented here takes advantage of these modeling capabilities and makes use of appropriate transformations, based on the Cholesky decomposition of the inverse of the covariance matrix, known as generalized least squares, in order to handle correlated data. The models described earlier can be thought of as special instances of a general linear mixed-model formulation, but to the author's knowledge, a general exposition in order to incorporate all the available models for meta-analysis as special cases and the instructions to fit them in Stata has not been presented so far. Source code is available at http://www.compgen.org/tools/gllamm. Copyright © 2015 John Wiley & Sons, Ltd.
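
    The Cholesky/GLS transformation described is easy to state outside Stata; a numpy sketch follows, with an invented three-estimate example. In the gllamm workflow the same whitening is applied before fitting the mixed model.

```python
import numpy as np

def gls_transform(y, X, V):
    """Whiten correlated effect estimates: given estimates y with
    covariance V, premultiply by the Cholesky factor of V^{-1} so that
    ordinary least squares on the transformed data is exactly GLS."""
    L = np.linalg.cholesky(np.linalg.inv(V))  # V^{-1} = L @ L.T
    return L.T @ y, L.T @ X

# Three correlated effect estimates from one study; meta-regression on
# an intercept only, i.e., a pooled effect
y = np.array([0.30, 0.25, 0.40])
V = np.array([[0.04, 0.01, 0.01],
              [0.01, 0.05, 0.02],
              [0.01, 0.02, 0.06]])
X = np.ones((3, 1))

ys, Xs = gls_transform(y, X, V)
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(beta)   # GLS pooled estimate
```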

  14. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  15. Error rate information in attention allocation pilot models

    NASA Technical Reports Server (NTRS)

    Faulkner, W. H.; Onstott, E. D.

    1977-01-01

    The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed, to create both symmetric and asymmetric two axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve performance of the full model whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.

  16. F-14 modeling study

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Baron, S.

    1984-01-01

    Preliminary results in the application of a closed loop pilot/simulator model to the analysis of some simulator fidelity issues are discussed in the context of an air to air target tracking task. The closed loop model is described briefly. Then, problem simplifications that are employed to reduce computational costs are discussed. Finally, model results showing sensitivity of performance to various assumptions concerning the simulator and/or the pilot are presented.

  17. A quarter of a century of the DBQ: some supplementary notes on its validity with regard to accidents.

    PubMed

    de Winter, Joost C F; Dodou, Dimitra; Stanton, Neville A

    2015-01-01

    This article synthesises the latest information on the relationship between the Driver Behaviour Questionnaire (DBQ) and accidents. We show by means of computer simulation that correlations with accidents are necessarily small because accidents are rare events. An updated meta-analysis on the zero-order correlations between the DBQ and self-reported accidents yielded an overall r of .13 (fixed-effect and random-effects models) for violations (57,480 participants; 67 samples) and .09 (fixed-effect and random-effects models) for errors (66,028 participants; 56 samples). An analysis of a previously published DBQ dataset (975 participants) showed that by aggregating across four measurement occasions, the correlation coefficient with self-reported accidents increased from .14 to .24 for violations and from .11 to .19 for errors. Our meta-analysis also showed that DBQ violations (r = .24; 6353 participants; 20 samples) but not DBQ errors (r = - .08; 1086 participants; 16 samples) correlated with recorded vehicle speed. Practitioner Summary: The DBQ is probably the most widely used self-report questionnaire in driver behaviour research. This study shows that DBQ violations and errors correlate moderately with self-reported traffic accidents.

  18. Full waveform seismic modelling of Chalk Group rocks from the Danish North Sea - implications for velocity analysis

    NASA Astrophysics Data System (ADS)

    Montazeri, Mahboubeh; Moreau, Julien; Uldall, Anette; Nielsen, Lars

    2015-04-01

    This study aims at understanding seismic wave propagation in the finely layered Chalk Group, which constitutes the main reservoir for oil and gas production in the Danish North Sea. The starting point of our analysis is the Nana-1XP exploration well, which shows strong seismic contrasts inside the Chalk Group. For the purposes of seismic waveform modelling, we assume a one-dimensional model with homogeneous and isotropic layers designed to capture the main fluctuations in petrophysical properties observed in the well logs. The model is representative of the stratigraphic sequences of the area and illustrates the highly contrasting properties of the Chalk Group. A finite-difference (FD) full-waveform technique, with both acoustic and elastic equations, is applied to the model. Velocity analysis of seismic data is a crucial step for stacking, multiple suppression, migration, and depth conversion of the seismic record. Semblance analysis of the synthetic seismic records shows strong amplitude peaks outside the expected range for the time interval representing the Chalk Group, especially at its base. The various synthetic results illustrate the occurrence and impact of different wave types, including multiples, converted waves and refracted waves. The interference of these wave types with the primary reflections can explain the strong anomalous amplitudes in the semblance plot. In particular, strongly contrasting thin beds play an important role in generating the anomalously high amplitude values. If these anomalous amplitudes are used to pick velocities, proper stacking of the data is impeded, and migration and depth conversion may be sub-optimal. Consequently, this may lead to erroneous or sub-optimal seismic images of the Chalk Group and the underlying layers. Our results highlight the importance of detailed velocity analysis and proper picking of velocity functions in the Chalk Group intervals. We show that applying standard front mutes in the mid- and far-offset ranges does not significantly improve the results of the standard semblance analysis. These synthetic modelling results could be used as starting points for defining optimized processing flows for seismic data sets acquired in the study area, with the aim of improving the imaging of the Chalk Group.

  19. Comparative analysis of bleeding risk by the location and shape of arachnoid cysts: a finite element model analysis.

    PubMed

    Lee, Chang-Hyun; Han, In Seok; Lee, Ji Yeoun; Phi, Ji Hoon; Kim, Seung-Ki; Kim, Young-Eun; Wang, Kyu-Chang

    2017-01-01

    Although arachnoid cysts (ACs) are observed in various locations, mainly sylvian ACs are regarded as being associated with bleeding. The reason for this selective association of sylvian ACs with bleeding is not well understood. This study investigates the effect of the location and shape of ACs on the risk of bleeding. A previously developed finite element model of the head/brain was modified to create models of sylvian, suprasellar, and posterior fossa ACs. A spherical AC was placed at each location to compare the effect of AC location, and bowl-shaped and oval-shaped AC models were developed to compare the effect of shape. The shear force on the spot-weld elements (SFSW) was measured between the dura and the outer wall of the ACs, or the comparable arachnoid membrane in the normal model. All AC models revealed higher SFSW than the comparable normal models. By location, the sylvian AC displayed the highest SFSW for frontal and lateral impacts. By shape, AC models with a small outer wall showed higher SFSW than large-wall models in the sylvian area, and lower SFSW than large-wall models in the posterior fossa. In regression analysis, the presence of an AC was the only independent risk factor for bleeding. The bleeding mechanism of ACs is very complex, and the risk quantification failed to show a significant role of the location and shape of ACs. The presence of an AC increases shear force under impact conditions and may be a risk factor for bleeding, while the sylvian location of an AC may not confer additional bleeding risk.

  20. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  1. On the analysis of competitive displacement in dengue disease transmission

    NASA Astrophysics Data System (ADS)

    Wijaya, Karunia P.; Nuraini, Nuning; Soewono, Edy; Handayani, Dewi

    2014-03-01

    We study a host-vector model involving a competitive displacement mechanism within a specific DENV serotype, both in human blood and in mosquito blood. Using phylogenetic analysis, virologists worldwide have linked severe manifestations of dengue fever to the displacement of weakly virulent pathogens (native strains) by more virulent pathogens (invasive strains) within one serotype. We construct an SIR model for humans and an SI model for mosquitoes to explore the key determinants of these displacements. Nonnegativity and boundedness of the solutions, as well as the basic reproduction number (R0), are analyzed to verify that the model is biologically meaningful. To generate predictions of the outcomes of control strategies, we derive an optimal control model that involves two control instruments: fluid infusion (for humans) and fumigation (for vectors). Numerical results show the host-vector dynamics over an observation period, both with and without control.
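
    A single-strain skeleton of such a host-vector system, together with its next-generation-matrix R0, can be sketched as follows. The equations and parameter values are an illustrative assumption of this note; the paper's model adds the second, competing strain on top of this structure.

```python
import numpy as np
from scipy.integrate import solve_ivp

def host_vector(t, u, beta_h=0.3, beta_v=0.2, gamma=0.1, mu_v=0.05):
    """Hypothetical single-strain SIR (host) / SI (vector) skeleton with
    constant vector population (births balance deaths at rate mu_v)."""
    Sh, Ih, Rh, Sv, Iv = u
    dSh = -beta_h * Sh * Iv
    dIh = beta_h * Sh * Iv - gamma * Ih
    dRh = gamma * Ih
    dSv = mu_v * (Sv + Iv) - beta_v * Sv * Ih - mu_v * Sv
    dIv = beta_v * Sv * Ih - mu_v * Iv
    return [dSh, dIh, dRh, dSv, dIv]

# Next-generation matrix at the disease-free state gives
# R0^2 = (beta_h * Sh0 / mu_v) * (beta_v * Sv0 / gamma)
beta_h, beta_v, gamma, mu_v = 0.3, 0.2, 0.1, 0.05
Sh0, Sv0 = 1.0, 1.0
R0 = np.sqrt((beta_h * Sh0 / mu_v) * (beta_v * Sv0 / gamma))

sol = solve_ivp(host_vector, (0, 400), [Sh0, 1e-3, 0.0, Sv0, 0.0])
print(R0, sol.y[1].max())   # R0 > 1 and a visible epidemic peak in hosts
```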

  2. Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker

    NASA Technical Reports Server (NTRS)

    Cabell, Randolph H.

    2008-01-01

    Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.

  3. Model-Driven Safety Analysis of Closed-Loop Medical Systems

    PubMed Central

    Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup

    2013-01-01

    In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure. PMID:24177176

  4. Model-Driven Safety Analysis of Closed-Loop Medical Systems.

    PubMed

    Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup

    2012-10-26

    In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure.

  5. Study on SOC wavelet analysis for LiFePO4 battery

    NASA Astrophysics Data System (ADS)

    Liu, Xuepeng; Zhao, Dongmei

    2017-08-01

    Improving the prediction accuracy of the state of charge (SOC) can reduce the conservatism and complexity of control strategies for LiFePO4 battery systems, such as scheduling, optimization and planning. Based on an analysis of the relationship between historical SOC data and external stress factors, an SOC estimation-correction prediction model based on wavelet analysis is established. A high-precision wavelet neural network prediction model implements the forecast step, while measured external stress data are used to update the parameter estimates in the model, implementing the correction step; this allows the forecast model to adapt to the LiFePO4 battery as its operating point varies under rated charge and discharge conditions. The test results show that the method yields a high-precision prediction model even when the input and output of the LiFePO4 battery change frequently.

  6. Modeling and dynamic environment analysis technology for spacecraft

    NASA Astrophysics Data System (ADS)

    Fang, Ren; Zhaohong, Qin; Zhong, Zhang; Zhenhao, Liu; Kai, Yuan; Long, Wei

    Spacecraft sustain complex and severe vibration and acoustic environments during flight. Predicting the resulting structural responses, including numerical prediction of fluctuating pressure, model updating, and random vibration and acoustic analysis, plays an important role during the design, manufacture and ground testing of spacecraft. In this paper, Monotony Integrative Large Eddy Simulation (MILES) is introduced to predict the fluctuating pressure on the fairing. The exact flow structures of the fairing wall surface under different Mach numbers are obtained, and a spacecraft model is then constructed using the finite element method (FEM). According to the modal test data, the model is updated by the penalty method. On this basis, the random vibration and acoustic responses of the fairing and satellite are analyzed by different methods. The simulated results agree well with the experimental ones, which shows the validity of the modeling and dynamic environment analysis technology. This information can better support test planning, the definition of test conditions and the design of optimal structures.

  7. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  8. Models of the Behavior of People Searching the Internet: A Petri Net Approach.

    ERIC Educational Resources Information Center

    Kantor, Paul B.; Nordlie, Ragnar

    1999-01-01

    Illustrates how various key abstractions of information finding, such as document relevance, a desired number of relevant documents, discouragement, exhaustion, and satisfaction can be modeled using the Petri Net framework. Shows that this model leads naturally to a new approach to collection of user data, and to analysis of transaction logs.…

  9. Use of Words and Visuals in Modelling Context of Annual Plant

    ERIC Educational Resources Information Center

    Park, Jungeun; DiNapoli, Joseph; Mixell, Robert A.; Flores, Alfinio

    2017-01-01

    This study looks at the various verbal and non-verbal representations used in a process of modelling the number of annual plants over time. Analysis focuses on how various representations such as words, diagrams, letters and mathematical equations evolve in the mathematization process of the modelling context. Our results show that (1) visual…

  10. Study of Aerothermodynamic Modeling Issues Relevant to High-Speed Sample Return Vehicles

    NASA Technical Reports Server (NTRS)

    Johnston, Christopher O.

    2014-01-01

    This paper examines the application of state-of-the-art coupled ablation and radiation simulations to high-speed sample return vehicles, such as those returning from Mars or an asteroid. A defining characteristic of these entries is that the surface recession rates and temperatures are driven by nonequilibrium convective and radiative heating through a boundary layer with significant surface blowing and ablation products. Measurements relevant to validating the simulation of these phenomena are reviewed, and the Stardust entry is identified as providing the best relevant measurements. A coupled ablation and radiation flowfield analysis is presented that implements a finite-rate surface chemistry model. Comparisons between this finite-rate model and an equilibrium ablation model show that, while good agreement is seen for diffusion-limited oxidation cases, the finite-rate model predicts up to 50% lower char rates than the equilibrium model at sublimation conditions. Both the equilibrium and finite-rate models predict significant negative mass flux at the surface due to sublimation of atomic carbon. A sensitivity analysis to flowfield and surface chemistry rates shows that, for a sample return capsule at 10, 12, and 14 km/s, the sublimation rates for C and C3 produce the largest changes in the convective flux, radiative flux, and char rate. A parametric uncertainty analysis of the radiative heating due to radiation modeling parameters indicates uncertainties ranging from 27% at 10 km/s to 36% at 14 km/s. Applying the developed coupled analysis to the Stardust entry results in temperatures within 10% of those inferred from observations and final recession values within 20% of measurements, which improves upon the 60% over-prediction at the stagnation point obtained through an uncoupled analysis. Emission from CN Violet is shown to be over-predicted by nearly an order of magnitude, which is consistent with the results of previous independent analyses. Finally, the coupled analysis is applied to a 14 km/s Earth entry representative of a Mars sample return. Although the radiative heating provides a larger fraction of the total heating, the influence of ablation and radiation on the flowfield is shown to be similar to that for Stardust.

  11. Process-level improvements in CMIP5 models and their impact on tropical variability, the Southern Ocean, and monsoons

    NASA Astrophysics Data System (ADS)

    Lauer, Axel; Jones, Colin; Eyring, Veronika; Evaldsson, Martin; Hagemann, Stefan; Mäkelä, Jarmo; Martin, Gill; Roehrig, Romain; Wang, Shiyu

    2018-01-01

    The performance of updated versions of the four earth system models (ESMs) CNRM, EC-Earth, HadGEM, and MPI-ESM is assessed in comparison to their predecessor versions used in Phase 5 of the Coupled Model Intercomparison Project. The Earth System Model Evaluation Tool (ESMValTool) is applied to evaluate selected climate phenomena in the models against observations. This is the first systematic application of the ESMValTool to assess and document the progress made during an extensive model development and improvement project. This study focuses on the South Asian monsoon (SAM) and the West African monsoon (WAM), the coupled equatorial climate, and Southern Ocean clouds and radiation, which are known to exhibit systematic biases in present-day ESMs. The analysis shows that the tropical precipitation in three out of four models is clearly improved. Two of three updated coupled models show an improved representation of tropical sea surface temperatures with one coupled model not exhibiting a double Intertropical Convergence Zone (ITCZ). Simulated cloud amounts and cloud-radiation interactions are improved over the Southern Ocean. Improvements are also seen in the simulation of the SAM and WAM, although systematic biases remain in regional details and the timing of monsoon rainfall. Analysis of simulations with EC-Earth at different horizontal resolutions from T159 up to T1279 shows that the synoptic-scale variability in precipitation over the SAM and WAM regions improves with higher model resolution. The results suggest that the reasonably good agreement of modeled and observed mean WAM and SAM rainfall in lower-resolution models may be a result of unrealistic intensity distributions.

  12. Refinement and testing of analysis nudging in MPAS-A ...

    EPA Pesticide Factsheets

    The Model for Prediction Across Scales - Atmosphere (MPAS-A) is being adapted to serve as the meteorological driver for EPA’s “next-generation” air-quality model. To serve that purpose, it must be able to function in a diagnostic mode where past meteorological conditions are represented in greater detail and accuracy than can be provided by available observational data and meteorological reanalysis products. MPAS-A has been modified to allow four-dimensional data assimilation (FDDA) by nudging temperature, humidity, and wind toward target values predefined on the MPAS-A computational mesh. The technique of “analysis nudging” developed for the Penn State/NCAR Mesoscale Model – Version 4 (MM4), and later applied in the Weather Research and Forecasting model (WRF), is applied here in MPAS-A with adaptations for the unstructured Voronoi mesh used in MPAS-A. Test simulations for the periods of January and July 2013, with and without FDDA, are compared to target fields at various vertical levels and to surface-level meteorological observations. The results show the ability to follow target fields with high fidelity while still maintaining conservation of mass as in the original model. The results also show that model errors relative to observations continue to be constrained throughout the simulations using FDDA, and even show some error reduction during the first few days that could be attributable to the finer resolution of the 92-25 km computational mesh.
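
    The core of analysis nudging is a Newtonian relaxation term added to each predictive equation. A minimal sketch, assuming a toy one-dimensional temperature field and an illustrative nudging coefficient; the real implementation operates on the full MPAS-A Voronoi mesh:

      import numpy as np

      # Toy 1-D field relaxed toward a target analysis; G is the nudging
      # coefficient (s^-1). This mirrors the generic Newtonian-relaxation form
      # dx/dt = F(x) + G*(x_target - x); all values here are illustrative only.
      G = 3e-4
      dt = 60.0                                   # time step (s)
      x = np.array([280.0, 282.0, 285.0])         # model temperatures (K)
      x_target = np.array([281.0, 281.5, 284.0])  # analysis values on the mesh

      for _ in range(600):              # ten hours of model time
          physics_tendency = 0.0        # stand-in for the model's own tendency
          x = x + dt * (physics_tendency + G * (x_target - x))
      print(x)  # drawn toward the target while the model dynamics still act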

  13. Cost-effectiveness Analysis in R Using a Multi-state Modeling Survival Analysis Framework: A Tutorial.

    PubMed

    Williams, Claire; Lewsey, James D; Briggs, Andrew H; Mackay, Daniel F

    2017-05-01

    This tutorial provides a step-by-step guide to performing cost-effectiveness analysis using a multi-state modeling approach. Alongside the tutorial, we provide easy-to-use functions in the statistical package R. We argue that this multi-state modeling approach using a package such as R has advantages over approaches where models are built in a spreadsheet package. In particular, using a syntax-based approach means there is a written record of what was done and the calculations are transparent. Reproducing the analysis is straightforward, as the syntax just needs to be run again. The approach can be thought of as an alternative way to build a Markov decision-analytic model, which also has the option to use a state-arrival extended approach. In the state-arrival extended multi-state model, a covariate that represents patients' history is included, allowing the Markov property to be tested. We illustrate the building of multi-state survival models, making predictions from the models, and assessing fits. We then proceed to perform a cost-effectiveness analysis, including deterministic and probabilistic sensitivity analyses. Finally, we show how to create 2 common methods of visualizing the results, namely cost-effectiveness planes and cost-effectiveness acceptability curves. The analysis is implemented entirely within R. It is based on adaptations to functions in the existing R package mstate to accommodate parametric multi-state modeling that facilitates extrapolation of survival curves.
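
    The tutorial itself is implemented in R around the mstate package. Purely to illustrate the mechanics of the kind of Markov cohort cost-effectiveness calculation being built, here is a sketch in Python; the states, transition probabilities, costs, and utilities are invented, not taken from the tutorial.

      import numpy as np

      # Minimal Markov cohort sketch with three states (healthy, ill, dead).
      P = np.array([[0.90, 0.08, 0.02],   # rows: from-state, cols: to-state
                    [0.00, 0.85, 0.15],
                    [0.00, 0.00, 1.00]])
      cost = np.array([100.0, 1500.0, 0.0])   # per-cycle cost by state
      utility = np.array([0.95, 0.60, 0.0])   # per-cycle QALY weight by state
      discount = 0.035

      state = np.array([1.0, 0.0, 0.0])       # whole cohort starts healthy
      total_cost = total_qaly = 0.0
      for cycle in range(40):
          d = 1.0 / (1.0 + discount) ** cycle  # discount factor this cycle
          total_cost += d * state @ cost
          total_qaly += d * state @ utility
          state = state @ P                    # advance the cohort one cycle

      print(total_cost, total_qaly)  # run two strategies and form an ICER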

  14. Sparse representation based biomarker selection for schizophrenia with integrated analysis of fMRI and SNPs.

    PubMed

    Cao, Hongbao; Duan, Junbo; Lin, Dongdong; Shugart, Yin Yao; Calhoun, Vince; Wang, Yu-Ping

    2014-11-15

    Integrative analysis of multiple data types can take advantage of their complementary information and therefore may provide higher power to identify potential biomarkers that would be missed using individual data analysis. Because diverse data modalities differ in nature, integrating them is challenging. Here we address the data integration problem by developing a generalized sparse model (GSM) using weighting factors to integrate multi-modality data for biomarker selection. As an example, we applied the GSM to a joint analysis of two types of schizophrenia data sets: 759,075 SNPs and 153,594 functional magnetic resonance imaging (fMRI) voxels in 208 subjects (92 cases/116 controls). To solve this small-sample-large-variable problem, we developed a novel sparse representation based variable selection (SRVS) algorithm, with the primary aim of identifying biomarkers associated with schizophrenia. To validate the effectiveness of the selected variables, we performed multivariate classification followed by ten-fold cross-validation. We compared our proposed SRVS algorithm with an earlier sparse model based variable selection algorithm for integrated analysis, as well as with traditional statistical methods for univariate data analysis (the chi-squared test for SNP data and ANOVA for fMRI data). Results showed that our proposed SRVS method can identify novel biomarkers with stronger capability to distinguish schizophrenia patients from healthy controls. Moreover, better classification ratios were achieved using biomarkers from both types of data, suggesting the importance of integrative analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
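
    The SRVS algorithm itself is not reproduced here, but the flavour of l1-penalised sparse variable selection it builds on can be sketched with scikit-learn on a synthetic small-sample-large-variable problem; all data and settings below are invented.

      import numpy as np
      from sklearn.linear_model import Lasso

      # Generic l1-penalised variable selection: n subjects << p variables.
      rng = np.random.default_rng(0)
      n, p = 200, 2000
      X = rng.standard_normal((n, p))
      beta = np.zeros(p)
      beta[:5] = 2.0                       # only five truly informative variables
      y = X @ beta + rng.standard_normal(n)

      model = Lasso(alpha=0.3).fit(X, y)
      selected = np.flatnonzero(model.coef_)
      print(selected[:10])                 # indices of retained variables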

  15. A Model and Simple Iterative Algorithm for Redundancy Analysis.

    ERIC Educational Resources Information Center

    Fornell, Claes; And Others

    1988-01-01

    This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)

  16. Spatial Analysis of Biomass Supply: Economic and Environmental Impacts

    USDA-ARS?s Scientific Manuscript database

    The EPIC simulation model is used with SSURGO soils, field location information, and a transportation cost model to analyze potential biomass supply for a West Central MN bioenergy plant. The simulation shows the relationship between biomass price, locations of where biomass production is profitable...

  17. Validity of the Eating Attitude Test among Exercisers.

    PubMed

    Lane, Helen J; Lane, Andrew M; Matheson, Hilary

    2004-12-01

    Theory testing and construct measurement are inextricably linked. To date, no published research has examined the factorial validity of an existing eating attitude inventory for use with exercisers. The Eating Attitude Test (EAT) is a 26-item measure that yields a single index of disordered eating attitudes. The original factor analysis showed three interrelated factors: dieting behavior (13 items), oral control (7 items), and bulimia nervosa-food preoccupation (6 items). The primary purpose of the study was to examine the factorial validity of the EAT among a sample of exercisers. The second purpose was to investigate relationships between eating attitude scores and selected psychological constructs. In stage one, 598 regular exercisers completed the EAT. Confirmatory factor analysis (CFA) was used to test the single-factor model, a three-factor model, and a four-factor model that distinguished bulimia from food preoccupation. CFA showed poor fit for both the single-factor model (RCFI = 0.66, RMSEA = 0.10) and the three-factor model (RCFI = 0.74, RMSEA = 0.09), and marginal fit for the four-factor model (RCFI = 0.91, RMSEA = 0.06). Five items showed poor factor loadings. After these five items were discarded, the three models were re-analyzed. CFA results indicated that the single-factor model (RCFI = 0.76, RMSEA = 0.10) and the three-factor model (RCFI = 0.82, RMSEA = 0.08) still showed poor fit, whereas the four-factor model showed acceptable fit indices (RCFI = 0.98, RMSEA = 0.06). Stage two explored relationships between EAT scores, mood, self-esteem, and motivational indices toward exercise in terms of self-determination, enjoyment, and competence. Correlation results indicated that depressed mood scores correlated positively with bulimia and dieting scores; further, dieting was inversely related to self-determination toward exercising. Collectively, the findings suggest that a 21-item four-factor model shows promising validity coefficients among exercise participants, and that future research is needed to investigate eating attitudes among samples of exercisers. Key points: Validity of psychometric measures should be thoroughly investigated; researchers should not assume that a scale validated on one sample will show the same validity coefficients in a different population. The Eating Attitude Test is a commonly used scale, and the present study shows that a revised 21-item scale is suitable for exercisers. Researchers using the Eating Attitude Test should use the subscales of dieting, oral control, food preoccupation, and bulimia. Future research should involve qualitative techniques and interviews with exercise participants to explore the nature of eating attitudes.
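
    The RMSEA values reported above can be computed from a model's chi-square statistic with the standard formula RMSEA = sqrt(max(chi2 - df, 0) / (df (N - 1))). A small sketch with illustrative inputs; the abstract does not report the underlying chi-square or degrees of freedom:

      import math

      def rmsea(chi2, df, n):
          # Standard RMSEA formula: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
          return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

      # Illustrative values only, with n matching the stage-one sample size.
      print(round(rmsea(chi2=610.0, df=293, n=598), 3))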

  18. Thai students' mental model of chemical bonding

    NASA Astrophysics Data System (ADS)

    Sarawan, Supawadee; Yuenyong, Chokchai

    2018-01-01

    This research examined students' conceptions of chemical bonding, a concept fundamental to the subsequent learning of various other topics in chemistry. Whatever conceptions of atomic structure students hold will shape their further learning. The purpose of this study was to explore, through interviews, the conceptions held by high school chemistry students about metallic bonding and to reveal how their mental models of atomic structure vary with educational level. With this aim, a questionnaire was prepared from the literature and administered, and the answers of 10th-, 11th-, and 12th-grade students were analysed. Finally, each student was shown prompts in the form of focus cards derived from curriculum material that depicted ways in which the bonding in specific metallic substances had been represented. Students' responses revealed that learners across all three levels prefer simple, realistic mental models for metallic bonding and carry these over to chemical bonding in general.

  19. Generating a Multiphase Equation of State with Swarm Intelligence

    NASA Astrophysics Data System (ADS)

    Cox, Geoffrey

    2017-06-01

    Hydrocode calculations require knowledge of the variation of pressure of a material with density and temperature, which is given by the equation of state. An accurate model needs to account for discontinuities in energy, density and properties of a material across a phase boundary. When generating a multiphase equation of state the modeller attempts to balance the agreement between the available data for compression, expansion and phase boundary location. However, this can prove difficult because minor adjustments in the equation of state for a single phase can have a large impact on the overall phase diagram. Recently, Cox and Christie described a method for combining statistical-mechanics-based condensed matter physics models with a stochastic analysis technique called particle swarm optimisation. The models produced show good agreement with experiment over a wide range of pressure-temperature space. This talk details the general implementation of this technique, shows example results, and describes the types of analysis that can be performed with this method.
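
    Particle swarm optimisation itself is compact enough to sketch. The following minimal Python implementation uses common textbook coefficients and a stand-in misfit function; it is not the Cox and Christie implementation.

      import numpy as np

      # Minimal particle swarm optimiser for a 2-D test function.
      rng = np.random.default_rng(1)

      def cost(x):                      # stand-in for model-vs-data misfit
          return np.sum((x - 3.0) ** 2, axis=-1)

      n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
      pos = rng.uniform(-10, 10, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pbest_f = pos.copy(), cost(pos)
      gbest = pbest[np.argmin(pbest_f)]

      for _ in range(200):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          # Pull each particle toward its own best and the swarm's best.
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          f = cost(pos)
          better = f < pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
          gbest = pbest[np.argmin(pbest_f)]

      print(gbest)   # converges toward the minimum at (3, 3)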

  20. Complex networks untangle competitive advantage in Australian football

    NASA Astrophysics Data System (ADS)

    Braham, Calum; Small, Michael

    2018-05-01

    We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season; modelling the passes between players as weighted, directed edges. We show that analysis of these measures can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.

  1. Complex networks untangle competitive advantage in Australian football.

    PubMed

    Braham, Calum; Small, Michael

    2018-05-01

    We construct player-based complex network models of Australian football teams for the 2014 Australian Football League season; modelling the passes between players as weighted, directed edges. We show that analysis of these measures can give an insight into the underlying structure and strategy of Australian football teams, quantitatively distinguishing different playing styles. The relationships observed between network properties and match outcomes suggest that successful teams exhibit well-connected passing networks with the passes distributed between all 22 players as evenly as possible. Linear regression models of team scores and match margins show significant improvements in R2 and Bayesian information criterion when network measures are added to models that use conventional measures, demonstrating that network analysis measures contain useful, extra information. Several measures, particularly the mean betweenness centrality, are shown to be useful in predicting the outcomes of future matches, suggesting they measure some aspect of the intrinsic strength of teams. In addition, several local centrality measures are shown to be useful in analysing individual players' differing contributions to the team's structure.
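
    Measures such as betweenness centrality on a weighted, directed passing network are straightforward to compute with a graph library. A sketch with an invented toy network; player names and pass counts are made up, and frequent passing is converted to short "distances" so that centrality reflects heavily used players:

      import networkx as nx

      # Toy weighted, directed passing network (hypothetical players).
      passes = [("A", "B", 12), ("B", "C", 9), ("C", "A", 4),
                ("B", "D", 7), ("D", "A", 5), ("C", "D", 3)]
      G = nx.DiGraph()
      G.add_weighted_edges_from(passes)

      # Treat frequent passing as short "distance" for shortest-path measures.
      for u, v, d in G.edges(data=True):
          d["distance"] = 1.0 / d["weight"]

      bc = nx.betweenness_centrality(G, weight="distance")
      print(sorted(bc.items(), key=lambda kv: -kv[1]))  # most central first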

  2. Interacting dark energy: Dynamical system analysis

    NASA Astrophysics Data System (ADS)

    Golchin, Hanif; Jamali, Sara; Ebrahimi, Esmaeil

    We investigate the impacts of interaction between dark matter (DM) and dark energy (DE) in the context of two DE models, holographic dark energy (HDE) and ghost dark energy (GDE). Using dynamical system analysis, we obtain the cosmological consequences of several interactions, considering all relevant components of the universe, i.e. matter (dark and luminous), radiation, and DE. Studying the phase space for all interactions in detail, we show the existence of unstable matter-dominated and stable DE-dominated phases. We also show that linear interactions suffer from the absence of a standard radiation-dominated epoch; interestingly, this failure is resolved by adding nonlinear interactions to the models. We find an upper bound on the coupling constant of the interaction between DM and DE, b < 0.57 in the case of the holographic model and b < 0.61 in the case of the GDE model, for a cosmologically viable matter-dominated epoch. More specifically, this bound is vital to ensure the instability and deceleration of the matter-dominated epoch.

  3. Global sensitivity analysis of the BSM2 dynamic influent disturbance scenario generator.

    PubMed

    Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf

    2012-01-01

    This paper presents the results of a global sensitivity analysis (GSA) of a phenomenological model that generates dynamic wastewater treatment plant (WWTP) influent disturbance scenarios. This influent model is part of the Benchmark Simulation Model (BSM) family and creates realistic dry/wet weather files describing diurnal, weekend and seasonal variations through the combination of different generic model blocks, i.e. households, industry, rainfall and infiltration. The GSA is carried out by combining Monte Carlo simulations and standardized regression coefficients (SRC). Cluster analysis is then applied, classifying the influence of the model parameters into strong, medium and weak. The results show that the method is able to decompose the variance of the model predictions (R² > 0.9) satisfactorily, thus identifying the model parameters with the strongest impact on several flow rate descriptors calculated at different time resolutions. Catchment size (PE) and the production of wastewater per person equivalent (QperPE) are two parameters that strongly influence the yearly average dry weather flow rate and its variability. Wet weather conditions are mainly affected by three parameters: (1) the probability of occurrence of a rain event (Llrain); (2) the catchment size, incorporated in the model as a parameter representing the conversion from mm rain·day⁻¹ to m³·day⁻¹ (Qpermm); and (3) the quantity of rain falling on permeable areas (aH). The case study also shows that in both dry and wet weather conditions the SRC ranking changes when the time scale of the analysis is modified, thus demonstrating the potential to identify the effect of the model parameters on the fast/medium/slow dynamics of the flow rate. The paper ends with a discussion of the interpretation of GSA results and of the advantages of using synthetic dynamic flow rate data for WWTP influent scenario generation, including general suggestions on how to apply the proposed methodology to any influent generator to adapt the created time series to a modeller's demands.
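
    The Monte Carlo/SRC step is easy to sketch: sample the parameters, run the model, then regress the standardised output on the standardised inputs. A toy three-parameter example in Python; the model and values are invented, not the BSM influent generator:

      import numpy as np

      # Standardised regression coefficients (SRC) from a Monte Carlo sample.
      rng = np.random.default_rng(2)
      n = 1000
      X = rng.uniform(0.5, 1.5, (n, 3))   # sampled model parameters
      y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(n) * 0.1

      Xs = (X - X.mean(0)) / X.std(0)     # standardise inputs
      ys = (y - y.mean()) / y.std()       # and the output
      src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
      print(src)    # parameter 0 dominates; sum of src**2 approximates R^2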

  4. Improving agricultural straw preparation logistics stream in bio-methane production: experimental studies and application analysis.

    PubMed

    Tao, Luo; Junting, Pan; Xi, Meng; Hailong, Huang; Yan, Long; Xia, Xiong; Ruyi, Huang; Zili, Mei

    2017-10-01

    Long-term production in commercial straw biogas plants has been rare in China due to inefficiencies in the logistics stream. Biomass densification could be a potential solution to this issue. Therefore, we conducted a study to evaluate whether biomass densification is a more efficient and sustainable option. We performed methane production experiments to investigate fermentation characteristics of briquettes (with a new pretreatment, model II) and rubs (with a common pretreatment, model I). A 3000-m³ biogas plant was used to conduct a comparative analysis with solar emergy joules. Results showed that the methane yield of briquettes of corn stover was 66.74% higher than that of rubs, and the briquettes had better digestion performance in terms of CH₄ content, VFA, and alcohol. The two models required almost the same emergy investment input, while model II obtained a greater quantity of net emergy (16.5% higher) in comparison with model I. The net emergy yield ratio (EYR) (biogas only) of model I and model II was 0.99 and 1.67, respectively, showing less market competitiveness for commercial operations with model I. Meanwhile, the logistic costs of model II could be reduced to approximately US $34,514 annually.

  5. Multidimensional fatigue inventory and post-polio syndrome - a Rasch analysis.

    PubMed

    Dencker, Anna; Sunnerhagen, Katharina S; Taft, Charles; Lundgren-Nilsson, Åsa

    2015-02-12

    Fatigue is a common symptom in post-polio syndrome (PPS) and can have a substantial impact on patients. There is a need for validated questionnaires to assess fatigue in PPS for use in clinical practice and research. The aim of this study was to assess the validity and reliability of the Swedish version of the Multidimensional Fatigue Inventory (MFI-20) in patients with PPS using the Rasch model. A total of 231 patients diagnosed with PPS completed the Swedish MFI-20 questionnaire at post-polio out-patient clinics in Sweden. The mean age of participants was 62 years and 61% were female. Data were tested against the assumptions of the Rasch measurement model (i.e. unidimensionality of the scale, good item fit, independence of items and absence of differential item functioning). Reliability was tested with the person separation index (PSI). A transformation of the ordinal total scale scores into an interval scale for use in parametric analysis was performed. Dummy cases with minimum and maximum scoring were used for the transformation table to achieve interval scores between 20 and 100, which are the natural limits for the MFI-20 scale. An initial Rasch analysis of the full scale with 20 items showed misfit to the Rasch model (p < 0.001). Seven items showed slightly disordered thresholds, and person estimates were not significantly improved by rescoring items. Analysis of the MFI-20 scale with the 5 MFI-20 subscales as testlets showed good fit, with a non-significant χ² value (p = 0.089). The PSI for the testlet solution was 0.86. Local dependency was present in all subscales, and fit to the Rasch model was achieved by forming testlets within each subscale. The PSI ranged from 0.52 to 0.82 in the subscales. This study shows that the Swedish MFI-20 total scale and subscale scores yield valid and reliable measures of fatigue in persons with post-polio syndrome. The Rasch-transformed total scores can be used for parametric statistical analyses in future clinical studies.
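
    For reference, the dichotomous Rasch model underlying this family of analyses gives the probability of endorsing an item as a logistic function of person ability minus item difficulty. A minimal sketch; MFI-20 items are polytomous, so this is only the simplest member of the family:

      import math

      # Dichotomous Rasch model: P(X=1) = exp(theta - b) / (1 + exp(theta - b)).
      def rasch_p(theta, b):
          return 1.0 / (1.0 + math.exp(-(theta - b)))

      for theta in (-1.0, 0.0, 1.0):       # person abilities (logits)
          print(theta, round(rasch_p(theta, b=0.5), 3))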

  6. Associations between polymorphisms of the ADIPOQ gene and hypertension risk: a systematic and meta-analysis

    PubMed Central

    Fan, Weina; Qu, Xiaowei; Li, Jing; Wang, Xingning; Bai, Yanping; Cao, Qingmei; Ma, Liqun; Zhou, Xiaoyao; Zhu, Wei; Liu, Wei; Ma, Qiang

    2017-01-01

    ADIPOQ gene polymorphisms have been indicated to be associated with hypertension; however, published studies have reported inconsistent results. Eligible studies were retrieved by searching the PubMed, Embase and China National Knowledge Infrastructure databases. The case group consisted of patients with hypertension, and the control group consisted of subjects with normal blood pressure. Based on eleven published articles, involving 4837 cases and 5618 controls, the pooled results for the rs2241766 polymorphism showed increased risk in the allelic model (G vs. T: OR = 1.16, 95% CI = 1.06–1.27), recessive model (GG vs. GT + TT: OR = 1.34, 95% CI = 1.10–1.63), dominant model (GG + GT vs. TT: OR = 1.15, 95% CI = 1.02–1.30) and homozygote model (GG vs. TT: OR = 1.38, 95% CI = 1.21–1.69). In addition, the rs266729 polymorphism showed increased risk for hypertension in the recessive model (GG vs. GC + CC: OR = 1.43, 95% CI = 1.02–2.01). In the Caucasian subgroup, the rs1501299 polymorphism showed decreased risk of hypertension in the allelic model (T vs. G: OR = 0.75, 95% CI = 0.58–0.97), dominant model (TT + TG vs. GG: OR = 0.83, 95% CI = 0.71–0.98) and heterozygote model (TG vs. GG: OR = 0.82, 95% CI = 0.68–0.99). The rs2241766 polymorphism was associated with a significant increase in hypertension risk based on our analysis. Moreover, an increased risk of rs266729 in hypertension patients was also detected. Our meta-analysis suggests that the rs1501299 polymorphism may play a protective role in hypertension in the Caucasian subgroup; however, this finding requires further study. PMID:28181566
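
    The pooling step behind results like these is typically inverse-variance weighting of log odds ratios. A sketch with invented study values, not the studies in this meta-analysis:

      import math

      # Fixed-effect (inverse-variance) pooling of odds ratios on the log scale.
      # Each tuple is (OR, lower 95% CI, upper 95% CI); values are invented.
      studies = [(1.20, 1.05, 1.37), (1.10, 0.90, 1.34), (1.25, 1.02, 1.53)]

      num = den = 0.0
      for or_, lo, hi in studies:
          se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from the 95% CI
          w = 1.0 / se ** 2
          num += w * math.log(or_)
          den += w

      pooled = math.exp(num / den)
      se_pooled = math.sqrt(1.0 / den)
      ci = (math.exp(num / den - 1.96 * se_pooled),
            math.exp(num / den + 1.96 * se_pooled))
      print(round(pooled, 3), tuple(round(c, 3) for c in ci))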

  7. Pedigree data analysis with crossover interference.

    PubMed Central

    Browning, Sharon

    2003-01-01

    We propose a new method for calculating probabilities for pedigree genetic data that incorporates crossover interference using the chi-square models. Applications include relationship inference, genetic map construction, and linkage analysis. The method is based on importance sampling of unobserved inheritance patterns conditional on the observed genotype data and takes advantage of fast algorithms for no-interference models while using reweighting to allow for interference. We show that the method is effective for arbitrarily many markers with small pedigrees. PMID:12930760

  8. Bifurcation analysis in SIR epidemic model with treatment

    NASA Astrophysics Data System (ADS)

    Balamuralitharan, S.; Radha, M.

    2018-04-01

    We investigated bifurcations in a nonlinear SIR epidemic model with treatment. The treatment rate is assumed to be proportional to the number of infectives when this number is below a threshold, and constant once the number of infectives reaches that threshold. We analyse the transcritical bifurcation that occurs at the disease-free equilibrium point and the Hopf bifurcation that occurs at the endemic equilibrium point. Using MATLAB, we illustrate the bifurcation at the disease-free equilibrium point.
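
    The model structure described can be sketched numerically. Below is a toy SIR system with a saturated treatment term, with illustrative parameter values rather than those of the paper:

      import numpy as np
      from scipy.integrate import odeint

      # SIR with saturated treatment: treatment grows with I up to a capacity
      # I0, then stays constant. All parameter values are illustrative.
      beta, gamma, r, I0 = 0.5, 0.1, 0.08, 0.1

      def treatment(I):
          return r * I if I < I0 else r * I0

      def rhs(y, t):
          S, I, R = y
          dS = -beta * S * I
          dI = beta * S * I - gamma * I - treatment(I)
          dR = gamma * I + treatment(I)
          return [dS, dI, dR]

      t = np.linspace(0, 200, 2001)
      sol = odeint(rhs, [0.99, 0.01, 0.0], t)
      print(sol[-1])   # long-run state; vary beta to see the equilibria change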

  9. Adaptive cancellation of motion artifact in wearable biosensors.

    PubMed

    Yousefi, Rasoul; Nourani, Mehrdad; Panahi, Issa

    2012-01-01

    The performance of wearable biosensors is strongly influenced by motion artifact. In this paper, a model is proposed for the analysis of motion artifact in wearable photoplethysmography (PPG) sensors. Using this model, we propose a robust real-time technique to estimate the fundamental frequency and generate a noise reference signal. A Least Mean Square (LMS) adaptive noise canceler is then designed and validated using our synthetic noise generator. Analysis and results show that the proposed noise-cancellation technique achieves promising performance.
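
    The LMS noise canceler itself is a short algorithm. A textbook sketch with a synthetic sinusoidal "pulse" and a correlated noise reference; the filter length, step size, and signals are illustrative, not the authors' design:

      import numpy as np

      # LMS adaptive noise canceler: primary = signal + coloured noise, with a
      # noise reference channel correlated with the noise in the primary.
      rng = np.random.default_rng(3)
      n = 5000
      t = np.arange(n)
      signal = np.sin(2 * np.pi * t / 100)     # stand-in for the PPG pulse
      ref = rng.standard_normal(n)             # noise reference channel
      noise = np.convolve(ref, [0.6, 0.3, 0.1])[:n]  # causal FIR-coloured noise
      primary = signal + noise

      L, mu = 8, 0.01                          # filter length and step size
      w = np.zeros(L)
      out = np.zeros(n)
      for k in range(L, n):
          x = ref[k - L + 1:k + 1][::-1]       # taps ref[k], ..., ref[k-L+1]
          e = primary[k] - w @ x               # error = cleaned output
          w = w + 2 * mu * e * x               # LMS weight update
          out[k] = e

      print(np.mean((out[1000:] - signal[1000:]) ** 2))  # residual noise power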

  10. Two Paradoxes in Linear Regression Analysis

    PubMed Central

    FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong

    2016-01-01

    Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214

  11. Cost utility of maintenance treatment of recurrent depression with sertraline versus episodic treatment with dothiepin.

    PubMed

    Hatziandreu, E J; Brown, R E; Revicki, D A; Turner, R; Martindale, J; Levine, S; Siegel, J E

    1994-03-01

    The objective of this study was to model, for patients at risk of recurrent depression, the cost-utility of maintenance therapy with sertraline compared with treatment of acute episodes with dothiepin ('episodic treatment'). Using clinical decision analysis techniques, a Markov state-transition model was constructed to estimate the lifetime costs and quality-adjusted life-years (QALYs) of the 2 therapeutic strategies. The model follows 2 cohorts of 35-year-old women at high risk for recurrent depression over their lifetimes. Model construction and the relevant data (probabilities) for performing the analysis were based on existing clinical knowledge. Two physician panels were used to obtain estimates of recurrence probabilities not available in the literature, health utilities, and resource consumption. Costs were obtained from published sources. The baseline analysis showed that it costs 2172 British pounds sterling ($US3692, 1991 currency) to gain an additional QALY with sertraline maintenance treatment. Sensitivity analysis showed that the incremental cost-utility ratio ranged from 557 British pounds sterling to 5260 British pounds sterling per QALY. Overall, the resulting ratios are well within the range of cost-utility ratios that support the adoption and appropriate utilisation of a technology. Based on the study assumptions, long term maintenance treatment with sertraline appears to be a clinically and economically justified choice for patients at high risk of recurrent depression.

  12. An Efficient Analysis Methodology for Fluted-Core Composite Structures

    NASA Technical Reports Server (NTRS)

    Oremont, Leonard; Schultz, Marc R.

    2012-01-01

    The primary loading condition in launch-vehicle barrel sections is axial compression, and it is therefore important to understand the compression behavior of any structures, structural concepts, and materials considered in launch-vehicle designs. This understanding will necessarily come from a combination of test and analysis. However, certain potentially beneficial structures and structural concepts do not lend themselves to commonly used simplified analysis methods, and therefore innovative analysis methodologies must be developed if these structures and structural concepts are to be considered. This paper discusses such an analysis technique for the fluted-core sandwich composite structural concept. The presented technique is based on commercially available finite-element codes and uses shell elements to capture detailed mechanical behavior that would normally require solid elements. The shell thicknesses and offsets are parameterized, and the parameters are adjusted through a heuristic procedure until the shell-only model matches the mechanical behavior of a more detailed shell-and-solid model. Additionally, the detailed shell-and-solid model can be strategically placed in a larger, global shell-only model to capture important local behavior. Comparisons between shell-only models, experiments, and more detailed shell-and-solid models show excellent agreement. The discussed analysis methodology, though only discussed in the context of fluted-core composites, is widely applicable to other concepts.

  13. Interaction of Soil Heavy Metal Pollution with Industrialisation and the Landscape Pattern in Taiyuan City, China

    PubMed Central

    Liu, Yong; Su, Chao; Zhang, Hong; Li, Xiaoting; Pei, Jingfei

    2014-01-01

    Many studies have indicated that industrialisation and urbanisation have caused serious soil heavy metal pollution since the industrial age. However, few previous studies have conducted a combined analysis of the landscape pattern, urbanisation, industrialisation, and heavy metal pollution. This paper aimed to explore the relationships of heavy metals in the soil (Pb, Cu, Ni, As, Cd, Cr, Hg, and Zn) with the landscape pattern, industrialisation, and urbanisation in Taiyuan city using multivariate analysis. The multivariate analysis included correlation analysis, analysis of variance (ANOVA), independent-sample t tests, and principal component analysis (PCA). A geographic information system (GIS) was also applied to determine the spatial distribution of the heavy metals. The spatial distribution maps showed that heavy metal pollution of the soil was more serious in the centre of the study area. The results of the multivariate analysis indicated that the correlations among heavy metals were significant, and that industrialisation could significantly affect the concentrations of some heavy metals. Landscape diversity showed a significant negative correlation with the heavy metal concentrations. The PCA showed that a two-factor model for heavy metal pollution, industrialisation, and the landscape pattern could effectively demonstrate the relationships between these variables. The model explained 86.71% of the total variance of the data. The first factor was mainly loaded with the comprehensive pollution index (P), and the second factor was primarily loaded with landscape diversity and dominance (H and D). An ordination of the 80 samples revealed the pollution pattern across all samples. The results revealed that local industrialisation caused heavy metal pollution of the soil, but such pollution responded negatively to the landscape pattern. The results of the study could provide a basis for agricultural, suburban, and urban planning. PMID:25251460

  14. Interaction of soil heavy metal pollution with industrialisation and the landscape pattern in Taiyuan city, China.

    PubMed

    Liu, Yong; Su, Chao; Zhang, Hong; Li, Xiaoting; Pei, Jingfei

    2014-01-01

    Many studies have indicated that industrialisation and urbanisation have caused serious soil heavy metal pollution since the industrial age. However, few previous studies have conducted a combined analysis of the landscape pattern, urbanisation, industrialisation, and heavy metal pollution. This paper aimed to explore the relationships of heavy metals in the soil (Pb, Cu, Ni, As, Cd, Cr, Hg, and Zn) with the landscape pattern, industrialisation, and urbanisation in Taiyuan city using multivariate analysis. The multivariate analysis included correlation analysis, analysis of variance (ANOVA), independent-sample t tests, and principal component analysis (PCA). A geographic information system (GIS) was also applied to determine the spatial distribution of the heavy metals. The spatial distribution maps showed that heavy metal pollution of the soil was more serious in the centre of the study area. The results of the multivariate analysis indicated that the correlations among heavy metals were significant, and that industrialisation could significantly affect the concentrations of some heavy metals. Landscape diversity showed a significant negative correlation with the heavy metal concentrations. The PCA showed that a two-factor model for heavy metal pollution, industrialisation, and the landscape pattern could effectively demonstrate the relationships between these variables. The model explained 86.71% of the total variance of the data. The first factor was mainly loaded with the comprehensive pollution index (P), and the second factor was primarily loaded with landscape diversity and dominance (H and D). An ordination of the 80 samples revealed the pollution pattern across all samples. The results revealed that local industrialisation caused heavy metal pollution of the soil, but such pollution responded negatively to the landscape pattern. The results of the study could provide a basis for agricultural, suburban, and urban planning.
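
    The two-factor PCA step can be sketched in a few lines. The data below are fabricated to mimic a pollution index that varies inversely with landscape diversity; this is an illustration of the method, not the paper's data:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      # Two-factor PCA on fabricated sample-by-variable data.
      rng = np.random.default_rng(4)
      n = 80
      diversity = rng.normal(0, 1, n)
      pollution = -0.7 * diversity + rng.normal(0, 0.5, n)  # negative relation
      dominance = diversity + rng.normal(0, 0.3, n)
      X = np.column_stack([pollution, diversity, dominance])

      pca = PCA(n_components=2).fit(StandardScaler().fit_transform(X))
      print(pca.explained_variance_ratio_)   # share of variance per factor
      print(pca.components_)                 # loadings of each variable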

  15. IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.

    PubMed

    Huang, Lihan

    2017-12-04

    The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for the determination of kinetic parameters in predictive microbiology. The algorithm is wrapped in user-friendly graphical user interfaces (GUIs) to form a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide users through the data analysis process and the proper selection of initial parameters for different combinations of mathematical models. The software performs one-step kinetic analysis, directly constructing tertiary models by minimizing the global error between the experimental observations and the mathematical models. The current version is specifically designed for constructing tertiary models with time and temperature as the independent model parameters. The software was tested with a total of 9 different combinations of primary and secondary models for the growth and survival of various microorganisms. The results of the data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.

  16. Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis

    PubMed Central

    Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas

    2016-01-01

    The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select the covariates most relevant to the model based on the observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246
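
    The effect of an ARD prior is easy to demonstrate in a plain regression setting: each coefficient gets its own precision, and irrelevant ones are shrunk toward zero. A simple stand-in for the paper's mixed-effects shape model, using scikit-learn's ARDRegression on fabricated data:

      import numpy as np
      from sklearn.linear_model import ARDRegression

      # ARD-style covariate selection on a toy regression problem.
      rng = np.random.default_rng(7)
      n, p = 150, 10
      X = rng.standard_normal((n, p))
      y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.standard_normal(n) * 0.5

      ard = ARDRegression().fit(X, y)
      print(np.round(ard.coef_, 2))   # only covariates 0 and 3 stay large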

  17. Analysis and test evaluation of the dynamic response and stability of three advanced turboprop models

    NASA Technical Reports Server (NTRS)

    Bansal, P. N.; Arseneaux, P. J.; Smith, A. F.; Turnberg, J. E.; Brooks, B. M.

    1985-01-01

    Results of dynamic response and stability wind tunnel tests of three 62.2 cm (24.5 in) diameter models of the Prop-Fan, advanced turboprop, are presented. Measurements of dynamic response were made with the rotors mounted on an isolated nacelle, with varying tilt for nonuniform inflow. One model was also tested using a semi-span wing and fuselage configuration for response to realistic aircraft inflow. Stability tests were performed using tunnel turbulence or a nitrogen jet for excitation. Measurements are compared with predictions made using beam analysis methods for the model with straight blades, and finite element analysis methods for the models with swept blades. Correlations between measured and predicted rotating blade natural frequencies for all the models are very good. The 1P dynamic response of the straight blade model is reasonably well predicted. The 1P response of the swept blades is underpredicted and the wing induced response of the straight blade is overpredicted. Two models did not flutter, as predicted. One swept blade model encountered an instability at a higher RPM than predicted, showing predictions to be conservative.

  18. Further evidence for the increased power of LOD scores compared with nonparametric methods.

    PubMed

    Durner, M; Vieland, V J; Greenberg, D A

    1999-01-01

    In genetic analysis of diseases in which the underlying model is unknown, "model free" methods, such as affected sib pair (ASP) tests, are often preferred over LOD-score methods, although LOD-score methods under the correct or even approximately correct model are more powerful than ASP tests. However, there might be circumstances in which nonparametric methods outperform LOD-score methods. Recently, Dizier et al. reported that, in some complex two-locus (2L) models, LOD-score methods with segregation-analysis-derived parameters had less power to detect linkage than ASP tests. We investigated whether these particular models in fact represent a situation in which ASP tests are more powerful than LOD scores. We simulated data according to the parameters specified by Dizier et al. and analyzed the data using (a) single-locus (SL) LOD-score analysis performed twice, under a simple dominant and a recessive mode of inheritance (MOI), (b) ASP methods, and (c) nonparametric linkage (NPL) analysis. We show that SL analysis performed twice and corrected for the type I error increase due to multiple testing yields almost as much linkage information as does an analysis under the correct 2L model and is more powerful than either the ASP method or the NPL method. We demonstrate that, even for complex genetic models, the most important condition for linkage analysis is that the assumed MOI at the disease locus being tested is approximately correct, not that the inheritance of the disease per se is correctly specified. In the analysis by Dizier et al., segregation analysis led to estimates of dominance parameters that were grossly misspecified for the locus tested in those models in which ASP tests appeared to be more powerful than LOD-score analyses.

  19. A meta-model analysis of a finite element simulation for defining poroelastic properties of intervertebral discs.

    PubMed

    Nikkhoo, Mohammad; Hsu, Yu-Chun; Haghpanahi, Mohammad; Parnianpour, Mohamad; Wang, Jaw-Lin

    2013-06-01

    Finite element analysis is an effective tool for evaluating the material properties of living tissue. For an iterative optimization procedure, finite element analysis usually needs many simulations to reach a reasonable solution. Meta-model analysis of finite element simulation can be used to reduce the computation for a structure with complex geometry or a material with composite constitutive equations. The intervertebral disc is a complex, heterogeneous, and hydrated porous structure. A poroelastic finite element model can be used to observe fluid transfer, pressure deviation, and other properties within the disc. Defining reasonable poroelastic material properties of the anulus fibrosus and nucleus pulposus is critical for the quality of the simulation. We developed a material property updating protocol, essentially a fitting algorithm consisting of finite element simulations and a quadratic response surface regression. This protocol was used to find the material properties, such as the hydraulic permeability, elastic modulus, and Poisson's ratio, of intact and degenerated porcine discs. The results showed that the in vitro disc experimental deformations were well fitted with a limited number of finite element simulations and a quadratic response surface regression. The comparison of material properties of intact and degenerated discs showed that the hydraulic permeability significantly decreased but Poisson's ratio significantly increased for the degenerated discs. This study shows that the developed protocol is efficient and effective in defining material properties of a complex structure such as the intervertebral disc.
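
    The quadratic response-surface idea can be sketched on a one-parameter toy problem: evaluate the misfit at a few trial parameter values, fit a parabola, and move to its vertex. Everything below is illustrative, not the disc model:

      import numpy as np

      def fe_simulation_misfit(k):
          # Stand-in for a finite element run compared against experiment;
          # the true optimum here is k = 2.0 by construction.
          return (k - 2.0) ** 2 + 0.01 * np.sin(20 * k)

      k_trials = np.array([1.0, 1.5, 2.5, 3.0, 3.5])
      misfits = np.array([fe_simulation_misfit(k) for k in k_trials])

      a, b, c = np.polyfit(k_trials, misfits, 2)  # quadratic response surface
      k_next = -b / (2 * a)                       # vertex of the fitted parabola
      print(k_next)   # close to 2.0; iterate with new trials around this point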

  20. Control for Population Structure and Relatedness for Binary Traits in Genetic Association Studies via Logistic Mixed Models.

    PubMed

    Chen, Han; Wang, Chaolong; Conomos, Matthew P; Stilp, Adrienne M; Li, Zilin; Sofer, Tamar; Szpiro, Adam A; Chen, Wei; Brehm, John M; Celedón, Juan C; Redline, Susan; Papanicolaou, George J; Thornton, Timothy A; Laurie, Cathy C; Rice, Kenneth; Lin, Xihong

    2016-04-07

    Linear mixed models (LMMs) are widely used in genome-wide association studies (GWASs) to account for population structure and relatedness, for both continuous and binary traits. Motivated by the failure of LMMs to control type I errors in a GWAS of asthma, a binary trait, we show that LMMs are generally inappropriate for analyzing binary traits when population stratification leads to violation of the LMM's constant-residual variance assumption. To overcome this problem, we develop a computationally efficient logistic mixed model approach for genome-wide analysis of binary traits, the generalized linear mixed model association test (GMMAT). This approach fits a logistic mixed model once per GWAS and performs score tests under the null hypothesis of no association between a binary trait and individual genetic variants. We show in simulation studies and real data analysis that GMMAT effectively controls for population structure and relatedness when analyzing binary traits in a wide variety of study designs. Copyright © 2016 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  1. Potential fitting biases resulting from grouping data into variable width bins

    NASA Astrophysics Data System (ADS)

    Towers, S.

    2014-07-01

    When reading peer-reviewed scientific literature describing any analysis of empirical data, it is natural and correct to proceed with the underlying assumption that experimenters have made good-faith efforts to ensure that their analyses yield unbiased results. However, particle physics experiments are expensive and time consuming to carry out, thus if an analysis has inherent bias (even if unintentional), much money and effort can be wasted trying to replicate or understand the results, particularly if the analysis is fundamental to our understanding of the universe. In this note we discuss the significant biases that can result from data binning schemes. As we will show, if data are binned such that they provide the best comparison to a particular (but incorrect) model, the resulting model parameter estimates when fitting to the binned data can be significantly biased, leading us to accept the model hypothesis too often when it is not in fact true. When using binned likelihood or least squares methods there is of course no a priori requirement that bin widths be constant, but we show that fitting to data grouped into variable width bins is particularly prone to produce biased results if the bin boundaries are chosen to optimize the comparison of the binned data to a wrong model. The degree of bias that can be achieved simply with variable binning can be surprisingly large. Fitting the data with an unbinned likelihood method, when possible to do so, is the best way for researchers to show that their analyses are not biased by binning effects. Failing that, equal bin widths should be employed as a cross-check of the fitting analysis whenever possible.
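
    The effect is easy to reproduce. In the sketch below, an unbinned maximum-likelihood estimate of an exponential rate is compared with binned least-squares fits under two binning schemes; the data and bin edges are invented, and the point is only that the binned estimate moves with the bin choice:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(5)
      data = rng.exponential(scale=2.0, size=500)

      print(1.0 / data.mean())   # unbinned MLE of the rate, binning-free

      def model(x, rate):
          return rate * np.exp(-rate * x)

      for edges in (np.linspace(0, 10, 11),              # equal-width bins
                    np.array([0, 0.5, 1, 2, 4, 10.0])):  # variable-width bins
          counts, edges = np.histogram(data, bins=edges, density=True)
          centers = 0.5 * (edges[:-1] + edges[1:])
          (rate,), _ = curve_fit(model, centers, counts, p0=[1.0])
          print(edges.size - 1, rate)  # estimate shifts with the binning scheme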

  2. Identification of anisodamine tablets by Raman and near-infrared spectroscopy with chemometrics.

    PubMed

    Li, Lian; Zang, Hengchang; Li, Jun; Chen, Dejun; Li, Tao; Wang, Fengshan

    2014-06-05

    Vibrational spectroscopy, including Raman and near-infrared (NIR) spectroscopy, has become an attractive tool for pharmaceutical analysis. In this study, effective calibration models were built for the identification of anisodamine tablets and their counterfeits and for discrimination between manufacturing plants, based on Raman and NIR spectroscopy, respectively. Counterfeit anisodamine tablets were identified by Raman spectroscopy with the correlation coefficient method, and the results showed that the predictive accuracy was 100%. The genuine anisodamine tablets from 5 different manufacturing plants were distinguished by NIR spectroscopy using partial least squares discriminant analysis (PLS-DA) models based on the interval principal component analysis (iPCA) method; the recognition rate and rejection rate were both 100%. In conclusion, Raman and NIR spectroscopy combined with chemometrics are feasible and potentially valuable tools for rapid pharmaceutical tablet discrimination. Copyright © 2014 Elsevier B.V. All rights reserved.
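
    PLS-DA in its common "PLS regression on class indicators" form can be sketched with scikit-learn. The fabricated spectra below carry a class-specific band; the interval-PCA variable selection step of the paper is not reproduced:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      # PLS-DA on fabricated spectra for two "manufacturing plants".
      rng = np.random.default_rng(6)
      n, wavelengths = 60, 300
      X = rng.standard_normal((n, wavelengths))
      y = np.repeat([0, 1], n // 2)          # two classes
      X[y == 1, 100:110] += 1.0              # class-specific spectral band

      pls = PLSRegression(n_components=3).fit(X, y.astype(float))
      pred = (pls.predict(X).ravel() > 0.5).astype(int)  # threshold the score
      print((pred == y).mean())              # training recognition rate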

  3. A spatiotemporal analysis of U.S. station temperature trends over the last century

    NASA Astrophysics Data System (ADS)

    Capparelli, V.; Franzke, C.; Vecchio, A.; Freeman, M. P.; Watkins, N. W.; Carbone, V.

    2013-07-01

    This study presents a nonlinear spatiotemporal analysis of 1167 station temperature records from the United States Historical Climatology Network covering the period from 1898 through 2008. We use the empirical mode decomposition method to extract the generally nonlinear trends of each station. The statistical significance of each trend is assessed against three null models of the background climate variability, represented by stochastic processes of increasing temporal correlation length. We find strong evidence that more than 50% of all stations experienced a significant trend over the last century with respect to all three null models. A spatiotemporal analysis reveals a significant cooling trend in the South-East and significant warming trends in the rest of the contiguous U.S., and shows that the warming trend appears to have migrated equatorward. These results illustrate the complex spatiotemporal evolution of climate change at local scales.

  4. [Cost-effectiveness and cost-benefit analysis on strategy for preventing mother-to-child transmission of hepatitis B virus].

    PubMed

    Cai, Y L; Zhang, S X; Yang, P C; Lin, Y

    2016-06-01

    This study used cost-benefit analysis (CBA), cost-effectiveness analysis (CEA) and quantitative optimization analysis to understand the economic benefits and outcomes of a strategy for preventing mother-to-child transmission (PMTCT) of hepatitis B virus. Based on a decision-analytic Markov model of hepatitis B immunization, the PMTCT strategy and a universal vaccination strategy were compared. Relevant parameters for Shenzhen were introduced into the model, and a 2013 birth cohort was set as the study population. The net present value (NPV), benefit-cost ratio (BCR) and incremental cost-effectiveness ratio (ICER) were calculated, and the differences between CBA and CEA were compared. A decision tree was built as the decision analysis model for hepatitis B immunization, and three Markov models were used to simulate outcomes after implementation of the vaccination programs. In 2013, the PMTCT strategy in Shenzhen showed a net gain of 38 097.51 Yuan per person, with a BCR of 14.37; the universal vaccination strategy showed a net gain of 37 083.03 Yuan per person, with a BCR of 12.07. These data indicate that the PMTCT strategy outperformed universal vaccination and would yield greater economic benefit. Compared with the universal vaccination program, the PMTCT strategy would save a further 85 100.00 Yuan in QALY gains per person, and thus appeared more cost-effective. In the CBA and CEA of the hepatitis B immunization programs, the immunization coverage rate and the costs of hepatitis B related diseases were the most important influencing factors. Joint variation of all parameters in the CEA confirmed that the PMTCT strategy was more cost-effective. The PMTCT strategy gained more economic benefit and greater health effects; however, its cost exceeded that of the universal vaccination program, so attention should be paid to the implementation of both the PMTCT strategy and the universal vaccination program. CBA appears suitable for strategy optimization while CEA is better for strategy evaluation; combining the two methods would facilitate economic evaluation.

  5. A metabolic fingerprinting approach based on selected ion flow tube mass spectrometry (SIFT-MS) and chemometrics: A reliable tool for Mediterranean origin-labeled olive oils authentication.

    PubMed

    Bajoub, Aadil; Medina-Rodríguez, Santiago; Ajal, El Amine; Cuadros-Rodríguez, Luis; Monasterio, Romina Paula; Vercammen, Joeri; Fernández-Gutiérrez, Alberto; Carrasco-Pancorbo, Alegría

    2018-04-01

    Selected ion flow tube mass spectrometry (SIFT-MS) in combination with chemometrics was used to authenticate the geographical origin of Mediterranean virgin olive oils (VOOs) produced under geographical origin labels. In particular, 130 oil samples from six different Mediterranean regions (Kalamata (Greece); Toscana (Italy); Meknès and Tyout (Morocco); and Priego de Córdoba and Baena (Spain)) were considered. The headspace volatile fingerprints were measured by SIFT-MS in full scan with H₃O⁺, NO⁺ and O₂⁺ as precursor ions and the results were subjected to chemometric treatments. Principal Component Analysis (PCA) was used for preliminary multivariate data analysis and Partial Least Squares-Discriminant Analysis (PLS-DA) was applied to build different models (considering the three reagent ions) to classify samples according to the country of origin and regions (within the same country). The multi-class PLS-DA models showed very good performance in terms of fitting accuracy (98.90-100%) and prediction accuracy (96.70-100% accuracy for cross validation and 97.30-100% accuracy for external validation (test set)). Considering the two-class PLS-DA models, the one for the Spanish samples showed 100% sensitivity, specificity and accuracy in calibration, cross validation and external validation; the model for Moroccan oils also showed very satisfactory results (with perfect scores for almost every parameter in all the cases). Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Model Construction and Analysis of Respiration in Halobacterium salinarum.

    PubMed

    Talaue, Cherryl O; del Rosario, Ricardo C H; Pfeiffer, Friedhelm; Mendoza, Eduardo R; Oesterhelt, Dieter

    2016-01-01

    The archaeon Halobacterium salinarum can produce energy using three different processes, namely photosynthesis, oxidative phosphorylation and fermentation of arginine, and is thus a model organism in bioenergetics. Compared to its bacteriorhodopsin-driven photosynthesis, less attention has been devoted to modeling its respiratory pathway. We created a system of ordinary differential equations that models its oxidative phosphorylation. The model consists of the electron transport chain, the ATP synthase, the potassium uniport and the sodium-proton antiport. By fitting the model parameters to experimental data, we show that the model can explain data on proton motive force generation, ATP production, and the charge balancing of ions between the sodium-proton antiporter and the potassium uniport. We performed sensitivity analysis of the model parameters to determine how the model will respond to perturbations in parameter values. The model and the parameters we derived provide a resource that can be used for analytical studies of the bioenergetics of H. salinarum.

  7. Fusion of spectral models for dynamic modeling of sEMG and skeletal muscle force.

    PubMed

    Potluri, Chandrasekhar; Anugolu, Madhavi; Chiu, Steve; Urfer, Alex; Schoen, Marco P; Naidu, D Subbaram

    2012-01-01

    In this paper, we present a method for combining spectral models using a Kullback Information Criterion (KIC) data fusion algorithm. Surface electromyographic (sEMG) signals and their corresponding skeletal muscle force signals are acquired from three sensors and pre-processed using a half-Gaussian filter and a Chebyshev Type-II filter, respectively. Spectral models, namely Spectral Analysis (SPA), Empirical Transfer Function Estimate (ETFE) and Spectral Analysis with Frequency Dependent Resolution (SPFRD), are extracted with the sEMG signals as input and skeletal muscle force as output. These signals are then employed in a System Identification (SI) routine to establish the dynamic models relating input and output. After the individual models are extracted, they are fused by a probability-based KIC fusion algorithm. The results show that the SPFRD spectral models outperform the SPA and ETFE models in modeling the frequency content of the sEMG/skeletal muscle force data.

  8. Neural system modeling and simulation using Hybrid Functional Petri Net.

    PubMed

    Tang, Yin; Wang, Fei

    2012-02-01

    The Petri net formalism has proved to be powerful for biological modeling: it offers an intuitive graphical representation and combines methods from classical systems biology with discrete modeling techniques. Hybrid Functional Petri Net (HFPN) was proposed specifically for biological system modeling, and a number of well-constructed HFPN models have yielded interesting results. In this paper, we propose a method to represent neural system behavior, covering both biochemistry and electrochemistry, using the Petri net formalism. We built an HFPN model of the adrenergic system and performed quantitative analysis. Our simulation results match the biological data well, showing that the model is effective. Predictions made with the model further demonstrate the modeling power of HFPN and improve the understanding of the adrenergic system. The model file and additional results with their analysis are available in our supplementary material.
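
    For readers unfamiliar with the formalism, here is a minimal discrete Petri net firing loop; HFPN extends this idea with continuous places and transitions and with firing speeds. The places, transitions and arc weights below are invented for illustration.

```python
# Minimal discrete Petri net: fire any enabled transition until none remains.
places = {"ligand": 5, "receptor": 3, "complex": 0, "response": 0}
transitions = [
    {"in": {"ligand": 1, "receptor": 1}, "out": {"complex": 1}},
    {"in": {"complex": 1}, "out": {"receptor": 1, "response": 1}},
]

def enabled(t):
    return all(places[p] >= n for p, n in t["in"].items())

for _ in range(20):
    fired = next((t for t in transitions if enabled(t)), None)
    if fired is None:
        break
    for p, n in fired["in"].items():   # consume input tokens
        places[p] -= n
    for p, n in fired["out"].items():  # produce output tokens
        places[p] += n

print(places)  # final marking
```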

  9. A complete graphical criterion for the adjustment formula in mediation analysis.

    PubMed

    Shpitser, Ilya; VanderWeele, Tyler J

    2011-03-04

    Various assumptions have been used in the literature to identify natural direct and indirect effects in mediation analysis. These effects are of interest because they allow for effect decomposition of a total effect into a direct and indirect effect even in the presence of interactions or non-linear models. In this paper, we consider the relation and interpretation of various identification assumptions in terms of causal diagrams interpreted as a set of non-parametric structural equations. We show that for such causal diagrams, two sets of assumptions for identification that have been described in the literature are in fact equivalent in the sense that if either set of assumptions holds for all models inducing a particular causal diagram, then the other set of assumptions will also hold for all models inducing that diagram. We moreover build on prior work concerning a complete graphical identification criterion for covariate adjustment for total effects to provide a complete graphical criterion for using covariate adjustment to identify natural direct and indirect effects. Finally, we show that this criterion is equivalent to the two sets of independence assumptions used previously for mediation analysis.
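
    For reference, natural direct and indirect effects of the kind discussed are identified, under such assumptions, by the standard mediation (adjustment) formula; the notation below (treatment levels a_1 and a_0, mediator M with values m, covariate set C with values c) is generic rather than taken from the paper:

```latex
\begin{aligned}
\mathrm{NDE} &= \sum_{c,m}\bigl\{\,\mathbb{E}[Y \mid a_1, m, c] - \mathbb{E}[Y \mid a_0, m, c]\,\bigr\}\,P(m \mid a_0, c)\,P(c),\\
\mathrm{NIE} &= \sum_{c,m}\mathbb{E}[Y \mid a_1, m, c]\,\bigl\{\,P(m \mid a_1, c) - P(m \mid a_0, c)\,\bigr\}\,P(c),
\end{aligned}
```

    so that the total effect decomposes as NDE + NIE.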

  10. Validation of multi-mission satellite altimetry for the Baltic Sea region

    NASA Astrophysics Data System (ADS)

    Kudryavtseva, Nadia; Soomere, Tarmo; Giudici, Andrea

    2016-04-01

    Currently, three sources of wave data are available to the research community: buoys, modelling, and satellite altimetry. Buoy measurements provide high-quality time series of wave properties, but buoys are deployed in only a few locations. Wave modelling covers large domains and gives good results for open-sea conditions; its limitation is that the results depend on wind quality and the assumptions built into the model. Satellite altimetry in many cases provides homogeneous data over large sea areas with appreciable spatial and temporal resolution, but its use is problematic in coastal areas and partially ice-covered water bodies. These limitations can be circumvented by careful analysis of the geometry of the basin, the ice conditions and the spatial coverage of each altimetry snapshot. In this poster, for the first time, we discuss a validation of 30 years of multi-mission altimetry covering the whole Baltic Sea. We analysed data from the RADS database (Scharroo et al. 2013) spanning 1985 to 2015. To assess the limitations of the satellite altimeter data quality, the data were cross-matched with available wave measurements from buoys of the Swedish Meteorological and Hydrological Institute and the Finnish Meteorological Institute. The altimeter-measured significant wave heights showed very good correspondence with the wave buoys. We show that data with backscatter coefficients greater than 13.5 and high errors in significant wave height and range should be excluded. We also examined the effect of ice cover and distance from land on the altimetry measurements. The analysis of cross-matches between the satellite altimetry data and the buoy measurements shows that the data are corrupted only in the nearshore domain, within 0.2 degrees of the coast. The statistical analysis showed a significant decrease in wave heights for sea areas with ice concentration greater than 30 percent. We also checked and corrected the data for biases between different missions. This analysis provides a unique, uniform database of satellite altimetry measurements over the whole Baltic Sea, which can be further used for finding biases in wave modelling and for studies of wave climatology. The database is available upon request.
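
    The exclusion rules quoted above amount to a simple quality-control filter; a sketch with hypothetical column names follows (the thresholds are the ones reported in the abstract):

```python
# Quality-control filter implied by the reported thresholds; column names
# are hypothetical, data are synthetic.
import pandas as pd

df = pd.DataFrame({
    "swh": [1.2, 2.5, 0.8],              # significant wave height, m
    "sigma0": [11.0, 14.2, 12.1],        # backscatter coefficient
    "dist_coast_deg": [0.5, 0.15, 1.0],  # distance from the coast, degrees
    "ice_conc_pct": [0, 10, 45],         # ice concentration, percent
})
good = df[(df["sigma0"] <= 13.5)
          & (df["dist_coast_deg"] >= 0.2)
          & (df["ice_conc_pct"] <= 30)]
print(good)
```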

  11. Scale/Analytical Analyses of Freezing and Convective Melting with Internal Heat Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali S. Siahpush; John Crepeau; Piyush Sabharwall

    2013-07-01

    Using a scale/analytical analysis approach, we model phase change (melting) for pure materials with constant internal heat generation at small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid region. The model is applied to a constant surface temperature boundary condition, where the melting temperature is greater than the surface temperature, in a cylindrical geometry; a constant-heat-flux boundary condition in the same geometry is also considered. We show the time scales over which conduction and convection heat transfer dominate.
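
    For reference, the Stefan number mentioned above is conventionally the ratio of sensible to latent heat; the symbols below are the usual ones, not necessarily the paper's:

```latex
\mathrm{Ste} = \frac{c_p\,\Delta T}{h_{sl}}
```

    where c_p is the specific heat, ΔT the characteristic temperature difference, and h_sl the latent heat of fusion.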

  12. Finite element fatigue analysis of rectangular clutch spring of automatic slack adjuster

    NASA Astrophysics Data System (ADS)

    Xu, Chen-jie; Luo, Zai; Hu, Xiao-feng; Jiang, Wen-song

    2015-02-01

    The failure of the rectangular clutch spring of an automatic slack adjuster directly affects the adjuster's operation. We establish a structural mechanics model of the rectangular clutch spring based on its working principle and mechanical structure, and load this model into the ANSYS Workbench FEA system to predict the spring's fatigue life. The FEA results show that the fatigue life of the rectangular clutch spring is 2.0403×10^5 cycles under braking loads. Fatigue tests of 20 automatic slack adjusters were carried out on a fatigue test bench to verify this prediction. The experimental results show a mean fatigue life of 1.9101×10^5 cycles, in good agreement with the finite element analysis.
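
    As a hand-check on the order of magnitude of such predictions, a stress-life (Basquin) estimate can be computed directly; this is a generic textbook relation with placeholder material constants, not the ANSYS Workbench fatigue workflow used in the paper.

```python
# Basquin stress-life estimate: sigma_a = sigma_f * (2N)^b  =>  N = 0.5*(sigma_a/sigma_f)^(1/b)
sigma_f = 900.0   # fatigue strength coefficient, MPa (placeholder)
b = -0.09         # fatigue strength exponent (placeholder)
sigma_a = 250.0   # alternating stress at the FEA hot spot, MPa (placeholder)

N = 0.5 * (sigma_a / sigma_f) ** (1.0 / b)
print(f"predicted life: {N:.3e} cycles")
```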

  13. Determination of Extrapolation Distance with Measured Pressure Signatures from Two Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.; Kuhn, Neil

    2004-01-01

    A study to determine a limiting distance-to-span ratio for the extrapolation of near-field pressure signatures is described and discussed. The study was to be done in two wind-tunnel facilities with two wind-tunnel models. At this time, only the first half has been completed, so the scope of this report is limited to the design of the models and to an analysis of the first set of measured pressure signatures. The results from this analysis showed that the pressure signatures measured at separation distances of 2 to 5 span lengths did not show the desired low-boom shapes. However, there were indications that the signature shapes were becoming 'flat-topped', a gradual trend at the distance ratios employed in this first series of wind-tunnel tests.

  14. A formal framework of scenario creation and analysis of extreme hydrological events

    NASA Astrophysics Data System (ADS)

    Lohmann, D.

    2007-12-01

    We present a formal framework for hydrological risk analysis. Different measures of risk are introduced, such as average annual loss and occurrence exceedance probability; these are important measures for, e.g., insurance companies in determining the cost of insurance. One key aspect of investigating the potential consequences of extreme hydrological events (floods and droughts) is the creation of meteorological scenarios that reflect realistic spatial and temporal patterns of precipitation while also having correct local statistics. 100,000 years of such meteorological scenarios are used in a calibrated rainfall-runoff-flood-loss-risk model to produce flood and drought events that have never been observed. The results of this hazard model are statistically analyzed and linked to socio-economic data and vulnerability functions to show the impact of severe flood events. We show results from the Risk Management Solutions (RMS) Europe Flood Model to illustrate this formal framework.
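
    The two risk measures named above can be computed directly from a simulated event-loss table; the sketch below uses synthetic losses, and the 100,000-year horizon matches the scenario count quoted in the abstract.

```python
# Average annual loss (AAL) and an empirical occurrence exceedance curve
# from synthetic annual maximum event losses.
import numpy as np

rng = np.random.default_rng(1)
n_years = 100_000
# synthetic annual maximum losses: heavy-tailed, most years have no event
losses = rng.pareto(2.0, size=n_years) * 1e6 * (rng.random(n_years) < 0.05)

aal = losses.mean()                                  # average annual loss
sorted_losses = np.sort(losses)[::-1]
exceed_prob = np.arange(1, n_years + 1) / n_years    # empirical exceedance probability
loss_100yr = sorted_losses[np.searchsorted(exceed_prob, 0.01)]  # 1-in-100-year loss
print(f"AAL = {aal:,.0f}, 100-year loss = {loss_100yr:,.0f}")
```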

  15. Porosity Measurement in Laminated Composites by Thermography and FEA

    NASA Technical Reports Server (NTRS)

    Chu, Tsuchin Philip; Russell, Samuel S.; Walker, James L.; Munafo, Paul M. (Technical Monitor)

    2001-01-01

    This paper presents the correlation between through-thickness thermal diffusivity and porosity in composites. Finite element analysis (FEA) was used to determine the transient thermal response of composites subjected to laser heating. A series of finite element models were built, and the thermal responses of isotropic and orthotropic materials with various thermal diffusivities under different heating conditions were investigated. Experiments were conducted to verify the models and to estimate unknown parameters such as the amount of heat flux. The analysis and experimental results show good correlation between thermal diffusivity and porosity in the composite materials. They also show that both laser and flash heating can be used effectively to obtain thermal diffusivity. The current infrared thermography system is developed for use with flash heating. The laser-heating models and the FEA results provide useful tools for developing a practical thermal diffusivity measurement scheme using laser heat.
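
    A standard way to reduce flash-heating data to a diffusivity is Parker's half-rise-time formula; whether the paper uses exactly this reduction is not stated, and the numbers below are placeholders.

```python
# Parker flash method: alpha = 0.1388 * L^2 / t_half (standard textbook formula).
L = 2.0e-3      # specimen thickness, m (placeholder)
t_half = 0.35   # time to half of the peak rear-face temperature rise, s (placeholder)

alpha = 0.1388 * L**2 / t_half
print(f"thermal diffusivity = {alpha:.3e} m^2/s")
```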

  16. Stability analysis of free piston Stirling engines

    NASA Astrophysics Data System (ADS)

    Bégot, Sylvie; Layes, Guillaume; Lanzetta, François; Nika, Philippe

    2013-03-01

    This paper presents a stability analysis of a free piston Stirling engine. The model and the detailed calculation of pressure losses are presented. Stability of the machine is studied by examining the eigenvalues of the model matrix. Model validation, based on comparison with NASA experimental results, is described, and the influence of operational and construction parameters on performance and stability is discussed. The results show that most parameters that are beneficial for machine power tend to induce irregular mechanical characteristics with load, suggesting that self-sustained oscillations could be difficult to maintain and control.
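
    The stability test described, i.e. inspecting the eigenvalues of the model matrix, looks like this in miniature; the matrix below is a toy spring-mass-damper system, not the engine model.

```python
# Linear stability check: all eigenvalues with negative real parts => stable.
import numpy as np

A = np.array([[0.0, 1.0],
              [-40.0, -0.8]])   # toy spring-mass-damper state matrix
eig = np.linalg.eigvals(A)
print(eig, "stable" if np.all(eig.real < 0) else "unstable")
```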

  17. Comprehensive Analysis Modeling of Small-Scale UAS Rotors

    NASA Technical Reports Server (NTRS)

    Russell, Carl R.; Sekula, Martin K.

    2017-01-01

    Multicopter unmanned aircraft systems (UAS), or drones, have continued their explosive growth in recent years. With this growth comes demand for increased performance as the limits of existing technologies are reached. In order to better design multicopter UAS aircraft, better performance prediction tools are needed. This paper presents the results of a study aimed at using the rotorcraft comprehensive analysis code CAMRAD II to model a multicopter UAS rotor in hover. Parametric studies were performed to determine the level of fidelity needed in the analysis code inputs to achieve results that match test data. Overall, the results show that CAMRAD II is well suited to model small-scale UAS rotors in hover. This paper presents the results of the parametric studies as well as recommendations for the application of comprehensive analysis codes to multicopter UAS rotors.

  18. An unconventional antimicrobial protein histone from freshwater prawn Macrobrachium rosenbergii: analysis of immune properties.

    PubMed

    Arockiaraj, Jesu; Gnanam, Annie J; Kumaresan, Venkatesh; Palanisamy, Rajesh; Bhatt, Prasanth; Thirumalai, Muthukumaresan Kuppusamy; Roy, Arpita; Pasupuleti, Mukesh; Kasi, Marimuthu

    2013-11-01

    In this study, we report the first histone characterized at the molecular level from the freshwater prawn Macrobrachium rosenbergii (MrHis). A full-length cDNA of MrHis (751 base pairs) was identified from an established M. rosenbergii cDNA library using the GS-FLX technique. It encodes 137 amino acid residues with a calculated molecular mass of 15 kDa and an isoelectric point of 10.5. The MrHis peptide contains a histone H2A signature between residues 21 and 27. Homology analysis showed that MrHis shares significant sequence identity (99%) with other known histone H2A groups, especially that of Penaeus monodon. Phylogenetic analysis showed a strong relationship with histone H2A sequences from other arthropods and placed MrHis in the histone H2A superfamily, H2A1A sub-family. Secondary structure prediction showed that the protein contains 50.36% α-helical regions and 49.64% coils. The 3D model of MrHis was predicted with the I-Tasser program and evaluated by C-score, Ramachandran plot and RMSD analyses. Surface-view analysis located the active domain at the N-terminus. The antimicrobial potential of the MrHis protein is supported by its helical structure and its overall hydrophobic surface together with its net charge. The minimum free energy (MFE) of the predicted RNA structure of MrHis is -128.62 kcal/mol, indicating mRNA stability. Schiffer-Edmundson helical wheel analysis of the MrHis N-terminus showed a perfectly amphipathic peptide. Gene expression was significantly (P < 0.05) highest in hemocytes and was induced by viral (WSBV and MrNV) and bacterial (Aeromonas hydrophila and Vibrio harveyi) infections. The coding sequence of MrHis was expressed from a pMAL vector and the recombinant protein was purified to study its antimicrobial properties; it showed activity against both Gram-negative and Gram-positive bacteria. The full-length recombinant MrHis protein displayed antimicrobial activity, suggesting that the activity may be due to a direct defense role of the histone or to its N-terminal antimicrobial region; this remains to be verified by detailed investigations.
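
    Sequence-level statistics of the kind quoted (length, molecular mass, isoelectric point) can be reproduced with Biopython's ProtParam module; the sequence below is a placeholder fragment, not MrHis.

```python
# Basic protein statistics with Biopython (sequence is a stand-in fragment).
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MSGRGKQGGKARAKAKSRSSRAGLQFPVGRVHRLLRKGNY"  # placeholder H2A-like fragment
pa = ProteinAnalysis(seq)
print(len(seq), "aa")
print(f"molecular weight: {pa.molecular_weight() / 1000:.1f} kDa")
print(f"isoelectric point: {pa.isoelectric_point():.1f}")
```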

  19. Structural models used in real-time biosurveillance outbreak detection and outbreak curve isolation from noisy background morbidity levels

    PubMed Central

    Cheng, Karen Elizabeth; Crary, David J; Ray, Jaideep; Safta, Cosmin

    2013-01-01

    Objective: We discuss the use of structural models for the analysis of biosurveillance-related data. Methods and results: Using a combination of real and simulated data, we constructed a data set that represents a plausible time series resulting from surveillance of a large-scale bioterrorist anthrax attack in Miami. We discuss the performance of anomaly detection with structural models for these data using receiver operating characteristic (ROC) and activity monitoring operating characteristic (AMOC) analysis. In addition, we show that these techniques provide a method for predicting the level of the outbreak that is valid for approximately 2 weeks post-alarm. Conclusions: Structural models provide an effective tool for the analysis of biosurveillance data, in particular for time series with a noisy, non-stationary background and missing data. PMID:23037798
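
    ROC analysis of an alarm statistic against known outbreak labels can be sketched as follows; the scores and labels are synthetic stand-ins for the structural-model output.

```python
# ROC evaluation of a detector's alarm statistic on synthetic labeled days.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
labels = np.r_[np.zeros(300), np.ones(30)]              # 30 outbreak days
scores = np.r_[rng.normal(0, 1, 300), rng.normal(2, 1, 30)]
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {roc_auc_score(labels, scores):.2f}")
```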

  20. Comparisons of non-Gaussian statistical models in DNA methylation analysis.

    PubMed

    Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun

    2014-06-16

    As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by data with bounded support and is therefore non-Gaussian distributed. In order to capture such properties, we introduce non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Non-Gaussian statistical model-based unsupervised clustering strategies are then applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method and are meaningful tools for DNA methylation analysis. Moreover, among the non-Gaussian methods, the one that captures the bounded nature of DNA methylation data gives the best clustering performance.
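
    One way to capture the bounded support is a beta mixture fitted by an EM-style loop, sketched below with a moment-based M-step on synthetic methylation beta-values; this illustrates the idea, not the authors' algorithm.

```python
# Two-component beta mixture via EM with a method-of-moments M-step.
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(3)
x = np.r_[rng.beta(2, 8, 500), rng.beta(8, 2, 300)]  # synthetic methylation values

def mom(values, w):
    """Weighted method-of-moments fit of beta parameters (a, b)."""
    m = np.average(values, weights=w)
    v = np.average((values - m) ** 2, weights=w)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

params = [(2.0, 5.0), (5.0, 2.0)]
pi = np.array([0.5, 0.5])
for _ in range(50):                                   # EM iterations
    dens = np.stack([p * beta.pdf(x, a, b) for p, (a, b) in zip(pi, params)])
    resp = dens / dens.sum(axis=0)                    # E-step responsibilities
    pi = resp.mean(axis=1)
    params = [mom(x, resp[k]) for k in range(2)]      # M-step
print(pi, params)
```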
