Sample records for proposed model compared

  1. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input multiple-output (MIMO) radar system and compare it with an approximated model-based method. The aim is to provide an efficient way to use the exact model of the received signals of near-field sources, thereby eliminating the systematic error introduced by the approximated model used in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the approximated model-based method used for comparison. Simulation results show the performance of the proposed method.
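
    As a hedged illustration of the decomposition step the abstract relies on, the sketch below runs a PARAFAC (CP) decomposition on a synthetic third-order tensor. It assumes the `tensorly` package is available; the rank-2 synthetic data is purely illustrative and is not the authors' bistatic MIMO signal model, in which the recovered factor matrices would carry the angle and range information of each source.

    ```python
    # Minimal PARAFAC (CP) decomposition sketch on a synthetic rank-2 tensor.
    # Assumes tensorly is installed; data is illustrative, not a MIMO signal model.
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(0)
    A = rng.standard_normal((10, 2))   # factor matrices of a rank-2 tensor
    B = rng.standard_normal((8, 2))
    C = rng.standard_normal((6, 2))
    X = tl.cp_to_tensor((np.ones(2), [A, B, C]))

    cp = parafac(tl.tensor(X), rank=2)          # recover the factors
    X_hat = tl.cp_to_tensor(cp)
    print("relative reconstruction error:",
          float(tl.norm(X_hat - X) / tl.norm(X)))
    ```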

  2. A proposed model for economic evaluations of major depressive disorder.

    PubMed

    Haji Ali Afzali, Hossein; Karnon, Jonathan; Gray, Jodi

    2012-08-01

    In countries like the UK and Australia, the comparability of model-based analyses is an essential aspect of reimbursement decisions for new pharmaceuticals, medical services and technologies. Within disease areas, the use of models with alternative structures, types of modelling techniques and/or data sources for common parameters reduces the comparability of evaluations of alternative technologies for the same condition. The aim of this paper is to propose a decision analytic model to evaluate the long-term costs and benefits of alternative management options in patients with depression. The structure of the proposed model is based on the natural history of depression and includes clinical events that are important from both clinical and economic perspectives. Considering its greater flexibility with respect to handling time, discrete event simulation (DES) is an appropriate simulation platform for modelling studies of depression. We argue that the proposed model can be used as a reference model in model-based studies of depression, improving the quality and comparability of such studies.

  3. Proposed biokinetic model for phosphorus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leggett, Richard Wayne

    2014-06-04

    This paper reviews data related to the biokinetics of phosphorus in the human body and proposes a biokinetic model for systemic phosphorus for use in updated International Commission on Radiological Protection (ICRP) guidance on occupational intake of radionuclides. Compared with the ICRP's current occupational model for phosphorus (Publication 68, 1994), the proposed model provides a more realistic description of the paths of movement of phosphorus in the body and improved consistency with experimental, medical, and environmental data on the time-dependent distribution and retention of phosphorus following uptake to blood. For acute uptake of 32P to blood, the proposed model yields roughly a 50% decrease in dose estimates for bone surface and red marrow and a 6-fold increase in estimates for liver and kidney compared with the biokinetic model of Publication 68 (applying Publication 68 dosimetric models in both sets of calculations). For acute uptake of 33P to blood, the proposed model yields roughly a 50% increase in dose estimates for bone surface and red marrow and a 7-fold increase in estimates for liver and kidney compared with the model of Publication 68.

  4. An accurate fatigue damage model for welded joints subjected to variable amplitude loading

    NASA Astrophysics Data System (ADS)

    Aeran, A.; Siriwardane, S. C.; Mikkelsen, O.; Langen, I.

    2017-12-01

    Researchers have proposed several fatigue damage models to overcome the shortcomings of the commonly used Miner's rule, but requirements for material parameters or S-N curve modifications restrict their practical application, and most of these models have not been applied under variable amplitude loading conditions. To overcome these restrictions, a new fatigue damage model is proposed in this paper. The proposed model can be applied by practicing engineers using only the S-N curve given in the standard codes of practice. The model is verified against experimentally derived damage evolution curves for C45 and 16Mn steels and gives better agreement than previous models. Fatigue lives predicted by the model also correlate better with experimental results than those of previous models, as shown in earlier published work by the authors. In this paper, the proposed model is applied to welded joints subjected to variable amplitude loadings. The model gives around 8% shorter fatigue lives than the Miner's rule calculation given in Eurocode, which shows the importance of applying accurate fatigue damage models to welded joints.
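
    For orientation, the sketch below shows the baseline linear damage (Miner's) rule that the proposed model is benchmarked against: damage accumulates as the sum of applied cycles over cycles-to-failure from a Basquin-type S-N curve. The S-N constants and load blocks here are hypothetical, not values from the paper or Eurocode.

    ```python
    # Miner's rule damage sum with a hypothetical Basquin-type S-N curve.
    def cycles_to_failure(stress_range, C=2.0e12, m=3.0):
        """Basquin-type S-N curve: N = C * S**(-m)."""
        return C * stress_range ** (-m)

    # Variable-amplitude loading summarized as (stress range [MPa], cycles).
    blocks = [(120.0, 2.0e5), (80.0, 1.0e6), (60.0, 5.0e6)]

    damage = sum(n / cycles_to_failure(S) for S, n in blocks)
    print(f"Miner damage sum: {damage:.3f}")  # failure predicted at >= 1
    ```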

  5. Contact analysis and experimental investigation of a linear ultrasonic motor.

    PubMed

    Lv, Qibao; Yao, Zhiyuan; Li, Xiang

    2017-11-01

    The effects of surface roughness are not considered in the traditional motor model, which therefore fails to reflect the actual contact mechanism between the stator and slider. An analytical model for calculating the tangential force of a linear ultrasonic motor is proposed in this article. The presented model differs from the previous spring contact model in that the asperities in contact between the stator and slider are considered. The influences of preload and exciting voltage on the tangential force in the moving direction are analyzed. An experiment is performed to verify the feasibility of the proposed model by comparing the simulation results with the measured data. Moreover, the proposed model and the spring model are compared, and the results reveal that the proposed model is more accurate. The discussion is helpful for the design and modeling of linear ultrasonic motors.

  6. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844

  7. Linear and non-linear dynamic models of a geared rotor-bearing system

    NASA Technical Reports Server (NTRS)

    Kahraman, Ahmet; Singh, Rajendra

    1990-01-01

    A three-degree-of-freedom non-linear model of a geared rotor-bearing system with gear backlash and radial clearances in rolling element bearings is proposed here. This reduced-order model can be used to describe the transverse-torsional motion of the system. It is justified by comparing the eigensolutions yielded by the corresponding linear model with finite element method results. The nature of the nonlinearities in the bearings is examined, and two approximate nonlinear stiffness functions are proposed. These approximate bearing models are verified by comparing their frequency responses with the results given by the exact form of the nonlinearity. The proposed nonlinear dynamic model of the geared rotor-bearing system can be used to investigate the dynamic behavior and chaos.

  8. Perceptual video quality assessment in H.264 video coding standard using objective modeling.

    PubMed

    Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu

    2014-01-01

    Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra and inter prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective mapping of these artifacts to a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur and jerkiness), in contrast to the bitrate-only calculation defined in the ITU-T G.1070 model. The accuracy of the proposed perceptual video quality metric is compared against popular full-reference objective methods as defined by VQEG.

  9. Coupled Inverted Pendula Model of Competition and Cooperation

    NASA Astrophysics Data System (ADS)

    Yoshida, Katsutoshi; Ohta, Hiroki

    A coupled inverted pendula model of competition and cooperation is proposed to develop a purely mechanical implementation comparable to the Lotka-Volterra competition model. It is shown numerically that the proposed model can produce four stable equilibria analogous to ecological coexistence, two states of dominance, and scramble. The authors also propose two types of open-loop strategies to switch between the equilibria. The proposed strategies can be associated with an attack and a counterattack of agents through a metaphor of martial arts.
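
    For reference, the sketch below integrates the Lotka-Volterra competition model that the coupled-pendula system is designed to mimic; the four regimes named in the abstract correspond to the stability patterns of its equilibria. Parameter values are illustrative only.

    ```python
    # Lotka-Volterra competition model (illustrative parameters).
    from scipy.integrate import solve_ivp

    r1, r2 = 1.0, 0.8       # intrinsic growth rates
    a12, a21 = 0.6, 0.7     # competition coefficients (coexistence regime)

    def lv(t, y):
        x1, x2 = y
        return [r1 * x1 * (1 - x1 - a12 * x2),
                r2 * x2 * (1 - x2 - a21 * x1)]

    sol = solve_ivp(lv, (0.0, 60.0), [0.1, 0.1])
    x1, x2 = sol.y[:, -1]
    print(f"steady state: x1={x1:.3f}, x2={x2:.3f}")
    # a12, a21 < 1: coexistence; a12 < 1 < a21: species 1 dominates;
    # a21 < 1 < a12: species 2 dominates; both > 1: bistable "scramble".
    ```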

  10. Lattice Boltzmann simulations of multiple-droplet interaction dynamics.

    PubMed

    Zhou, Wenchao; Loney, Drew; Fedorov, Andrei G; Degertekin, F Levent; Rosen, David W

    2014-03-01

    A lattice Boltzmann (LB) formulation, which is consistent with the phase-field model for two-phase incompressible fluid, is proposed to model the interface dynamics of droplet impingement. The interparticle force is derived by comparing the macroscopic transport equations recovered from LB equations with the governing equations of the continuous phase-field model. The inconsistency between the existing LB implementations and the phase-field model in calculating the relaxation time at the phase interface is identified and an approximation is proposed to ensure the consistency with the phase-field model. It is also shown that the commonly used equilibrium velocity boundary for the binary fluid LB scheme does not conserve momentum at the wall boundary and a modified scheme is developed to ensure the momentum conservation at the boundary. In addition, a geometric formulation of the wetting boundary condition is proposed to replace the popular surface energy formulation and results show that the geometric approach enforces the prescribed contact angle better than the surface energy formulation in both static and dynamic wetting. The proposed LB formulation is applied to simulating droplet impingement dynamics in three dimensions and results are compared to those obtained with the continuous phase-field model, the LB simulations reported in the literature, and experimental data from the literature. The results show that the proposed LB simulation approach yields not only a significant speed improvement over the phase-field model in simulating droplet impingement dynamics on a submillimeter length scale, but also better accuracy than both the phase-field model and the previously reported LB techniques when compared to experimental data. Upon validation, the proposed LB modeling methodology is applied to the study of multiple-droplet impingement and interactions in three dimensions, which demonstrates its powerful capability of simulating extremely complex interface phenomena.

  11. Low-complexity object detection with deep convolutional neural network for embedded systems

    NASA Astrophysics Data System (ADS)

    Tripathi, Subarna; Kang, Byeongkeun; Dane, Gokce; Nguyen, Truong

    2017-09-01

    We investigate low-complexity convolutional neural networks (CNNs) for object detection for embedded vision applications. It is well known that building an embedded system for CNN-based object detection is more challenging than for problems like image classification because of the computation and memory requirements. To meet these requirements, we design and develop an end-to-end TensorFlow (TF)-based fully convolutional deep neural network for the generic object detection task, inspired by one of the fastest frameworks, YOLO. The proposed network predicts the localization of every object by regressing the coordinates of the corresponding bounding box, as in YOLO; hence, the network is able to detect objects without any limitation on their size. However, unlike YOLO, all the layers in the proposed network are fully convolutional, so it can take input images of any size. We pick face detection as a use case and evaluate the proposed model on the FDDB and Widerface datasets. As another use case of generic object detection, we evaluate its performance on the PASCAL VOC dataset. The experimental results demonstrate that the proposed network can predict object instances of different sizes and poses in a single frame. Moreover, the results show that the proposed method achieves accuracy comparable with state-of-the-art CNN-based object detection methods while reducing the model size by 3× and the memory bandwidth by 3-4× compared with one of the best real-time CNN-based object detectors, YOLO. Our 8-bit fixed-point TF model provides an additional 4× memory reduction while keeping the accuracy nearly as good as that of the floating-point model, and achieves 20× faster inference than the floating-point model. Thus, the proposed method is promising for embedded implementations.

  12. The Emergent European Model in Skill Formation: Comparing Higher Education and Vocational Training in the Bologna and Copenhagen Processes

    ERIC Educational Resources Information Center

    Powell, Justin J. W.; Bernhard, Nadine; Graf, Lukas

    2012-01-01

    Proposing an alternative to the American model, intergovernmental reform initiatives in Europe have developed and promote a comprehensive European model of skill formation. What ideals, standards, and governance are proposed in this new pan-European model? This model responds to heightened global competition among "knowledge societies"…

  13. Differential Topic Models.

    PubMed

    Chen, Changyou; Buntine, Wray; Ding, Nan; Xie, Lexing; Du, Lan

    2015-02-01

    In applications we may want to compare different document collections: they could have shared content but also aspects unique to particular collections. This task has been called comparative text mining or cross-collection modeling. We present a differential topic model for this application that models both topic differences and similarities, using hierarchical Bayesian nonparametric models. Moreover, we found it important to properly model power-law phenomena in topic-word distributions, so we use the full Pitman-Yor process rather than just a Dirichlet process. Furthermore, we propose the transformed Pitman-Yor process (TPYP) to incorporate prior knowledge, such as vocabulary variations between collections, into the model. To deal with the non-conjugacy between the model prior and the likelihood in the TPYP, we propose an efficient sampling algorithm using a data augmentation technique based on the multinomial theorem. Experimental results show the model discovers interesting aspects of different collections. We also show the proposed MCMC-based algorithm achieves dramatically reduced test perplexity compared to some existing topic models. Finally, we show our model outperforms the state-of-the-art for document classification/ideology prediction on a number of text collections.

  14. Nonlinear system identification based on Takagi-Sugeno fuzzy modeling and unscented Kalman filter.

    PubMed

    Vafamand, Navid; Arefi, Mohammad Mehdi; Khayatian, Alireza

    2018-03-01

    This paper proposes two novel Kalman-based learning algorithms for online Takagi-Sugeno (TS) fuzzy model identification. The proposed approaches are designed based on the unscented Kalman filter (UKF) and the concept of dual estimation. Contrary to the extended Kalman filter (EKF), which utilizes derivatives of nonlinear functions, the UKF employs the unscented transformation. Consequently, non-differentiable membership functions can be considered in the structure of the TS models, making the proposed algorithms applicable to the online parameter calculation of wider classes of TS models than recently published work on the same issue. Furthermore, because of the great capability of the UKF in handling severe nonlinear dynamics, the proposed approaches can effectively approximate nonlinear systems. Finally, numerical and practical examples are provided to show the advantages of the proposed approaches. Simulation results reveal the effectiveness of the proposed methods and a performance improvement, in terms of the root mean square (RMS) of the estimation error, over existing results.
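
    The derivative-free ingredient named in the abstract is the unscented transformation, which propagates a Gaussian through a nonlinear map via sigma points rather than Jacobians (which is why non-differentiable membership functions are admissible). The sketch below is a generic implementation with standard scaled sigma points, not the paper's dual-estimation scheme.

    ```python
    # Generic unscented transformation: mean/covariance of f(x), x ~ N(m, P).
    import numpy as np

    def unscented_transform(m, P, f, alpha=1e-3, beta=2.0, kappa=0.0):
        n = m.size
        lam = alpha**2 * (n + kappa) - n
        L = np.linalg.cholesky((n + lam) * P)
        sigma = np.vstack([m, m + L.T, m - L.T])   # 2n+1 sigma points
        Wm = np.full(2 * n + 1, 0.5 / (n + lam))
        Wc = Wm.copy()
        Wm[0] = lam / (n + lam)
        Wc[0] = Wm[0] + (1.0 - alpha**2 + beta)
        Y = np.array([f(s) for s in sigma])        # propagate the points
        mean = Wm @ Y
        cov = (Y - mean).T @ np.diag(Wc) @ (Y - mean)
        return mean, cov

    f = lambda x: np.array([np.sin(x[0]) * x[1], abs(x[1])])  # non-smooth is fine
    print(unscented_transform(np.array([1.0, 0.5]), np.diag([0.1, 0.2]), f))
    ```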

  15. Assessment of models proposed for the 1981 revision of the IGRF.

    USGS Publications Warehouse

    Peddie, N.W.; Fabiano, E.B.

    1982-01-01

    For the second revision of the International Geomagnetic Reference Field (IGRF), the US National Aeronautics and Space Administration (NASA), the UK Institute of Geological Sciences (IGS), and the US Geological Survey (USGS) submitted proposed models of the Earth's main magnetic field at 1965.0, 1970.0, 1975.0, and 1980.0, and its secular variation during 1980-1985. We assessed the proposed models by comparing them with annual mean values from worldwide magnetic observatories, data for 1978-1980 from 63 US magnetic repeat stations, and rates-of-change values for worldwide magnetic observatories for 1965-1985 derived from straight lines fitted to annual means over 5-yr intervals. We also compared the 1980 models with one another. -from Authors

  16. Bouc-Wen hysteresis model identification using Modified Firefly Algorithm

    NASA Astrophysics Data System (ADS)

    Zaman, Mohammad Asif; Sikder, Urmita

    2015-12-01

    The parameters of the Bouc-Wen hysteresis model are identified using a Modified Firefly Algorithm. The proposed algorithm uses dynamic process control parameters to improve its performance. The algorithm is used to find the model parameter values that result in the least error between a set of given data points and points obtained from the Bouc-Wen model. The performance of the algorithm is compared with that of the conventional Firefly Algorithm, a Genetic Algorithm and a Differential Evolution algorithm in terms of convergence rate and accuracy. Compared to the other three optimization algorithms, the proposed algorithm is found to have a good convergence rate and a high degree of accuracy in identifying Bouc-Wen model parameters. Finally, the proposed method is used to find the Bouc-Wen model parameters from experimental data, and the obtained model is found to be in good agreement with the measured data.
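
    The sketch below illustrates the identification loop under stated assumptions: simulate the standard Bouc-Wen hysteresis equation, score candidate parameter vectors by RMS error against reference data, and search with a compact basic firefly loop. The model form, bounds, and firefly constants are illustrative, and the paper's modification (dynamic process control parameters) is omitted.

    ```python
    # Bouc-Wen simulation + basic firefly search (illustrative constants).
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 1000)
    x = np.sin(2 * np.pi * 0.5 * t)          # imposed displacement
    v = np.gradient(x, t)                    # velocity

    def bouc_wen_force(p, x, v, t):
        """Restoring force with hysteretic state z (explicit Euler)."""
        alpha, k, A, beta, gamma, n = p
        z = np.zeros_like(x)
        for i in range(1, len(t)):
            dt = t[i] - t[i - 1]
            dz = (A * v[i - 1]
                  - beta * abs(v[i - 1]) * abs(z[i - 1]) ** (n - 1) * z[i - 1]
                  - gamma * v[i - 1] * abs(z[i - 1]) ** n)
            z[i] = z[i - 1] + dt * dz
        return alpha * k * x + (1 - alpha) * k * z

    true_p = np.array([0.5, 1.0, 1.0, 0.5, 0.5, 1.5])
    f_ref = bouc_wen_force(true_p, x, v, t)  # stand-in for measured data

    def cost(p):
        return np.sqrt(np.mean((bouc_wen_force(p, x, v, t) - f_ref) ** 2))

    lo = np.array([0.1, 0.5, 0.5, 0.1, 0.1, 1.0])
    hi = np.array([0.9, 2.0, 2.0, 1.0, 1.0, 2.0])
    pop = lo + (hi - lo) * rng.random((15, 6))       # fireflies

    for _ in range(50):
        J = np.array([cost(p) for p in pop])         # brightness = -cost
        for i in range(len(pop)):
            for j in range(len(pop)):
                if J[j] < J[i]:                      # move i toward brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    pop[i] += (0.9 * np.exp(-r2) * (pop[j] - pop[i])
                               + 0.05 * (rng.random(6) - 0.5))
                    pop[i] = np.clip(pop[i], lo, hi)
        # (the paper's modification adapts these control parameters online)

    best = min(pop, key=cost)
    print("best parameters:", np.round(best, 3), "RMS error:", cost(best))
    ```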

  17. Evaluation of models proposed for the 1991 revision of the International Geomagnetic Reference Field

    USGS Publications Warehouse

    Peddie, N.W.

    1992-01-01

    The 1991 revision of the International Geomagnetic Reference Field (IGRF) comprises a definitive main-field model for 1985.0, a main-field model for 1990.0, and a forecast secular-variation model for the period 1990-1995. The five 1985.0 main-field models and five 1990.0 main-field models that were proposed have been evaluated by comparing them with one another, with magnetic observatory data, and with Project MAGNET aerial survey data. The comparisons indicate that the main-field models proposed by IZMIRAN, and the secular-variation model proposed jointly by the British Geological Survey and the US Naval Oceanographic Office, should be assigned relatively lower weight in the derivation of the new IGRF models. -Author

  18. Hybrid surrogate-model-based multi-fidelity efficient global optimization applied to helicopter blade design

    NASA Astrophysics Data System (ADS)

    Ariyarit, Atthaphon; Sugiura, Masahiko; Tanabe, Yasutada; Kanazaki, Masahiro

    2018-06-01

    A multi-fidelity optimization technique based on an efficient global optimization process with a hybrid surrogate model is investigated for solving real-world design problems. The model constructs the local deviation using the kriging method and the global model using a radial basis function. The expected improvement is computed to decide which additional samples can improve the model. The approach was first investigated by solving mathematical test problems. The results were compared with optimization results from an ordinary kriging method and a co-kriging method, and the proposed method produced the best solution. The proposed method was also applied to the aerodynamic design optimization of helicopter blades to obtain the maximum blade efficiency. The optimal shape obtained by the proposed method achieved performance almost equivalent to that obtained by single-fidelity optimization based on high-fidelity evaluations. Comparing all three methods, the proposed method required the lowest total number of high-fidelity evaluation runs to obtain a converged solution.
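
    The sampling criterion named in the abstract is expected improvement (EI). The sketch below shows the standard EI formula for minimization; `mu` and `sigma` stand for the surrogate's predictive mean and standard deviation (in the paper's setting these would come from the hybrid kriging-plus-RBF model), and the numbers are illustrative.

    ```python
    # Standard expected-improvement acquisition for minimization.
    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, f_best):
        sigma = np.maximum(sigma, 1e-12)      # guard against zero variance
        z = (f_best - mu) / sigma
        return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

    mu = np.array([0.2, 0.5, 1.1])            # surrogate mean at candidates
    sigma = np.array([0.05, 0.40, 0.01])      # surrogate uncertainty
    print(expected_improvement(mu, sigma, f_best=0.3))
    # The EI-maximizing candidate balances low predicted value against high
    # uncertainty and is the next point evaluated with the high-fidelity model.
    ```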

  19. A robust and fast active contour model for image segmentation with intensity inhomogeneity

    NASA Astrophysics Data System (ADS)

    Ding, Keyan; Weng, Guirong

    2018-04-01

    In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the curve evolution, the proposed model can effectively segment images with intensity inhomogeneity, and the computational cost is low because the fitting functions do not need to be updated at each iteration. Experiments have shown that the proposed model has higher segmentation efficiency than some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.

  20. Modeling Progressive Damage Using Local Displacement Discontinuities Within the FEAMAC Multiscale Modeling Framework

    NASA Technical Reports Server (NTRS)

    Ranatunga, Vipul; Bednarcyk, Brett A.; Arnold, Steven M.

    2010-01-01

    A method for performing progressive damage modeling in composite materials and structures based on continuum-level interfacial displacement discontinuities is presented. The proposed method enables the exponential evolution of the interfacial compliance, resulting in unloading of the tractions at the interface after delamination or failure occurs. In this paper, the proposed continuum displacement discontinuity model is used to simulate failure within both isotropic and orthotropic materials efficiently and to explore the possibility of predicting the crack path therein. Simulation results for Mode-I and Mode-II fracture compare the proposed approach with the cohesive element approach and the Virtual Crack Closure Technique (VCCT) available within the ABAQUS (ABAQUS, Inc.) finite element software. Furthermore, an eccentrically loaded 3-point bend test is simulated with the displacement discontinuity model, and the resulting crack path prediction is compared with a prediction based on the extended finite element method (XFEM) approach.

  1. Computer-assisted quantification of the skull deformity for craniosynostosis from 3D head CT images using morphological descriptor and hierarchical classification

    NASA Astrophysics Data System (ADS)

    Lee, Min Jin; Hong, Helen; Shim, Kyu Won; Kim, Yong Oock

    2017-03-01

    This paper proposes morphological descriptors representing the degree of skull deformity for craniosynostosis in head CT images, together with a hierarchical classifier model distinguishing between normal skulls and different types of craniosynostosis. First, to compare a deformity surface model with a mean normal surface model, mean normal surface models are generated for each age range, and the mean normal surface model is deformed to the deformity surface model via multi-level three-stage registration. Second, four shape features, including local distance and area ratio indices, are extracted for each of five cranial bones. Finally, a hierarchical SVM classifier is proposed to distinguish between normal and deformed skulls. As a result, the proposed method showed improved classification results compared to the traditional cranial index. Our method can be used for the early diagnosis, surgical planning and postsurgical assessment of craniosynostosis, as well as quantitative analysis of skull deformity.

  2. A new enhanced index tracking model in portfolio optimization with sum weighted approach

    NASA Astrophysics Data System (ADS)

    Siew, Lam Weng; Jaaman, Saiful Hafizah; Hoe, Lam Weng

    2017-04-01

    Index tracking is a portfolio management strategy that aims to construct an optimal portfolio achieving a return similar to the benchmark index at minimum tracking error, without purchasing all the stocks that make up the index. Enhanced index tracking aims to generate a higher portfolio return than the benchmark index while minimizing the tracking error. The objective of this paper is to propose a new enhanced index tracking model with a sum weighted approach that improves on an existing index tracking model for tracking the benchmark Technology Index in Malaysia. The optimal portfolio composition and performance of both models are determined and compared in terms of portfolio mean return, tracking error and information ratio. The results of this study show that the optimal portfolio of the proposed model generates a higher mean return than the benchmark index at minimum tracking error, and that the proposed model outperforms the existing model in tracking the benchmark index. The significance of this study is a new enhanced index tracking model with a sum weighted approach that contributes a 67% improvement in portfolio mean return compared to the existing model.
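
    As a rough sketch of the underlying optimization (with synthetic returns, and not the paper's sum weighted formulation), the code below chooses long-only weights summing to one that trade off tracking error against mean excess return over a benchmark.

    ```python
    # Generic (enhanced) index-tracking optimization on synthetic returns.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    T, N = 250, 8                                    # periods, stocks
    R = 0.001 + 0.02 * rng.standard_normal((T, N))   # stock returns
    r_idx = R @ np.full(N, 1.0 / N) + 0.005 * rng.standard_normal(T)

    def objective(w, lam=0.5):
        diff = R @ w - r_idx
        return np.std(diff) - lam * np.mean(diff)    # lam > 0: "enhanced"

    cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
    res = minimize(objective, np.full(N, 1.0 / N), method='SLSQP',
                   bounds=[(0.0, 1.0)] * N, constraints=cons)
    w = res.x
    diff = R @ w - r_idx
    print("weights:", np.round(w, 3))
    print("tracking error:", np.std(diff),
          "information ratio:", np.mean(diff) / np.std(diff))
    ```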

  3. A general probabilistic model for group independent component analysis and its estimation methods

    PubMed Central

    Guo, Ying

    2012-01-01

    Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies and has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model, and we propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model with that of the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves accuracy comparable to the exact EM with significantly less computation time. An fMRI data example is used to illustrate the application of the proposed methods. PMID:21517789

  4. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2010-01-01

    This paper is devoted to the design and analysis of a predictor-based model reference adaptive control. Stable adaptive laws are derived using a Lyapunov framework. The proposed architecture is compared with the now-classical model reference adaptive control. A simulation example is presented in which numerical evidence indicates that the proposed controller yields improved transient characteristics.

  5. Predicting wetland plant community responses to proposed water-level-regulation plans for Lake Ontario: GIS-based modeling

    USGS Publications Warehouse

    Wilcox, D.A.; Xie, Y.

    2007-01-01

    Integrated, GIS-based, wetland predictive models were constructed to assist in predicting the responses of wetland plant communities to proposed new water-level regulation plans for Lake Ontario. The modeling exercise consisted of four major components: 1) building individual site wetland geometric models; 2) constructing generalized wetland geometric models representing specific types of wetlands (rectangle model for drowned river mouth wetlands, half ring model for open embayment wetlands, half ellipse model for protected embayment wetlands, and ellipse model for barrier beach wetlands); 3) assigning wetland plant profiles to the generalized wetland geometric models that identify associations between past flooding / dewatering events and the regulated water-level changes of a proposed water-level-regulation plan; and 4) predicting relevant proportions of wetland plant communities and the time durations during which they would be affected under proposed regulation plans. Based on this conceptual foundation, the predictive models were constructed using bathymetric and topographic wetland models and technical procedures operating on the platform of ArcGIS. An example of the model processes and outputs for the drowned river mouth wetland model using a test regulation plan illustrates the four components and, when compared against other test regulation plans, provided results that met ecological expectations. The model results were also compared to independent data collected by photointerpretation. Although data collections were not directly comparable, the predicted extent of meadow marsh in years in which photographs were taken was significantly correlated with extent of mapped meadow marsh in all but barrier beach wetlands. The predictive model for wetland plant communities provided valuable input into International Joint Commission deliberations on new regulation plans and was also incorporated into faunal predictive models used for that purpose.

  6. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and to capture differences in higher-order moments (e.g. mean and variance) between subjects in the cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent the boundary and non-identifiability problems that arise in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, a test with a simple asymptotic distribution has computational advantages over permutation-based tests for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood-based expectation-maximization test and show that it follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power than several current tests; in particular, it outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between the two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  7. A comparative study of mixed exponential and Weibull distributions in a stochastic model replicating a tropical rainfall process

    NASA Astrophysics Data System (ADS)

    Abas, Norzaida; Daud, Zalina M.; Yusof, Fadhilah

    2014-11-01

    A stochastic rainfall model is presented for the generation of hourly rainfall data in an urban area in Malaysia. In view of the high temporal and spatial variability of rainfall within the tropical rain belt, the Spatial-Temporal Neyman-Scott Rectangular Pulse model was used. The model, which is governed by the Neyman-Scott process, employs a reasonable number of parameters to represent the physical attributes of rainfall. A common approach is to attach each attribute to a mathematical distribution. With respect to rain cell intensity, this study proposes the use of a mixed exponential distribution. The performance of the proposed model was compared to a model that employs the Weibull distribution. Hourly and daily rainfall data from four stations in the Damansara River basin in Malaysia were used as input to the models, and simulations of hourly series were performed for an independent site within the basin. The performance of the models was assessed based on how closely the statistical characteristics of the simulated series resembled the statistics of the observed series. The findings obtained based on graphical representation revealed that the statistical characteristics of the simulated series for both models compared reasonably well with the observed series. However, a further assessment using the AIC, BIC and RMSE showed that the proposed model yields better results. The results of this study indicate that for tropical climates, the proposed model, using a mixed exponential distribution, is the best choice for generation of synthetic data for ungauged sites or for sites with insufficient data within the limit of the fitted region.
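
    The distributional ingredient proposed for rain cell intensity is a mixed (two-component) exponential density, which can be fitted by expectation-maximization as sketched below on synthetic intensities; the full Neyman-Scott space-time model is not reproduced here, and all starting values are illustrative.

    ```python
    # EM fit of a two-component mixed exponential density (synthetic data).
    import numpy as np

    rng = np.random.default_rng(3)
    x = np.concatenate([rng.exponential(1.0, 7000),   # light cells
                        rng.exponential(8.0, 3000)])  # heavy cells

    pi = np.array([0.5, 0.5])          # initial mixing weights
    lam = np.array([1.0, 0.2])         # initial rate parameters
    for _ in range(200):
        # E-step: responsibility of each component for each observation.
        dens = pi * lam * np.exp(-np.outer(x, lam))   # shape (n, 2)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and rates.
        pi = r.mean(axis=0)
        lam = r.sum(axis=0) / (r * x[:, None]).sum(axis=0)

    print("weights:", np.round(pi, 3), "component means:", np.round(1 / lam, 3))
    ```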

  8. Proposed evaluation framework for assessing operator performance with multisensor displays

    NASA Technical Reports Server (NTRS)

    Foyle, David C.

    1992-01-01

    Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The operator's performance with the sensor fusion display can be compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows for the determination as to when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal (compared to the model predictions) level; 3) optimal performance (compared to model predictions); or, 4) super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.

  9. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography, and results compared to those of MIKE21 show its strong performance.

  10. An equivalent viscoelastic model for rock mass with parallel joints

    NASA Astrophysics Data System (ADS)

    Li, Jianchun; Ma, Guowei; Zhao, Jian

    2010-03-01

    An equivalent viscoelastic medium model is proposed for rock mass with parallel joints. A concept of "virtual wave source (VWS)" is proposed to take into account the wave reflections between the joints. The equivalent model can be effectively applied to analyze longitudinal wave propagation through discontinuous media with parallel joints. Parameters in the equivalent viscoelastic model are derived analytically based on longitudinal wave propagation across a single rock joint. The proposed model is then verified by applying identical incident waves to the discontinuous and equivalent viscoelastic media at one end to compare the output waves at the other end. When the wavelength of the incident wave is sufficiently long compared to the joint spacing, the effect of the VWS on wave propagation in rock mass is prominent. The results from the equivalent viscoelastic medium model are very similar to those determined from the displacement discontinuity method. Frequency dependence and joint spacing effect on the equivalent viscoelastic model and the VWS method are discussed.

  11. Computer evaluation of existing and proposed fire lookouts

    Treesearch

    Romain M. Mees

    1976-01-01

    A computer simulation model has been developed for evaluating the fire detection capabilities of existing and proposed lookout stations. The model uses coordinate location of fires and lookouts, tower elevation, and topographic data to judge location of stations, and to determine where a fire can be seen. The model was tested by comparing it with manual detection on a...

  12. Spatiotemporal Interpolation for Environmental Modelling

    PubMed Central

    Susanto, Ferry; de Souza, Paulo; He, Jing

    2016-01-01

    A variation of the reduction-based approach to spatiotemporal interpolation (STI), in which time is treated independently of the spatial dimensions, is proposed in this paper. We reviewed and compared three widely used spatial interpolation techniques: ordinary kriging, inverse distance weighting and the triangular irregular network. We also propose a new distribution-based distance weighting (DDW) spatial interpolation method. In this study, we utilised one year of Tasmania's South Esk hydrology model developed by CSIRO. Root mean squared error statistics were used for performance evaluation. Our results show that the proposed reduction approach is superior to the extension approach to STI; however, the proposed DDW provides little benefit compared to the conventional inverse distance weighting (IDW) method. We suggest that the improved IDW technique, with the reduction approach used for the temporal dimension, is the optimal combination for large-scale spatiotemporal interpolation within environmental modelling applications. PMID:27509497
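
    The conventional IDW baseline mentioned above is compact enough to show in full; under the reduction approach it would be applied one time slice at a time. Coordinates and values below are illustrative.

    ```python
    # Conventional inverse distance weighting (IDW) interpolation.
    import numpy as np

    def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
        """z at query points as distance-weighted means of known values."""
        d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** power
        return (w @ z_known) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    values = np.array([2.0, 4.0, 3.0, 5.0])      # one time slice of a variable
    queries = np.array([[0.5, 0.5], [0.9, 0.1]])
    print(idw(stations, values, queries))        # midpoint -> mean of knowns
    ```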

  13. Frequentist Model Averaging in Structural Equation Modelling.

    PubMed

    Jin, Shaobo; Ankargren, Sebastian

    2018-06-04

    Model selection from a set of candidate models plays an important role in many structural equation modelling applications. However, traditional model selection methods introduce extra randomness that is not accounted for by post-model selection inference. In the current study, we propose a model averaging technique within the frequentist statistical framework. Instead of selecting an optimal model, the contributions of all candidate models are acknowledged. Valid confidence intervals and a [Formula: see text] test statistic are proposed. A simulation study shows that the proposed method is able to produce a robust mean-squared error, a better coverage probability, and a better goodness-of-fit test compared to model selection. It is an interesting compromise between model selection and the full model.

  14. Using the weighted area under the net benefit curve for decision curve analysis.

    PubMed

    Talluri, Rajesh; Shete, Sanjay

    2016-07-18

    Risk prediction models have been proposed for various diseases and are being improved as new predictors are identified. A major challenge is to determine whether the newly discovered predictors improve risk prediction. Decision curve analysis has been proposed as an alternative to the area under the curve and net reclassification index to evaluate the performance of prediction models in clinical scenarios. The decision curve computed using the net benefit can evaluate the predictive performance of risk models at a given or range of threshold probabilities. However, when the decision curves for 2 competing models cross in the range of interest, it is difficult to identify the best model as there is no readily available summary measure for evaluating the predictive performance. The key deterrent for using simple measures such as the area under the net benefit curve is the assumption that the threshold probabilities are uniformly distributed among patients. We propose a novel measure for performing decision curve analysis. The approach estimates the distribution of threshold probabilities without the need of additional data. Using the estimated distribution of threshold probabilities, the weighted area under the net benefit curve serves as the summary measure to compare risk prediction models in a range of interest. We compared 3 different approaches, the standard method, the area under the net benefit curve, and the weighted area under the net benefit curve. Type 1 error and power comparisons demonstrate that the weighted area under the net benefit curve has higher power compared to the other methods. Several simulation studies are presented to demonstrate the improvement in model comparison using the weighted area under the net benefit curve compared to the standard method. The proposed measure improves decision curve analysis by using the weighted area under the curve and thereby improves the power of the decision curve analysis to compare risk prediction models in a clinical scenario.
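
    The sketch below illustrates the quantities involved: the net benefit NB(pt) = TP/N - FP/N * pt/(1 - pt) computed over a range of threshold probabilities, and a weighted area that weights thresholds by an assumed distribution. Here the weights are taken, purely as an assumption, from the empirical distribution of predicted risks; the paper's estimator of the threshold distribution may differ.

    ```python
    # Decision curve: net benefit over thresholds, plus a weighted area
    # under an assumed threshold distribution (illustrative data).
    import numpy as np

    rng = np.random.default_rng(4)
    y = rng.integers(0, 2, 2000)                       # outcomes
    p = np.clip(0.35 + 0.3 * y + 0.2 * rng.standard_normal(2000), 0.01, 0.99)

    def net_benefit(y, p, pt):
        treat = p >= pt
        tp = np.sum(treat & (y == 1)) / y.size
        fp = np.sum(treat & (y == 0)) / y.size
        return tp - fp * pt / (1.0 - pt)

    pts = np.linspace(0.05, 0.60, 56)                  # thresholds of interest
    nb = np.array([net_benefit(y, p, pt) for pt in pts])

    step = pts[1] - pts[0]
    w, _ = np.histogram(p, bins=np.append(pts, pts[-1] + step))
    w = w / w.sum()                                    # assumed threshold weights
    print("unweighted area:", np.sum(nb * step))
    print("weighted area  :", np.sum(w * nb))
    ```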

  15. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    NASA Astrophysics Data System (ADS)

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate-model-assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies in large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorting genetic algorithm-II (NSGAII) that employs a local search operator to accelerate convergence. The surrogate model, based on a Kernel Extreme Learning Machine (KELM), is developed and evaluated as an approximate simulator that generates the patterns of regional groundwater flow and salinity levels in coastal aquifers at greatly reduced computational cost. The KELM model is adaptively retrained during the evolutionary search to maintain the desired surrogate fidelity, so that it inhibits the accumulation of forecasting error and converges correctly to the true Pareto-optimal front. The proposed methodology is then applied to large-scale coastal aquifer management in Baldwin County, Alabama, with the objectives of minimizing the saltwater mass increase and maximizing the total pumping rate. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those obtained from a one-shot surrogate model and from the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions relative to the one-shot surrogate model, but also matches the quality of the Pareto-optimal solutions obtained by NSGAII coupled with the original simulation model, while reducing the computational burden by up to 94%. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.
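
    At its core, a KELM surrogate is a regularized kernel regression whose weights solve (K + I/C) beta = y. The sketch below shows that step on a synthetic input-output map standing in for the groundwater simulator; the kernel width, regularization constant, and data are assumptions.

    ```python
    # Minimal KELM-style surrogate: beta = (K + I/C)^-1 y with an RBF kernel.
    import numpy as np

    def rbf_kernel(A, B, gamma=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(5)
    X = rng.random((40, 2))                   # e.g. scaled pumping rates
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2    # stand-in simulator response

    C = 100.0                                 # regularization constant
    beta = np.linalg.solve(rbf_kernel(X, X) + np.eye(len(X)) / C, y)

    X_new = rng.random((5, 2))
    print(np.round(rbf_kernel(X_new, X) @ beta, 3))   # cheap predictions
    # In the adaptive scheme, new simulator runs found during the search are
    # appended to (X, y) and beta is re-solved to maintain surrogate fidelity.
    ```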

  16. On the Formulation of Anisotropic-Polyaxial Failure Criteria: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Parisio, Francesco; Laloui, Lyesse

    2018-02-01

    The correct representation of the failure of geomaterials that feature strength anisotropy and polyaxiality is crucial for many applications. In this contribution, we propose and evaluate through a comparative study a generalized framework that covers both features. Polyaxiality of strength is modeled with a modified Van Eekelen approach, while the anisotropy is modeled using a fabric tensor approach of the Pietruszczak and Mroz type. Both approaches share the same philosophy as they can be applied to simpler failure surfaces, allowing great flexibility in model formulation. The new failure surface is tested against experimental data and its performance compared against classical failure criteria commonly used in geomechanics. Our study finds that the global error between predictions and data is generally smaller for the proposed framework compared to other classical approaches.

  17. Agent based reasoning for the non-linear stochastic models of long-range memory

    NASA Astrophysics Data System (ADS)

    Kononovicius, A.; Gontis, V.

    2012-02-01

    We extend Kirman's model by introducing a variable event time scale. The proposed flexible time scale is equivalent to the variable trading activity observed in financial markets. A stochastic version of the extended Kirman agent-based model is compared to non-linear stochastic models of long-range memory in financial markets. The agent-based model, providing a matching macroscopic description, serves as a microscopic justification of the earlier proposed stochastic model exhibiting power-law statistics.

  18. Self-, other-, and joint monitoring using forward models.

    PubMed

    Pickering, Martin J; Garrod, Simon

    2014-01-01

    In the psychology of language, most accounts of self-monitoring assume that it is based on comprehension. Here we outline and develop the alternative account proposed by Pickering and Garrod (2013), in which speakers construct forward models of their upcoming utterances and compare them with the utterance as they produce them. We propose that speakers compute inverse models derived from the discrepancy (error) between the utterance and the predicted utterance and use that to modify their production command or (occasionally) begin anew. We then propose that comprehenders monitor other people's speech by simulating their utterances using covert imitation and forward models, and then comparing those forward models with what they hear. They use the discrepancy to compute inverse models and modify their representation of the speaker's production command, or realize that their representation is incorrect and may develop a new production command. We then discuss monitoring in dialogue, paying attention to sequential contributions, concurrent feedback, and the relationship between monitoring and alignment.

  19. A Novel Hybrid Classification Model of Genetic Algorithms, Modified k-Nearest Neighbor and Developed Backpropagation Neural Network

    PubMed Central

    Salari, Nader; Shohaimi, Shamarina; Najafi, Farid; Nallappan, Meenakshii; Karishnarajah, Isthrinayagy

    2014-01-01

    Among numerous artificial intelligence approaches, k-Nearest Neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, the results of implementing a novel hybrid feature selection-classification model using the above-mentioned methods are presented. The purpose is to benefit from the synergies obtained by combining these technologies: such a combination creates an opportunity to invest in the strengths of each algorithm and to make up for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the resulting arrays of top-ranked features were used as the initial population of a genetic algorithm to produce optimal feature arrays. Third, using a modified k-Nearest Neighbor method as well as an improved method of backpropagation neural networks, classification was performed on the optimal feature arrays selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets, and the statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the proposed hybrid model gave significantly better classification performance than all 13 classification methods. Finally, the performance of the proposed model was benchmarked against the best results reported for state-of-the-art classifiers in terms of classification accuracy on the same data sets. The findings of this comprehensive comparative study revealed that the classification accuracy of the proposed model is desirable, promising, and competitive with existing state-of-the-art classification models. PMID:25419659
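
    To make the first and third stages concrete, the sketch below ranks features by Fisher's discriminant ratio and classifies with a plain k-nearest-neighbor model on the top-ranked subset; the genetic algorithm search and the modified kNN / improved backpropagation components of the hybrid are omitted, and the dataset is a stand-in.

    ```python
    # Fisher-ratio feature ranking followed by kNN classification (sketch).
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_breast_cancer(return_X_y=True)

    def fisher_ratio(X, y):
        """(mu0 - mu1)^2 / (var0 + var1), computed per feature."""
        X0, X1 = X[y == 0], X[y == 1]
        return (X0.mean(0) - X1.mean(0)) ** 2 / (X0.var(0) + X1.var(0))

    top = np.argsort(fisher_ratio(X, y))[::-1][:10]   # 10 best features

    Xtr, Xte, ytr, yte = train_test_split(X[:, top], y, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print("accuracy, top-10 Fisher-ranked features:", clf.score(Xte, yte))
    ```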

  1. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    PubMed Central

    Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai

    2015-01-01

    To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
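
    The decision-level fusion rests on Dempster-Shafer combination of basic probability assignments. The sketch below applies Dempster's rule to two made-up BPAs over the frame {fatigue, alert}; the paper's dynamic BPA weighting and conflict-correction strategy are not shown.

    ```python
    # Dempster's rule of combination for two BPAs (illustrative masses).
    from itertools import product

    def combine(m1, m2):
        """Combine two BPAs given as dicts {frozenset: mass}."""
        out, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass on empty intersections
        return {k: v / (1.0 - conflict) for k, v in out.items()}

    F, A = frozenset({'fatigue'}), frozenset({'alert'})
    m_eye = {F: 0.6, A: 0.2, F | A: 0.2}     # e.g. eyelid-closure feature
    m_lane = {F: 0.5, A: 0.3, F | A: 0.2}    # e.g. lane-deviation feature

    fused = combine(m_eye, m_lane)
    print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
    ```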

  2. Modeling and characterization of supercapacitors for wireless sensor network applications

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Yang, Hengzhao

    A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values of a supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining similar performance as other models during charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution profile of voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
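
    A hedged sketch of such a two-branch equivalent circuit is shown below: a fast and a slow resistor-capacitor branch exchange charge (redistribution) while a leakage resistance discharges the terminals. All component values are hypothetical, and the leakage resistance is held constant for simplicity, whereas the paper's model makes it variable.

    ```python
    # Two-branch RC + leakage sketch: redistribution and self-discharge
    # after charging (hypothetical component values).
    from scipy.integrate import solve_ivp

    C1, C2 = 20.0, 2.0        # fast- and slow-branch capacitances [F]
    R2 = 50.0                 # resistance into the slow branch [ohm]
    R_leak = 4000.0           # constant leakage resistance [ohm]

    def dynamics(t, v):
        v1, v2 = v
        i_rd = (v1 - v2) / R2             # redistribution current
        return [(-i_rd - v1 / R_leak) / C1, i_rd / C2]

    # Fast branch charged to 2.5 V, slow branch empty, then left at rest.
    sol = solve_ivp(dynamics, (0.0, 3600.0), [2.5, 0.0], max_step=1.0)
    print("terminal voltage after 1 h of rest: %.3f V" % sol.y[0, -1])
    ```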

  3. Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model

    PubMed Central

    Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon

    2015-01-01

    In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH) feature extraction, action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH, a standard structure for segmenting images and extracting features using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using the extracted features; sequences of actions are created through k-means clustering and constitute the HMM input. Third, an action spotting method is proposed to filter meaningless actions from continuous actions and to identify precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on these start and end points. We evaluate recognition performance by obtaining and comparing the probabilities of input sequences under the action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172

  4. A Penalized Likelihood Framework For High-Dimensional Phylogenetic Comparative Methods And An Application To New-World Monkeys Brain Evolution.

    PubMed

    Julien, Clavel; Leandro, Aristide; Hélène, Morlon

    2018-06-19

    Working with high-dimensional phylogenetic comparative datasets is challenging because likelihood-based multivariate methods suffer from poor statistical performance as the number of traits p approaches the number of species n, and because computational complications arise when p exceeds n. Alternative phylogenetic comparative methods have recently been proposed to deal with the large-p, small-n scenario, but their use and performance remain limited. Here we develop a penalized likelihood framework for high-dimensional comparative datasets. We propose various penalizations and methods for selecting the intensity of the penalties. We apply this general framework to the estimation of parameters (the evolutionary trait covariance matrix and parameters of the evolutionary model) and to model comparison for the high-dimensional multivariate Brownian (BM), Early-burst (EB), Ornstein-Uhlenbeck (OU) and Pagel's lambda models. We show using simulations that our penalized likelihood approach dramatically improves the estimation of evolutionary trait covariance matrices and model parameters when p approaches n, and allows their accurate estimation when p equals or exceeds n. In addition, we show that penalized likelihood models can be efficiently compared using the Generalized Information Criterion (GIC). We implement these methods, as well as the related estimation of ancestral states and the computation of phylogenetic PCA, in the R packages RPANDA and mvMORPH. Finally, we illustrate the utility of the new framework by evaluating evolutionary model fit, analyzing integration patterns, and reconstructing evolutionary trajectories for a high-dimensional 3-D dataset of brain shape in the New World monkeys. We find clear support for an Early-burst model, suggesting an early diversification of brain morphology during the ecological radiation of the clade. Penalized likelihood offers an efficient way to deal with high-dimensional multivariate comparative data.
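
    The penalization idea can be illustrated with a ridge-type (linear shrinkage) covariance estimate whose intensity is tuned by held-out Gaussian log-likelihood. This sketch ignores the phylogenetic correlation structure and the GIC-based selection used in the actual framework; data and the shrinkage target are illustrative.

```python
# Linear-shrinkage covariance estimation for the p ~ n regime.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n, p = 50, 40
X = rng.normal(size=(n, p))                      # traits (already phylo-transformed)

train, test = X[:35], X[35:]
S = np.cov(train, rowvar=False)                  # singular when n_train < p
target = np.diag(np.diag(S))                     # shrink toward a diagonal target

# Pick the penalty intensity maximizing held-out log-likelihood.
best = max(
    np.linspace(0.05, 0.95, 19),
    key=lambda g: multivariate_normal(
        mean=train.mean(0), cov=(1 - g) * S + g * target
    ).logpdf(test).sum(),
)
S_pen = (1 - best) * S + best * target           # positive definite for best > 0
```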

  5. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to an existing GMM-based model, and the proposed AVSS algorithm improves speech separation quality compared to reference ICA- and AVSS-based methods.

  6. Comparison of power curve monitoring methods

    NASA Astrophysics Data System (ADS)

    Cambron, Philippe; Masson, Christian; Tahan, Antoine; Torres, David; Pelletier, Francis

    2017-11-01

    Performance monitoring is an important aspect of operating wind farms. This can be done through power curve monitoring (PCM) of wind turbines (WT). In past years, important work has been conducted on PCM, and various methodologies have been proposed, each with interesting results. However, it is difficult to compare these methods because each was developed using its own data sets. The objective of the present work is to compare some of the proposed PCM methods using common data sets. The metric used to compare the PCM methods is the time needed to detect a change in the power curve. Two power curve models are covered to establish the effect the model type has on the monitoring outcomes, and each model was tested with two control charts. Other methodologies and metrics proposed in the literature for power curve monitoring, such as areas under the power curve and the use of statistical copulas, are also covered. Results demonstrate that model-based PCM methods are more reliable at detecting a performance change than the other methodologies, and that the effectiveness of the control chart depends on the type of shift observed.
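
    A minimal sketch of the model-based PCM workflow: residuals between measured power and a reference power-curve model are tracked with an EWMA control chart, and the detection time is the first out-of-control sample. The power-curve form, noise level, injected degradation, and chart constants are all illustrative assumptions.

```python
# EWMA control chart on power-curve residuals; detection time as the metric.
import numpy as np

rng = np.random.default_rng(2)
wind = rng.uniform(4.0, 14.0, 1000)
def ref_curve(v):                       # reference model fitted on healthy data
    return np.clip(0.4 * v**3, None, 1000.0)

power = ref_curve(wind) + rng.normal(0.0, 15.0, wind.size)
power[300:] *= 0.95                     # injected 5% performance degradation

resid = power - ref_curve(wind)
lam, sigma = 0.1, 15.0
limit = 3.5 * sigma * np.sqrt(lam / (2.0 - lam))
z, alarm = 0.0, None
for k, r in enumerate(resid):
    z = lam * r + (1.0 - lam) * z       # EWMA of residuals
    if abs(z) > limit and alarm is None:
        alarm = k                       # first detection index
print("detected at sample", alarm, "(shift starts at 300)")
```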

  7. Improving the performances of autofocus based on adaptive retina-like sampling model

    NASA Astrophysics Data System (ADS)

    Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce

    2018-03-01

    An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, the full width at half maximum (FWHM) of the focus measure curve, and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including the sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT) and absolute Tenengrad (ATEN), are compared through experiments. The smallest FWHM is obtained with LAP, making it more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time capability.
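
    Two of the compared focus measures can be written compactly; these are common textbook forms of LAP and SML, not necessarily the exact variants used in the paper.

```python
# Focus measures for autofocus: Laplacian energy (LAP) and sum-modified-Laplacian (SML).
import numpy as np
from scipy.ndimage import laplace, convolve

def lap_measure(img):
    return np.sum(laplace(img.astype(float)) ** 2)   # energy of the Laplacian

def sml_measure(img):
    f = img.astype(float)
    kx = np.array([[0, 0, 0], [-1, 2, -1], [0, 0, 0]], float)
    ky = kx.T
    return np.sum(np.abs(convolve(f, kx)) + np.abs(convolve(f, ky)))

# Autofocus scans focus positions and picks the argmax of the chosen measure;
# the FWHM of the measure-vs-position curve indicates its discriminating power.
```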

  8. High pressure common rail injection system modeling and control.

    PubMed

    Wang, H P; Zheng, D; Tian, Y

    2016-07-01

    In this paper, modeling and common-rail pressure control of a high pressure common rail injection system (HPCRIS) are presented. The proposed mathematical model of the HPCRIS, which contains three sub-models (a high pressure pump sub-model, a common rail sub-model and an injector sub-model), is a relatively complicated nonlinear system. The mathematical model is validated in Matlab and in a detailed virtual simulation environment. For the considered HPCRIS, an effective model-free controller, called the extended state observer-based intelligent proportional-integral (ESO-based iPI) controller, is designed. The proposed method is composed mainly of the ESO observer and a time-delay-estimation-based iPI controller. Finally, to demonstrate its performance, the proposed ESO-based iPI controller is compared with a conventional PID controller and an active disturbance rejection controller (ADRC). Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. A Global User-Driven Model for Tile Prefetching in Web Geographical Information Systems.

    PubMed

    Pan, Shaoming; Chong, Yanwen; Zhang, Hang; Tan, Xicheng

    2017-01-01

    A web geographical information system is a typical service-intensive application. Tile prefetching and cache replacement can improve cache hit ratios by proactively fetching tiles from storage and replacing the appropriate tiles in the high-speed cache buffer without waiting for a client's requests, which reduces disk latency and improves system access performance. Most popular prefetching strategies consider only relative tile popularities to predict which tile should be prefetched, or consider only a single user's access behavior to determine which neighboring tiles to prefetch. Some studies show that comprehensively considering all users' access behaviors and all tiles' relationships in the prediction process can achieve more significant improvements. Thus, this work proposes a new global user-driven model for tile prefetching and cache replacement. First, based on all users' access behaviors, an expression method for tile correlation is designed and implemented. Then, a conditional prefetching probability is computed based on the proposed correlation expression model. Tiles to be prefetched are found by computing and comparing the conditional prefetching probabilities over the uncached tile set and, similarly, replacement tiles are found in the cache buffer according to multi-step prefetching. Finally, experiments are provided comparing the proposed model with other global user-driven models, single user-driven models, and client-side prefetching strategies. The results show that the proposed model achieves a prefetching hit rate approximately 10.6% to 110.5% higher than the compared methods.
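
    The conditional prefetching probability can be sketched from a global co-occurrence count over all users' sessions; the session data and window size below are illustrative, not the paper's correlation expression.

```python
# Global user-driven prefetching: P(j | i) from tile co-occurrence counts.
import numpy as np

n_tiles, window = 6, 2
sessions = [[0, 1, 2, 5], [0, 2, 1, 3], [4, 0, 2, 5]]   # tile ids per user session

co = np.zeros((n_tiles, n_tiles))
for s in sessions:
    for k, i in enumerate(s):
        for j in s[k + 1:k + 1 + window]:   # tiles requested soon after tile i
            co[i, j] += 1.0

row = co.sum(axis=1, keepdims=True)
p_cond = np.divide(co, row, out=np.zeros_like(co), where=row > 0)

current, cached = 0, {0, 1}
candidates = [(p_cond[current, j], j) for j in range(n_tiles) if j not in cached]
print("prefetch:", max(candidates))          # uncached tile with highest P(j | current)
```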

  10. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian; Scherzinger, William

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
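
    The core idea, solving the return-mapping equations with a trust-region method rather than plain Newton-Raphson, can be sketched with SciPy's Trust Region Reflective solver on a one-unknown von Mises consistency condition with linear hardening. This is a simple stand-in for the paper's Hosford surface, and all material constants are illustrative.

```python
# Return mapping via a trust-region solver instead of Newton-Raphson.
import numpy as np
from scipy.optimize import least_squares

G, sig_y0, H = 80e3, 250.0, 1.0e3        # shear modulus, yield stress, hardening (MPa)
sig_trial, eps_p = 400.0, 0.0            # trial equivalent stress, plastic strain

def residual(dlam):
    # Consistency: trial stress relaxed by 3*G*dlam must sit on the hardened surface.
    return np.array([sig_trial - 3.0 * G * dlam[0] - (sig_y0 + H * (eps_p + dlam[0]))])

sol = least_squares(residual, x0=[1e-6], bounds=(0.0, np.inf), method="trf")
print("plastic multiplier:", sol.x[0])   # ~ (400 - 250) / (3G + H)
```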

  11. Constitutive Modeling of Piezoelectric Polymer Composites

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Tom (Technical Monitor)

    2003-01-01

    A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, as accurate as or more accurate than those of the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.

  12. Connected word recognition using a cascaded neuro-computational model

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  13. Inverse Gaussian gamma distribution model for turbulence-induced fading in free-space optical communication.

    PubMed

    Cheng, Mingjian; Guo, Ya; Li, Jiangting; Zheng, Xiaotong; Guo, Lixin

    2018-04-20

    We introduce an alternative to the gamma-gamma (GG) distribution, called the inverse Gaussian gamma (IGG) distribution, which can efficiently describe moderate-to-strong irradiance fluctuations. The proposed stochastic model is based on a modulation process between small- and large-scale irradiance fluctuations, which are modeled by gamma and inverse Gaussian distributions, respectively. The model parameters of the IGG distribution are directly related to atmospheric parameters. The accuracy of the fit of the IGG, log-normal (LN), and GG distributions to the experimental probability density functions in moderate-to-strong turbulence is compared, and the results indicate that the newly proposed IGG model provides an excellent fit to the experimental data. When the receiving diameter is comparable with the atmospheric coherence radius, the proposed IGG model reproduces the shape of the experimental data, whereas the GG and LN models fail to match it. The fundamental channel statistics of a free-space optical communication system are also investigated in an IGG-distributed turbulent atmosphere, and a closed-form expression for the outage probability of the system is derived using Meijer's G-function.
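
    The modulation structure of the IGG model is easy to reproduce by sampling: irradiance is the product of a unit-mean gamma variate (small-scale) and a unit-mean inverse-Gaussian variate (large-scale). The parameter values below are illustrative, not fitted to any experiment.

```python
# Sample IGG-distributed irradiance as a gamma x inverse-Gaussian modulation.
import numpy as np

rng = np.random.default_rng(3)
alpha, lam, n = 4.0, 2.0, 100_000

small = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)   # unit-mean gamma
large = rng.wald(mean=1.0, scale=lam, size=n)               # unit-mean inverse Gaussian
irradiance = small * large

si = irradiance.var() / irradiance.mean() ** 2              # scintillation index
print(f"scintillation index ~ {si:.3f}")
```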

  14. A Microstructure-Based Constitutive Model for Superplastic Forming

    NASA Astrophysics Data System (ADS)

    Jafari Nedoushan, Reza; Farzin, Mahmoud; Mashayekhi, Mohammad; Banabic, Dorel

    2012-11-01

    A constitutive model is proposed for simulations of hot metal forming processes. The model is constructed from the dominant mechanisms that take part in hot forming: intergranular deformation, grain boundary sliding, and grain boundary diffusion. A Taylor-type polycrystalline model is used to predict intergranular deformation. Previous works on grain boundary sliding and grain boundary diffusion are extended to derive three-dimensional macroscopic stress-strain rate relationships for each mechanism. In these relationships, the effect of grain size is also taken into account. The proposed model is first used to simulate step strain-rate tests, and the results are compared with experimental data. It is shown that the model can predict flow stresses for various grain sizes and strain rates. The yield locus is then predicted for multiaxial stress states and is observed to lie very close to the von Mises yield criterion. It is also shown that the proposed model can be directly used to simulate hot forming processes: bulge forming and gas pressure tray forming are simulated, and the results are compared with experimental data.

  15. Improved CORF model of simple cell combined with non-classical receptive field and its application on edge detection

    NASA Astrophysics Data System (ADS)

    Sun, Xiao; Chai, Guobei; Liu, Wei; Bao, Wenzhuo; Zhao, Xiaoning; Ming, Delie

    2018-02-01

    Simple cells in primary visual cortex are believed to extract local edge information from a visual scene. In this paper, inspired by the different receptive field properties and visual information flow paths of neurons, an improved Combination of Receptive Fields (CORF) model incorporating non-classical receptive fields is proposed to simulate the responses of simple cells' receptive fields. Compared to the classical model, the proposed model better imitates the physiological structure of simple cells by taking into account the facilitation and suppression effects of non-classical receptive fields. On this basis, an edge detection algorithm is proposed as an application of the improved CORF model. Experimental results validate the robustness of the proposed algorithm to noise and background interference.

  16. Tracking trade transactions in water resource systems: A node-arc optimization formulation

    NASA Astrophysics Data System (ADS)

    Erfani, Tohid; Huskova, Ivana; Harou, Julien J.

    2013-05-01

    We formulate and apply a multicommodity network flow node-arc optimization model capable of tracking trade transactions in complex water resource systems. The model uses a simple node-to-node network connectivity matrix and does not require preprocessing of all possible flow paths in the network. We compare the proposed node-arc formulation with an existing arc-path (flow path) formulation and explain the advantages and difficulties of both approaches. We verify the proposed formulation on a hypothetical water distribution network. Results indicate that the arc-path model solves the problem with fewer constraints, but the proposed formulation uses a simple network connectivity matrix, which simplifies modeling of large or complex networks. The proposed algorithm allows converting existing node-arc hydroeconomic models that broadly represent water trading into ones that also track individual supplier-receiver relationships (trade transactions).
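
    A toy node-arc formulation shows why no path enumeration is needed: flows are arc variables, and a node-arc incidence matrix enforces mass balance at every node. The network, costs, and capacities below are hypothetical.

```python
# Node-arc minimum-cost flow on a tiny 4-node network, solved as an LP.
import numpy as np
from scipy.optimize import linprog

arcs = [(0, 1), (0, 2), (1, 3), (2, 3), (1, 2)]   # (tail, head)
cost = [2.0, 4.0, 3.0, 1.0, 1.0]                  # unit transport costs
cap = [8.0, 8.0, 6.0, 8.0, 5.0]                   # arc capacities

n_nodes = 4
A = np.zeros((n_nodes, len(arcs)))                # node-arc incidence matrix
for j, (t, h) in enumerate(arcs):
    A[t, j] = 1.0                                 # flow leaves the tail node
    A[h, j] = -1.0                                # flow enters the head node

b = np.array([10.0, 0.0, 0.0, -10.0])            # node 0 supplies 10, node 3 demands 10

res = linprog(cost, A_eq=A, b_eq=b,
              bounds=list(zip([0.0] * len(arcs), cap)), method="highs")
print(res.x)                                      # optimal arc flows
```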

  17. Parameterized data-driven fuzzy model based optimal control of a semi-batch reactor.

    PubMed

    Kamesh, Reddi; Rani, K Yamuna

    2016-09-01

    A parameterized data-driven fuzzy (PDDF) model structure is proposed for semi-batch processes, and its application to optimal control is illustrated. The orthonormally parameterized input trajectories, initial states and process parameters are the inputs to the model, which predicts the output trajectories in terms of Fourier coefficients. Fuzzy rules are formulated based on the signs of a linear data-driven model, while the defuzzification step incorporates a linear regression model to shift the domain from the input to the output domain. The fuzzy model is employed to formulate an optimal control problem for single-rate as well as multi-rate systems. A simulation study on a multivariable semi-batch reactor system reveals that the proposed PDDF modeling approach captures the nonlinear and time-varying behavior inherent in the semi-batch system fairly accurately. The results of operating trajectory optimization using the proposed model are comparable to those obtained using the exact first-principles model, and comparable to or better than the results of optimization based on a parameterized data-driven artificial neural network model. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  18. A Non-Intrusive Pressure Sensor by Detecting Multiple Longitudinal Waves

    PubMed Central

    Zhou, Hongliang; Lin, Weibin; Ge, Xiaocheng; Zhou, Jian

    2016-01-01

    Pressure vessels are widely used in industrial fields, and some of them are safety-critical components in the system, for example those which contain flammable or explosive material. Therefore, the pressure of these vessels becomes one of the critical measurements for operational management. In this paper, we introduce a new approach to the design of non-intrusive pressure sensors based on ultrasonic waves. The sensor model is built upon the change with pressure of the travel times of the critically refracted longitudinal (LCR) wave and the reflected longitudinal waves. To evaluate the model, experiments are carried out to compare the proposed model with other existing models. The results show that the proposed model improves accuracy compared to models based on a single wave. PMID:27527183

  19. Methodologies for validating ray-based forward model using finite element method in ultrasonic array data simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Nixon, Andrew; Barber, Tom; Budyn, Nicolas; Bevan, Rhodri; Croxford, Anthony; Wilcox, Paul

    2018-04-01

    In this paper, a methodology is proposed for using a finite element (FE) model to validate a ray-based model in the simulation of full matrix capture (FMC) ultrasonic array data sets. The overall aim is to separate the signal contributions from different interactions in the FE results so that each individual component can be compared more easily with the ray-based model results. This is achieved by combining the results from multiple FE models of the system of interest that include progressively more geometrical features while preserving the same mesh structure. It is shown that the proposed techniques allow the interactions from a large number of different ray paths to be isolated in the FE results and compared directly to the results from a ray-based forward model.

  20. Analysis Method for Laterally Loaded Pile Groups Using an Advanced Modeling of Reinforced Concrete Sections.

    PubMed

    Stacul, Stefano; Squeglia, Nunziante

    2018-02-15

    A Boundary Element Method (BEM) approach was developed for the analysis of pile groups. The proposed method includes: the non-linear behavior of the soil, represented by a hyperbolic modulus reduction curve; the non-linear response of reinforced concrete pile sections, also taking into account the influence of tension stiffening; the influence of suction, accounted for by increasing the stiffness of shallow portions of soil using the Modified Kovacs model; and the pile group shadowing effect, modeled using an approach similar to that proposed in the Strain Wedge Model for pile group analyses. The proposed BEM method saves computational effort compared to more sophisticated codes such as VERSAT-P3D, PLAXIS 3D and FLAC-3D, and provides reliable results using input data from a standard site investigation. The reliability of this method was verified by comparing results with data from full scale and centrifuge tests on single piles and pile groups. A comparison is presented between measured and computed data on a laterally loaded fixed-head pile group composed of reinforced concrete bored piles. The results of the proposed method are shown to be in good agreement with those obtained in situ.

  1. A Probabilistic Model for Sediment Entrainment: the Role of Bed Irregularity

    NASA Astrophysics Data System (ADS)

    Thanos Papanicolaou, A. N.

    2017-04-01

    A generalized probabilistic model is developed in this study to predict sediment entrainment under the incipient motion, rolling, and pickup modes. A novelty of the proposed model is that it incorporates in its formulation the probability density function of the bed shear stress, instead of the near-bed velocity fluctuations, to account for the effects of both flow turbulence and bed surface irregularity on sediment entrainment. The model incorporates the collective effects of three parameters describing bed surface irregularity, namely the relative roughness, the volumetric fraction, and the relative position of sediment particles within the active layer. Another key feature of the model is that it provides a criterion for estimating the lift and drag coefficients jointly, based on the recognition that the lift and drag forces acting on sediment particles are interdependent and vary with particle protrusion and packing density. The model was validated using laboratory data for both fine and coarse sediment and was compared with previously published models. The results show that for the fine sediment data, where the sediment particles have a more uniform gradation and relative roughness is not a factor, all the examined models perform adequately. The proposed model was particularly suited to the coarse sediment data, where the increased bed irregularity was captured by the new parameters introduced in the model formulation. As a result, for the coarse sediment data, the proposed model yielded smaller prediction errors and physically acceptable values for the lift coefficient compared to the other models.
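
    The central probabilistic step can be sketched as an exceedance integral of the bed shear stress density over a critical threshold; a lognormal density and all parameter values are assumed here purely for illustration.

```python
# Entrainment probability as exceedance of a critical bed shear stress.
import numpy as np
from scipy import stats

tau_mean, cv = 2.0, 0.4                    # mean bed shear stress (Pa), coeff. of variation
sigma = np.sqrt(np.log(1.0 + cv**2))       # lognormal parameters matching mean and CV
mu = np.log(tau_mean) - 0.5 * sigma**2

tau_cr = 3.0                               # critical stress for the chosen mode (Pa)
p_entrain = stats.lognorm.sf(tau_cr, s=sigma, scale=np.exp(mu))
print(f"P(entrainment) = {p_entrain:.4f}")
```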

  2. Rational GARCH model: An empirical test for stock returns

    NASA Astrophysics Data System (ADS)

    Takaishi, Tetsuya

    2017-05-01

    We propose a new ARCH-type model that uses a rational function to capture the asymmetric response of volatility to returns, known as the "leverage effect". Using 10 individual stocks on the Tokyo Stock Exchange and two stock indices, we compare the new model with several other asymmetric ARCH-type models. We find that according to the deviance information criterion, the new model ranks first for several stocks. Results show that the proposed new model can be used as an alternative asymmetric ARCH-type model in empirical applications.

  3. Categorical Data Analysis Using a Skewed Weibull Regression Model

    NASA Astrophysics Data System (ADS)

    Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano

    2018-03-01

    In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetric models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial responses are presented in detail. The analysis of two data sets is performed to show the efficiency of the proposed model.

  4. A biokinetic model for systemic nickel

    DOE PAGES

    Melo, Dunstana; Leggett, Richard Wayne

    2017-01-01

    The International Commission on Radiological Protection (ICRP) is updating its suite of reference biokinetic models for internally deposited radionuclides. This paper reviews data for nickel and proposes an updated biokinetic model for systemic (absorbed) nickel in adult humans for use in radiation protection. Compared with the ICRP's current model for nickel, the proposed model is based on a larger set of observations of the behavior of nickel in human subjects and laboratory animals and provides a more realistic description of the paths of movement of nickel in the body. For the two most important radioisotopes of nickel, 59Ni and 63Ni, the proposed model yields substantially lower dose estimates per unit of activity reaching blood than the current ICRP model.

  5. An Extended Damage Plasticity Model for Shotcrete: Formulation and Comparison with Other Shotcrete Models

    PubMed Central

    Neuner, Matthias; Gamnitzer, Peter; Hofstetter, Günter

    2017-01-01

    The aims of the present paper are (i) to briefly review single-field and multi-field shotcrete models proposed in the literature; (ii) to propose the extension of a damage-plasticity model for concrete to shotcrete; and (iii) to evaluate the capabilities of the proposed extended damage-plasticity model for shotcrete by comparing the predicted response with experimental data for shotcrete and with the response predicted by shotcrete models, available in the literature. The results of the evaluation will be used for recommendations concerning the application and further improvements of the investigated shotcrete models and they will serve as a basis for the design of a new lab test program, complementing the existing ones. PMID:28772445

  6. Rate-distortion analysis of dead-zone plus uniform threshold scalar quantization and its application--part II: two-pass VBR coding for H.264/AVC.

    PubMed

    Sun, Jun; Duan, Yizhou; Li, Jiangtao; Liu, Jiaying; Guo, Zongming

    2013-01-01

    In the first part of this paper, we derive a source model describing the relationship between the rate, distortion, and quantization steps of dead-zone plus uniform threshold scalar quantizers with nearly uniform reconstruction quantizers for the generalized Gaussian distribution. This source model consists of rate-quantization, distortion-quantization (D-Q), and distortion-rate (D-R) models. In this part, we first rigorously confirm the accuracy of the proposed source model by comparing calculated results with coding data from JM 16.0. Efficient parameter estimation strategies are then developed to better employ this source model in our two-pass rate control method for H.264 variable bit rate coding. Based on our D-Q and D-R models, the proposed method offers high stability and low complexity and is easy to implement. Extensive experiments demonstrate that the proposed method achieves: 1) an average peak signal-to-noise ratio variance of only 0.0658 dB, compared to 1.8758 dB for JM 16.0's method, with an average rate control error of 1.95%; and 2) significant improvement in smoothing the video quality compared with the latest two-pass rate control method.

  7. Mutual Comparative Filtering for Change Detection in Videos with Unstable Illumination Conditions

    NASA Astrophysics Data System (ADS)

    Sidyakin, Sergey V.; Vishnyakov, Boris V.; Vizilter, Yuri V.; Roslov, Nikolay I.

    2016-06-01

    In this paper we propose a new approach for change detection and moving object detection in videos with unstable, abrupt illumination changes. The approach is based on mutual comparative filters and background normalization. We give the definitions of mutual comparative filters and outline their strong advantages for change detection purposes. The presented approach allows us to deal with changing illumination conditions in a simple and efficient way, and it does not have the drawbacks of models that assume particular color transformation laws. The proposed procedure can be used to improve a number of background modelling methods that are not specifically designed to work under illumination changes.

  8. Thermodynamics-based models of transcriptional regulation with gene sequence.

    PubMed

    Wang, Shuqiang; Shen, Yanyan; Hu, Jinxing

    2015-12-01

    Quantitative models of gene regulatory activity have the potential to improve our mechanistic understanding of transcriptional regulation. However, the few models available today have been based on simplistic assumptions about the sequences being modeled or on heuristic approximations of the underlying regulatory mechanisms. In this work, we have developed a thermodynamics-based model to predict gene expression driven by any DNA sequence. The proposed model relies on a continuous-time, differential equation description of transcriptional dynamics. The sequence features of the promoter are exploited to derive the binding affinity, based on statistical molecular thermodynamics. Experimental results show that the proposed model can effectively identify the activity levels of transcription factors and the regulatory parameters. Compared with previous models, the proposed model yields more biologically meaningful results.
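
    The coupling the abstract describes, a thermodynamic (equilibrium) occupancy term driving a differential equation for transcript level, can be sketched as follows; the single-site occupancy form and all rate constants are illustrative assumptions.

```python
# Thermodynamic promoter occupancy feeding a transcript-level ODE (forward Euler).
import numpy as np

def occupancy(tf_conc, K):
    """Fraction of time the site is bound, from statistical weights c/K vs 1."""
    w = tf_conc / K
    return w / (1.0 + w)

beta, delta, K = 5.0, 0.1, 2.0      # max transcription rate, decay rate, affinity
dt, m = 0.01, 0.0
trace = []
for t in np.arange(0.0, 100.0, dt):
    tf = 1.0 + np.sin(0.1 * t)      # assumed activator concentration profile
    m += dt * (beta * occupancy(tf, K) - delta * m)   # transcription minus decay
    trace.append(m)
```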

  9. An instrumental electrode model for solving EIT forward problems.

    PubMed

    Zhang, Weida; Li, David

    2014-10-01

    An instrumental electrode model (IEM) capable of describing the performance of electrical impedance tomography (EIT) systems in the MHz frequency range is proposed. Compared with the commonly used Complete Electrode Model (CEM), which assumes ideal front-end interfaces, the proposed model considers the effects of non-ideal components in the front-end circuits. This introduces an extra boundary condition in the forward model and offers more accurate modelling of EIT systems. We have demonstrated its performance using simple geometry structures and compared the results with the CEM and full Maxwell methods. The IEM provides a significantly more accurate approximation than the CEM in the MHz frequency range, where the full Maxwell methods are favoured over the quasi-static approximation. The improved electrode model will facilitate the future characterization and front-end design of real-world EIT systems.

  10. Verification of an Analytical Method for Measuring Crystal Nucleation Rates in Glasses from DTA Data

    NASA Technical Reports Server (NTRS)

    Ranasinghe, K. S.; Wei, P. F.; Kelton, K. F.; Ray, C. S.; Day, D. E.

    2004-01-01

    A recently proposed analytical differential thermal analysis (DTA) method for estimating the nucleation rates in glasses has been evaluated by comparing experimental data with numerically computed nucleation rates for a model lithium disilicate glass. The time- and temperature-dependent nucleation rates were predicted using the model and compared with values from an analysis of numerically calculated DTA curves. The validity of the numerical approach was demonstrated earlier by comparison with experimental data. The excellent agreement between the nucleation rates from the model calculations and from the computer-generated DTA data demonstrates the validity of the proposed analytical DTA method.

  11. Development of Vehicle Model Test for Road Loading Analysis of Sedan Model

    NASA Astrophysics Data System (ADS)

    Mohd Nor, M. K.; Noordin, A.; Ruzali, M. F. S.; Hussen, M. H.

    2016-11-01

    The Simple Structural Surfaces (SSS) method is offered as a means of organizing the process of rationalizing the basic vehicle body structure load paths. This simplified approach is highly beneficial in the design development of modern passenger car structures, especially during the conceptual stage. In Malaysia, however, there is no real physical SSS model available for gaining insight into and understanding of the function of each major subassembly in the whole vehicle structure. Motivated by this, a physical SSS model of a sedan, with corresponding model vehicle tests in bending and torsion, is proposed in this work. The proposed approach is relatively easy to understand compared to the Finite Element Method (FEM). The results show that the proposed vehicle model test is capable of demonstrating that satisfactory load paths give sufficient structural stiffness within the vehicle structure. It is clearly observed that the global bending stiffness reduces significantly when more panels are removed from a complete SSS model, and the parcel shelf is identified as an important subassembly for sustaining bending load. The results also match the theoretical expectation that a structure with an open section is weak under torsion compared to bending. The proposed approach can potentially be integrated with FEM to speed up the design process of automotive vehicles.

  12. Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking the Northeast China electricity demand as an example. The data were obtained from historical records of wind power from an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecasted using the proposed hybrid method, and then these ratios are transformed to obtain the forecasted wind power values. The hybrid model combines the persistence method, MLR and LS, and includes two prediction types, multi-point prediction and single-point prediction. The WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN). By comparing the results of these models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient, and the comparison confirms that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.

  13. The Development of a Proposed Global Work-Integrated Learning Framework

    ERIC Educational Resources Information Center

    McRae, Norah; Johnston, Nancy

    2016-01-01

    Building on the work completed in BC that resulted in the development of a WIL Matrix for comparing and contrasting various forms of WIL with the Canadian co-op model, this paper proposes a Global Work-Integrated Learning Framework that allows for the comparison of a variety of models of work-integrated learning found in the international…

  14. Electrical Lumped Model Examination for Load Variation of Circulation System

    NASA Astrophysics Data System (ADS)

    Koya, Yoshiharu; Ito, Mitsuyo; Mizoshiri, Isao

    Modeling and analysis of the circulation system enable its characteristics to be determined. Many models of the circulation system have been proposed, but they are complicated because they include a large number of elements. We therefore previously proposed a complete circulation model in the form of a lumped electrical circuit, which is comparatively simple. In this paper, we examine the effectiveness of this complete lumped electrical circuit model of the circulation. Normal, angina pectoris, dilated cardiomyopathy and myocardial infarction cases are used to evaluate the ventricular contraction function.

  15. Right-sizing statistical models for longitudinal data.

    PubMed

    Wood, Phillip K; Steinley, Douglas; Jackson, Kristina M

    2015-12-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to "right-size" the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting, overly parsimonious models to more complex, better-fitting alternatives and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically underidentified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A 3-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation-covariation patterns. The orthogonal free curve slope intercept (FCSI) growth model is considered a general model that includes, as special cases, many models, including the factor mean (FM) model (McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, hierarchical linear models (HLMs), repeated-measures multivariate analysis of variance (MANOVA), and the linear slope intercept (linearSI) growth model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparing several candidate parametric growth and chronometric models in a Monte Carlo study. (c) 2015 APA, all rights reserved.

  16. Preliminary comparative assessment of PM10 hourly measurement results from new monitoring stations type using stochastic and exploratory methodology and models

    NASA Astrophysics Data System (ADS)

    Czechowski, Piotr Oskar; Owczarek, Tomasz; Badyda, Artur; Majewski, Grzegorz; Rogulski, Mariusz; Ogrodnik, Paweł

    2018-01-01

    The paper presents selected key issues from the preliminary stage of a proposed extended equivalence assessment for a new type of portable monitoring device: the comparability of hourly PM10 concentration series with reference station measurements, evaluated using statistical methods. Technical aspects of the new portable meters are also presented. The emphasis is placed on assessing the comparability of results using a combined stochastic and exploratory modeling methodology. The concept is based on the observation that simple comparability of result series in the time domain is insufficient; the comparison of regularity should be carried out in three complementary fields of statistical modeling: time, frequency and space. The proposal is based on modeling results for five annual series of measurements from the new mobile devices and from a WIOS (Provincial Environmental Protection Inspectorate) reference station located in the city of Nowy Sacz. The obtained results indicate both the completeness of the comparison methodology and the high correspondence of the new devices' measurements with the reference results.

  17. Stochastic model simulation using Kronecker product analysis and Zassenhaus formula approximation.

    PubMed

    Caglar, Mehmet Umut; Pal, Ranadip

    2013-01-01

    Probabilistic models are regularly applied in genetic regulatory network modeling to capture the stochastic behavior observed in the generation of biological entities such as mRNA or proteins. Several approaches, including stochastic master equations and probabilistic Boolean networks, have been proposed to model the stochastic behavior in genetic regulatory networks. It is generally accepted that the stochastic master equation is a fundamental model that can describe the system being investigated in fine detail, but the application of this model is computationally enormously expensive. On the other hand, the probabilistic Boolean network captures only the coarse-scale stochastic properties of the system without modeling the detailed interactions. We propose a new approximation of the stochastic master equation model that is able to capture the finer details of the modeled system, including bistabilities and oscillatory behavior, and yet has a significantly lower computational complexity. In this new method, we represent the system using tensors and derive an identity to exploit the sparse connectivity of regulatory targets for complexity reduction. The algorithm involves an approximation based on the Zassenhaus formula to represent the exponential of a sum of matrices as a product of matrices. We derive upper bounds on the expected error of the proposed model distribution as compared to the stochastic master equation model distribution. Simulation results of the application of the model to four different biological benchmark systems illustrate performance comparable to detailed stochastic master equation models but with considerably lower computational complexity. The results also demonstrate the reduced complexity of the new approach as compared to the commonly used stochastic simulation algorithm for equivalent accuracy.
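
    The role of the Zassenhaus formula can be demonstrated directly: the first commutator factor reduces the splitting error of the plain Lie-Trotter product. A small random-matrix example (matrices and step size are arbitrary):

```python
# Compare Lie-Trotter splitting with the first Zassenhaus correction factor.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
t = 0.1
A = rng.normal(size=(5, 5)) * 0.5
B = rng.normal(size=(5, 5)) * 0.5

exact = expm(t * (A + B))
trotter = expm(t * A) @ expm(t * B)                 # error O(t^2)
comm = A @ B - B @ A
zassenhaus = trotter @ expm(-(t**2) / 2.0 * comm)   # next Zassenhaus factor, error O(t^3)

print("Trotter error:   ", np.linalg.norm(exact - trotter))
print("Zassenhaus error:", np.linalg.norm(exact - zassenhaus))
```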

  18. To Control False Positives in Gene-Gene Interaction Analysis: Two Novel Conditional Entropy-Based Approaches

    PubMed Central

    Lin, Meihua; Li, Haoli; Zhao, Xiaolei; Qin, Jiheng

    2013-01-01

    Genome-wide analysis of gene-gene interactions has been recognized as a powerful avenue to identify the missing genetic components that cannot be detected by current single-point association analysis. Recently, several model-free methods (e.g., the commonly used information-based metrics and several logistic regression-based metrics) were developed for detecting non-linear dependence between genetic loci, but they are potentially at risk of inflated false positive error, in particular when the main effects at one or both loci are salient. In this study, we propose two conditional entropy-based metrics to address this limitation. Extensive simulations demonstrated that the two proposed metrics, provided the disease is rare, maintain a consistently correct false positive rate. In scenarios for a common disease, our proposed metrics achieved better or comparable control of false positive error compared to four previously proposed model-free metrics. In terms of power, our methods outperformed several competing metrics in a range of common disease models. Furthermore, in real data analyses, both metrics succeeded in detecting interactions and were competitive with the originally reported results or the logistic regression approaches. In conclusion, the proposed conditional entropy-based metrics are promising alternatives to current model-based approaches for detecting genuine epistatic effects. PMID:24339984

  1. Intelligent multiagent coordination based on reinforcement hierarchical neuro-fuzzy models.

    PubMed

    Mendoza, Leonardo Forero; Vellasco, Marley; Figueiredo, Karla

    2014-12-01

    This paper presents the research and development of two hybrid neuro-fuzzy models for the hierarchical coordination of multiple intelligent agents. The main objective of the models is to enable multiple agents to interact intelligently with each other in complex systems. We developed two new coordination models for intelligent multiagent systems, which integrate the Reinforcement Learning Hierarchical Neuro-Fuzzy model with two proposed coordination mechanisms: the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy with a market-driven coordination mechanism (MA-RL-HNFP-MD) and the MultiAgent Reinforcement Learning Hierarchical Neuro-Fuzzy with graph coordination (MA-RL-HNFP-CG). In order to evaluate the proposed models and verify the contribution of the proposed coordination mechanisms, two multiagent benchmark applications were developed: the pursuit game and robot soccer simulation. The results obtained demonstrate that the proposed coordination mechanisms greatly improve the performance of the multiagent system compared with other strategies.

  2. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consists of two phases. In the first phase, an SCL network is applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network is used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performance than other models in predicting the biological activity of chemical compounds, indicating the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.
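
    The two-phase structure can be sketched as follows, with k-means used as a stand-in for the SCL network that places the RBF centres and least squares fitting the output weights; the descriptors and activities are random placeholders.

```python
# Phase 1: competitive learning (k-means stand-in) places RBF centres.
# Phase 2: Gaussian RBF activations + least-squares output weights.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 10))                            # molecular descriptors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.1, 120)     # biological activities

centres = KMeans(n_clusters=15, n_init=10, random_state=0).fit(X).cluster_centers_
width = np.median(np.linalg.norm(X[:, None] - centres[None], axis=2))

def design(X):
    d2 = ((X[:, None] - centres[None]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width**2))                 # Gaussian RBF activations

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
pred = design(X) @ w                                      # in-sample predictions
```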

  3. Valiant load-balanced robust routing under hose model for WDM mesh networks

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoning; Li, Lemin; Wang, Sheng

    2006-09-01

    In this paper, we propose a Valiant Load-Balanced robust routing scheme for WDM mesh networks under the model of polyhedral uncertainty (i.e., the hose model), implemented with a traffic grooming approach. Our objective is to maximize the throughput under the hose model. A mathematical formulation of Valiant Load-Balanced robust routing is presented, and three fast heuristic algorithms are proposed. When implementing the Valiant Load-Balanced robust routing scheme in WDM mesh networks, a novel traffic-grooming algorithm called MHF (minimizing hop first) is proposed. We compare the three heuristic algorithms with the VPN tree under the hose model. Finally, we demonstrate through simulation results that MHF with the Valiant Load-Balanced robust routing scheme outperforms the traditional traffic-grooming algorithm in terms of throughput for uniform and non-uniform traffic matrices under the hose model.

  4. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.

  5. Gaussian mixture model based identification of arterial wall movement for computation of distension waveform.

    PubMed

    Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram

    2015-01-01

    This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using radio frequency (RF) ultrasound signals. The approach was evaluated on ultrasound RF data acquired from an artery-mimicking flow phantom using a prototype ultrasound system. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches in tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular disorders.

  6. Shock Sensitivity of energetic materials

    NASA Technical Reports Server (NTRS)

    Kim, K.

    1980-01-01

    Viscoplastic deformation is examined as the principal source of hot-spot energy. Some shock sensitivity data are explained in terms of a proposed model. A hollow sphere model is used to approximate the complex porous matrix of energetic materials. Two pieces of shock sensitivity data are qualitatively compared with results of the proposed model. The first is the p²τ law. The second is the desensitization of energetic materials by an applied ramp-wave stress. An approach to improving the model based on experimental observations is outlined.

  7. Modelling mixing within the dead space of the lung improves predictions of functional residual capacity.

    PubMed

    Harrison, Chris D; Phan, Phi Anh; Zhang, Cathy; Geer, Daniel; Farmery, Andrew D; Payne, Stephen J

    2017-08-01

    Routine estimation of functional residual capacity (FRC) in ventilated patients has been a long held goal, with many methods previously proposed, but none have been used in routine clinical practice. This paper proposes three models for determining FRC using the nitrous oxide concentration from the entire expired breath in order to improve the precision of the estimate. Of the three models proposed, a dead space with two mixing compartments provided the best results, reducing the mean limits of agreement with the FRC measured by whole body plethysmography by up to 41%. This moves away from traditional lung models, which do not account for mixing within the dead space. Compared to literature values for FRC, the results are similar to those obtained using helium dilution and better than the LUFU device (Dräger Medical, Lubeck, Germany), with significantly better limits of agreement compared to plethysmography. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. On the calculation of the complex wavenumber of plane waves in rigid-walled low-Mach-number turbulent pipe flows

    NASA Astrophysics Data System (ADS)

    Weng, Chenyang; Boij, Susann; Hanifi, Ardeshir

    2015-10-01

    A numerical method for calculating the wavenumbers of axisymmetric plane waves in rigid-walled low-Mach-number turbulent flows is proposed, which is based on solving the linearized Navier-Stokes equations with an eddy-viscosity model. In addition, theoretical models for the wavenumbers are reviewed, and the main effects (the viscothermal effects, the mean flow convection and refraction effects, the turbulent absorption, and the moderate compressibility effects) which may influence the sound propagation are discussed. Compared to the theoretical models, the proposed numerical method has the advantage of potentially including more effects in the computed wavenumbers. The numerical results of the wavenumbers are compared with the reviewed theoretical models, as well as experimental data from the literature. It shows that the proposed numerical method can give satisfactory prediction of both the real part (phase shift) and the imaginary part (attenuation) of the measured wavenumbers, especially when the refraction effects or the turbulent absorption effects become important.

  10. Augmented twin-nonlinear two-box behavioral models for multicarrier LTE power amplifiers.

    PubMed

    Hammi, Oualid

    2014-01-01

    A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride-based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models achieve the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients.

  11. A Historical Forcing Ice Sheet Model Validation Framework for Greenland

    NASA Astrophysics Data System (ADS)

    Price, S. F.; Hoffman, M. J.; Howat, I. M.; Bonin, J. A.; Chambers, D. P.; Kalashnikova, I.; Neumann, T.; Nowicki, S.; Perego, M.; Salinger, A.

    2014-12-01

    We propose an ice sheet model testing and validation framework for Greenland for the years 2000 to the present. Following Perego et al. (2014), we start with a realistic ice sheet initial condition that is in quasi-equilibrium with climate forcing from the late 1990s. This initial condition is integrated forward in time while simultaneously applying (1) surface mass balance forcing (van Angelen et al., 2013) and (2) outlet glacier flux anomalies, defined using a new dataset of Greenland outlet glacier flux for the past decade (Enderlin et al., 2014). Modeled rates of mass and elevation change are compared directly to remote sensing observations obtained from GRACE and ICESat. Here, we present a detailed description of the proposed validation framework including the ice sheet model and model forcing approach, the model-to-observation comparison process, and initial results comparing model output and observations for the time period 2000-2013.

  12. Modeling nonlinearities in MEMS oscillators.

    PubMed

    Agrawal, Deepak K; Woodhouse, Jim; Seshia, Ashwin A

    2013-08-01

    We present a mathematical model of a microelectromechanical system (MEMS) oscillator that integrates the nonlinearities of the MEMS resonator and the oscillator circuitry in a single numerical modeling environment. This is achieved by transforming the conventional nonlinear mechanical model into the electrical domain while simultaneously considering the prominent nonlinearities of the resonator. The proposed nonlinear electrical model is validated by comparing the simulated amplitude-frequency response with measurements on an open-loop electrically addressed flexural silicon MEMS resonator driven to large motional amplitudes. Next, the essential nonlinearities in the oscillator circuit are investigated and a mathematical model of a MEMS oscillator is proposed that integrates the nonlinearities of the resonator. The concept is illustrated for MEMS transimpedance-amplifier-based square-wave and sine-wave oscillators. Closed-form expressions of steady-state output power and output frequency are derived for both oscillator models and compared with experimental and simulation results, with a good match in the predicted trends in all three cases.

  13. Structural model of the open–closed–inactivated cycle of prokaryotic voltage-gated sodium channels

    PubMed Central

    Bagnéris, Claire; Naylor, Claire E.; McCusker, Emily C.

    2015-01-01

    In excitable cells, the initiation of the action potential results from the opening of voltage-gated sodium channels. These channels undergo a series of conformational changes between open, closed, and inactivated states. Many models have been proposed for the structural transitions that result in these different functional states. Here, we compare the crystal structures of prokaryotic sodium channels captured in the different conformational forms and use them as the basis for examining molecular models for the activation, slow inactivation, and recovery processes. We compare structural similarities and differences in the pore domains, specifically in the transmembrane helices, the constrictions within the pore cavity, the activation gate at the cytoplasmic end of the last transmembrane helix, the C-terminal domain, and the selectivity filter. We discuss the observed differences in the context of previous models for opening, closing, and inactivation, and present a new structure-based model for the functional transitions. Our proposed prokaryotic channel activation mechanism is then compared with the activation transition in eukaryotic sodium channels. PMID:25512599

  14. Non-stationary Bias Correction of Monthly CMIP5 Temperature Projections over China using a Residual-based Bagging Tree Model

    NASA Astrophysics Data System (ADS)

    Yang, T.; Lee, C.

    2017-12-01

    Biases in General Circulation Models (GCMs) are crucial to account for when assessing future climate changes. Currently, most bias correction methodologies suffer from the assumption that model bias is stationary. This paper provides a non-stationary bias correction model, termed the Residual-based Bagging Tree (RBT) model, to reduce simulation biases and to quantify the contributions of single models. Specifically, the proposed model estimates the residuals between individual models and observations, and takes the differences between observations and the ensemble mean into consideration during the model training process. A case study is conducted for 10 major river basins in Mainland China during different seasons. Results show that the proposed model is capable of providing accurate and stable predictions while including the non-stationarities in the modeling framework. Significant reductions in both bias and root mean squared error are achieved with the proposed RBT model, especially for the central and western parts of China. The proposed RBT model has consistently better performance in reducing biases when compared to the raw ensemble mean, the ensemble mean with simple additive bias correction, and the single best model for different seasons. Furthermore, the contribution of each single GCM in reducing the overall bias is quantified; the single-model importance varies between 3.1% and 7.2%. For different future scenarios (RCP 2.6, RCP 4.5, and RCP 8.5), the results from the RBT model suggest temperature increases of 1.44 °C, 2.59 °C, and 4.71 °C by the end of the century, respectively, when compared to the average temperature during 1970-1999.
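
    A hedged Python sketch of the residual-based idea (the paper's exact RBT design is not reproduced here): a bagged regression-tree ensemble learns the residual between observations and the multi-model ensemble mean, and corrected projections add the predicted residual back. Array shapes and hyperparameters are assumptions.

    ```python
    import numpy as np
    from sklearn.ensemble import BaggingRegressor

    def fit_rbt(single_models, observations):
        """single_models: (n_samples, n_gcms) simulated temperatures; observations: (n_samples,)."""
        residual = observations - single_models.mean(axis=1)     # bias of the raw ensemble mean
        rbt = BaggingRegressor(n_estimators=200, random_state=0) # regression trees are the default base learner
        rbt.fit(single_models, residual)                         # single-model values are the predictors
        return rbt

    def correct(rbt, single_models):
        return single_models.mean(axis=1) + rbt.predict(single_models)
    ```

    Single-model contributions, analogous to the 3.1%-7.2% importances reported above, could then be probed with permutation importance on the fitted ensemble.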

  15. Nonlinear Tracking Control of a Conductive Supercoiled Polymer Actuator.

    PubMed

    Luong, Tuan Anh; Cho, Kyeong Ho; Song, Min Geun; Koo, Ja Choon; Choi, Hyouk Ryeol; Moon, Hyungpil

    2018-04-01

    Artificial muscle actuators made from commercial nylon fishing lines have recently been introduced and shown to be a new type of actuator with high performance. However, these actuators also exhibit significant nonlinearities, which make them difficult to control, especially in precise trajectory-tracking applications. In this article, we present a nonlinear mathematical model of a conductive supercoiled polymer (SCP) actuator driven by Joule heating for model-based feedback control. Our efforts include modeling of the hysteresis behavior of the actuator. Based on the nonlinear model, we design a sliding mode controller for SCP actuator-driven manipulators. The system with the proposed control law is proven to be asymptotically stable using Lyapunov theory. The control performance of the proposed method is evaluated experimentally on one-degree-of-freedom SCP actuator-driven manipulators and compared with that of a proportional-integral-derivative (PID) controller. Experimental results show that the proposed controller is superior to the PID controller: tracking errors are nearly 10 times smaller, and the controller is more robust to external disturbances such as sensor noise and actuator modeling error.

  16. A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield

    NASA Astrophysics Data System (ADS)

    Kazama, Yoriko; Kujirai, Toshihiro

    2014-10-01

    A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information together) while reducing the occurrence of multicollinearity, is proposed. Experiments on soybean yields in Brazilian fields with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. For the SSD-HB model, the mean absolute error between the estimated yield of the target area and the actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional partial least squares (PLS) regression was applied, showing the potential effectiveness of the proposed model.

  17. Nonlinear system modeling based on bilinear Laguerre orthonormal bases.

    PubMed

    Garna, Tarek; Bouzrara, Kais; Ragot, José; Messaoud, Hassani

    2013-05-01

    This paper proposes a new representation of the discrete bilinear model by developing its coefficients associated with the input, the output and the crossed product on three independent Laguerre orthonormal bases. Compared to the classical bilinear model, the resulting model, entitled the bilinear-Laguerre model, ensures a significant reduction in the number of parameters as well as a simple recursive representation. However, such a reduction is still constrained by an optimal choice of the Laguerre pole characterizing each basis. To this end, we develop a pole optimization algorithm that extends the one proposed by Tanguy et al. The bilinear-Laguerre model and the proposed pole optimization algorithm are illustrated and tested on numerical simulations and validated on a Continuous Stirred Tank Reactor (CSTR) system.

  18. Extracting TSK-type Neuro-Fuzzy model using the Hunting search algorithm

    NASA Astrophysics Data System (ADS)

    Bouzaida, Sana; Sakly, Anis; M'Sahli, Faouzi

    2014-01-01

    This paper proposes a Takagi-Sugeno-Kang (TSK) type Neuro-Fuzzy model tuned by a novel metaheuristic optimization algorithm called Hunting Search (HuS). The HuS algorithm is derived from a model of group hunting of animals such as lions, wolves, and dolphins when looking for prey. In this study, the structure and parameters of the fuzzy model are encoded into a particle, so the optimal structure and parameters are achieved simultaneously. The proposed method was demonstrated on modeling and control problems, and the results were compared with other optimization techniques. The comparisons indicate that the proposed method represents a powerful search approach and an effective optimization technique, as it can extract an accurate TSK fuzzy model with an appropriate number of rules.

  19. Hybrid active contour model for inhomogeneous image segmentation with background estimation

    NASA Astrophysics Data System (ADS)

    Sun, Kaiqiong; Li, Yaqin; Zeng, Shan; Wang, Jun

    2018-03-01

    This paper proposes a hybrid active contour model for inhomogeneous image segmentation. The data term of the energy function in the active contour consists of a global region fitting term in a difference image and a local region fitting term in the original image. The difference image is obtained by subtracting the background from the original image. The background image is dynamically estimated from a linear filtered result of the original image on the basis of the varying curve locations during the active contour evolution process. As in existing local models, fitting the image to local region information makes the proposed model robust against an inhomogeneous background and maintains the accuracy of the segmentation result. Furthermore, fitting the difference image to the global region information makes the proposed model robust against the initial contour location, unlike existing local models. Experimental results show that the proposed model can obtain improved segmentation results compared with related methods in terms of both segmentation accuracy and initial contour sensitivity.

  20. A 3D model retrieval approach based on Bayesian networks lightfield descriptor

    NASA Astrophysics Data System (ADS)

    Xiao, Qinhan; Li, Yanjun

    2009-12-01

    A new 3D model retrieval methodology is proposed by exploiting a novel Bayesian networks lightfield descriptor (BNLD). There are two key novelties in our approach: (1) a BN-based method for building the lightfield descriptor; and (2) a 3D model retrieval scheme based on the proposed BNLD. To overcome the disadvantages of existing 3D model retrieval methods, we explore BNs for building a new lightfield descriptor. First, the 3D model is placed in a lightfield, and about 300 binary views are obtained along a sphere; Fourier descriptors and Zernike moment descriptors are then calculated from the binary views, and the shape feature sequence is learned into a BN model using a BN learning algorithm. Second, we propose a new 3D model retrieval method that calculates the Kullback-Leibler Divergence (KLD) between BNLDs. Benefiting from statistical learning, our BNLD is robust to noise compared to existing methods. A comparison between our method and the lightfield descriptor-based approach demonstrates the effectiveness of the proposed methodology.

  1. An efficient sequential strategy for realizing cross-gradient joint inversion: method and its application to 2-D cross borehole seismic traveltime and DC resistivity tomography

    NASA Astrophysics Data System (ADS)

    Gao, Ji; Zhang, Haijiang

    2018-05-01

    Cross-gradient joint inversion that enforces structural similarity between different models has been widely utilized in jointly inverting different geophysical data types. However, it is a challenge to combine different geophysical inversion systems with the cross-gradient structural constraint into one joint inversion system because they may differ greatly in the model representation, forward modelling and inversion algorithm. Here we propose a new joint inversion strategy that can avoid this issue. Different models are separately inverted using the existing inversion packages and model structure similarity is only enforced through cross-gradient minimization between two models after each iteration. Although the data fitting and structural similarity enforcing processes are decoupled, our proposed strategy is still able to choose appropriate models to balance the trade-off between geophysical data fitting and structural similarity. This is realized by using model perturbations from separate data inversions to constrain the cross-gradient minimization process. We have tested this new strategy on 2-D cross borehole synthetic seismic traveltime and DC resistivity data sets. Compared to separate geophysical inversions, our proposed joint inversion strategy fits the separate data sets at comparable levels while at the same time resulting in a higher structural similarity between the velocity and resistivity models.
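
    As a small illustration of the structural constraint itself (not the authors' inversion code), the cross-gradient of two 2-D models can be evaluated with numpy; uniform grid spacing is assumed.

    ```python
    import numpy as np

    def cross_gradient(m1, m2, dz=1.0, dx=1.0):
        g1z, g1x = np.gradient(m1, dz, dx)   # e.g. velocity model gradients
        g2z, g2x = np.gradient(m2, dz, dx)   # e.g. resistivity model gradients
        return g1x * g2z - g1z * g2x         # out-of-plane component of grad(m1) x grad(m2)
    ```

    In the proposed strategy, the minimization after each iteration would seek model updates that reduce the squared sum of this quantity while staying within the perturbation range suggested by the separate data inversions.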

  2. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling

    NASA Astrophysics Data System (ADS)

    Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana

    2018-01-01

    This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. Climatic data from NCEP are used for training the proposed models (Jan. '69 to Dec. '94), which are then applied to the corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan. '95-Dec. '05) and forecast (Jan. '06-Dec. '35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) models is used to model the representative PCs and obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that the wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability compared to stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE over-estimate the rainfall during the dry season.
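
    A minimal Python sketch of the MWE-plus-k-means step under stated assumptions (pywt's discrete wavelet decomposition as the transform, relative scale energies feeding a Shannon entropy, illustrative array shapes); it is not the authors' code.

    ```python
    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def wavelet_entropy(series, wavelet="db4", level=4):
        coeffs = pywt.wavedec(series, wavelet, level=level)
        energy = np.array([np.sum(c ** 2) for c in coeffs])
        p = energy / energy.sum()                 # relative energy per scale
        return -np.sum(p * np.log(p + 1e-12))     # Shannon entropy across scales

    def cluster_by_mwe(climate_vars, n_clusters=5):
        """climate_vars: (n_variables, n_months) candidate GCM predictor series."""
        feats = np.array([[wavelet_entropy(v, level=l) for l in (2, 3, 4)]
                          for v in climate_vars])  # entropy at several depths = multi-scale signature
        return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)
    ```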

  3. Conceptual model of iCAL4LA: Proposing the components using comparative analysis

    NASA Astrophysics Data System (ADS)

    Ahmad, Siti Zulaiha; Mutalib, Ariffin Abdul

    2016-08-01

    This paper discusses an on-going study that initiates the process of determining the common components for a conceptual model of interactive computer-assisted learning specifically designed for low-achieving children. This group of children needs specific learning support that can serve as alternative learning material in their learning environment. In order to develop the conceptual model, this study extracts the common components from 15 strongly justified computer-assisted learning studies. A comparative analysis was conducted to determine the most appropriate components, using a set of indication classifications to prioritize applicability. The extraction process reveals 17 common components for consideration. Later, based on scientific justifications, 16 of them were selected as the proposed components for the model.

  4. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The proposed methodology consists of two steps. The first step is to decompose the TA model into several submodels using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation of the subproblem for each submodel. The proposed methodology is applied to flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  5. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  6. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  7. A visual model for object detection based on active contours and level-set method.

    PubMed

    Satoh, Shunji

    2006-09-01

    A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involving the initial values of active contours is alleviated by introducing and formulating convexity, a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, a characteristic observed in human visual perception.

  8. Augmented Twin-Nonlinear Two-Box Behavioral Models for Multicarrier LTE Power Amplifiers

    PubMed Central

    2014-01-01

    A novel class of behavioral models is proposed for LTE-driven Doherty power amplifiers with strong memory effects. The proposed models, labeled augmented twin-nonlinear two-box models, are built by cascading a highly nonlinear memoryless function with a mildly nonlinear memory polynomial with cross terms. Experimental validation on gallium nitride-based Doherty power amplifiers illustrates the accuracy enhancement and complexity reduction achieved by the proposed models. When strong memory effects are observed, the augmented twin-nonlinear two-box models can improve the normalized mean square error by up to 3 dB for the same number of coefficients when compared to state-of-the-art twin-nonlinear two-box models. Furthermore, the augmented twin-nonlinear two-box models achieve the same performance as previously reported twin-nonlinear two-box models while requiring up to 80% fewer coefficients. PMID:24624047

  9. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results.

    PubMed

    Humada, Ali M; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M; Ahmed, Mushtaq N

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems in fluctuating climatic conditions.
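
    The three-parameter model described here is, in essence, the ideal single-diode equation; a short illustrative Python sketch follows (parameter values are placeholders, not the paper's fitted values).

    ```python
    import numpy as np

    k, q = 1.380649e-23, 1.602176634e-19     # Boltzmann constant, elementary charge

    def pv_current(V, IL=8.0, Io=1e-9, n=1.3, T=298.15, Ns=60):
        """I = IL - Io*(exp(V/(n*Ns*Vt)) - 1) for a module of Ns series cells."""
        Vt = k * T / q                       # thermal voltage of a single cell
        return IL - Io * np.expm1(V / (n * Ns * Vt))

    V = np.linspace(0.0, 38.0, 200)          # voltage sweep traces the I-V characteristic
    I = np.clip(pv_current(V), 0.0, None)
    ```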

  10. Photovoltaic Grid-Connected Modeling and Characterization Based on Experimental Results

    PubMed Central

    Humada, Ali M.; Hojabri, Mojgan; Sulaiman, Mohd Herwan Bin; Hamada, Hussein M.; Ahmed, Mushtaq N.

    2016-01-01

    A grid-connected photovoltaic (PV) system operating under fluctuating weather conditions has been modeled and characterized based on a specific test bed. A mathematical model of a small-scale PV system has been developed mainly for residential usage, and the potential results have been simulated. The proposed PV model is based on three parameters: the photocurrent, IL; the reverse diode saturation current, Io; and the diode ideality factor, n. The accuracy of the proposed model and its parameters was evaluated against different benchmarks. The results showed that the proposed model fits the experimental results, including the I-V characteristic curve, with high accuracy compared to the other models. The results of this study can be considered valuable for the installation of grid-connected PV systems in fluctuating climatic conditions. PMID:27035575

  11. Modeling pedestrian shopping behavior using principles of bounded rationality: model comparison and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Timmermans, Harry

    2011-06-01

    Models of geographical choice behavior have been predominantly based on rational choice models, which assume that decision makers are utility-maximizers. Rational choice models may be less appropriate as behavioral models when modeling decisions in complex environments in which decision makers may simplify the decision problem using heuristics. Pedestrian behavior in shopping streets is an example. We therefore propose a modeling framework for pedestrian shopping behavior incorporating principles of bounded rationality. We extend three classical heuristic rules (the conjunctive, disjunctive and lexicographic rules) by introducing threshold heterogeneity. The proposed models are implemented using data on pedestrian behavior in Wang Fujing Street, in the city center of Beijing, China. The models are estimated and compared with multinomial logit models and mixed logit models. Results show that the heuristic models are best for all the decisions modeled. Validation tests are carried out through multi-agent simulation, comparing simulated spatio-temporal agent behavior with the observed pedestrian behavior. The predictions of the heuristic models are slightly better than those of the multinomial logit models.
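
    A toy Python sketch of the three heuristic rules being extended (illustrative only; the paper estimates the thresholds, with heterogeneity across pedestrians, from observed choices).

    ```python
    def conjunctive(attrs, thresholds):
        # accept an alternative only if every attribute clears its threshold
        return all(a >= t for a, t in zip(attrs, thresholds))

    def disjunctive(attrs, thresholds):
        # accept if at least one attribute clears its threshold
        return any(a >= t for a, t in zip(attrs, thresholds))

    def lexicographic(alternatives, priority):
        # compare attributes in priority order, keeping the current best alternatives
        best = list(range(len(alternatives)))
        for j in priority:
            top = max(alternatives[i][j] for i in best)
            best = [i for i in best if alternatives[i][j] == top]
            if len(best) == 1:
                break
        return best[0]
    ```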

  12. Reproducing the nonlinear dynamic behavior of a structured beam with a generalized continuum model

    NASA Astrophysics Data System (ADS)

    Vila, J.; Fernández-Sáez, J.; Zaera, R.

    2018-04-01

    In this paper we study the coupled axial-transverse nonlinear vibrations of a kind of one-dimensional structured solid by application of the so-called Inertia Gradient Nonlinear continuum model. To show the accuracy of this axiomatic model, previously proposed by the authors, its predictions are compared with numerical results from a previously defined finite discrete chain of lumped masses and springs, for several numbers of particles. A continualization of the discrete model equations based on Taylor series allowed us to set equivalent values of the mechanical properties in both the discrete and axiomatic continuum models. Contrary to the classical continuum model, the inertia gradient nonlinear continuum model used herein is able to capture scale effects, which arise for modes in which the wavelength is comparable to the characteristic distance of the structured solid. The main conclusion of the work is that the proposed generalized continuum model captures the scale effects in both the linear and nonlinear regimes, adequately reproducing the behavior of the 1D nonlinear discrete model.

  13. Large eddy simulation of piloted pulverised coal combustion using extended flamelet/progress variable model

    NASA Astrophysics Data System (ADS)

    Wen, Xu; Luo, Kun; Jin, Hanhui; Fan, Jianren

    2017-09-01

    An extended flamelet/progress variable (EFPV) model for simulating pulverised coal combustion (PCC) in the context of large eddy simulation (LES) is proposed, in which devolatilisation, char surface reaction and radiation are all taken into account. The pulverised coal particles are tracked in the Lagrangian framework with various sub-models, and the sub-grid scale (SGS) effects of turbulent velocity and scalar fluctuations on the coal particles are modelled by the velocity-scalar joint filtered density function (VSJFDF) model. The presented model is then evaluated by LES of an experimental piloted coal jet flame, comparing the numerical results with the experimental data and with the results from the eddy break-up (EBU) model. Detailed quantitative comparisons are carried out. It is found that the proposed model performs much better than the EBU model in predicting radial velocity and species concentrations. Compared with the adiabatic counterpart, the predicted temperature is noticeably lower and agrees well with the experimental data when the conditional sampling method is adopted.

  14. Efficiently modelling urban heat storage: an interface conduction scheme in an urban land surface model (aTEB v2.0)

    NASA Astrophysics Data System (ADS)

    Lipson, Mathew J.; Hart, Melissa A.; Thatcher, Marcus

    2017-03-01

    Intercomparison studies of models simulating the partitioning of energy over urban land surfaces have shown that the heat storage term is often poorly represented. In this study, two implicit discrete schemes representing heat conduction through urban materials are compared. We show that a well-established method of representing conduction systematically underestimates the magnitude of heat storage compared with exact solutions of one-dimensional heat transfer. We propose an alternative method of similar complexity that is better able to match exact solutions at typically employed resolutions. The proposed interface conduction scheme is implemented in an urban land surface model and its impact assessed over a 15-month observation period for a site in Melbourne, Australia, resulting in improved overall model performance for a variety of common material parameter choices and aerodynamic heat transfer parameterisations. The proposed scheme has the potential to benefit land surface models where computational constraints require a high level of discretisation in time and space, for example at neighbourhood/city scales, and where realistic material properties are preferred, for example in studies investigating impacts of urban planning changes.
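
    For intuition, a hedged Python sketch of one implicit (backward-Euler) conduction step through a discretised wall; this is a generic 1-D scheme under assumed Dirichlet boundaries, not the aTEB interface scheme itself.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def implicit_step(T, alpha, dx, dt, T_in, T_out):
        """Advance layer temperatures T (N,) by one implicit step; alpha is thermal diffusivity."""
        N = len(T)
        r = alpha * dt / dx ** 2
        ab = np.zeros((3, N))
        ab[0, 1:] = -r                  # superdiagonal
        ab[1, :] = 1.0 + 2.0 * r        # main diagonal
        ab[2, :-1] = -r                 # subdiagonal
        rhs = T.copy()
        rhs[0] += r * T_in              # indoor-surface boundary condition
        rhs[-1] += r * T_out            # outdoor-surface boundary condition
        return solve_banded((1, 1), ab, rhs)
    ```

    Because the implicit step is unconditionally stable, the layer count and time step can be chosen for accuracy rather than stability, which is the practical concern the proposed interface scheme addresses.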

  15. A recurrent neural network for solving bilevel linear programming problem.

    PubMed

    He, Xing; Li, Chuandong; Huang, Tingwen; Li, Chaojie; Huang, Junjian

    2014-04-01

    In this brief, based on the method of penalty functions, a recurrent neural network (NN) modeled by means of a differential inclusion is proposed for solving the bilevel linear programming problem (BLPP). Compared with existing NNs for the BLPP, the model has the smallest number of state variables and a simple structure. Using nonsmooth analysis, the theory of differential inclusions, and a Lyapunov-like method, the equilibrium point sequence of the proposed NN can be shown to converge approximately to an optimal solution of the BLPP under certain conditions. Finally, numerical simulations of a supply chain distribution model show the excellent performance of the proposed recurrent NN.

  16. A magneto-rheological fluid mount featuring squeeze mode: analysis and testing

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Bai, Xian-Xu; Qian, Li-Jun; Choi, Seung-Bok

    2016-05-01

    This paper presents a mathematical model for a new semi-active vehicle engine mount utilizing magneto-rheological (MR) fluids in squeeze mode (MR mount in short) and validates the model by comparing analysis results with experimental tests. The proposed MR mount mainly comprises a frame for installation, a main rubber, a squeeze plate and a bobbin for coil winding. When the magnetic field is on, the MR effect occurs in the upper gap between the squeeze plate and the bobbin, and the dynamic stiffness can be controlled by tuning the applied current. Employing the Bingham model and the flow properties of MR fluids between parallel plates, a mathematical model for the squeeze-type MR mount is formulated with consideration of the fluid inertia, the MR effect and the hysteresis property. The field-dependent dynamic stiffness of the MR mount is then analyzed using the established mathematical model. Subsequently, in order to validate the mathematical model, an appropriately sized MR mount is fabricated and tested. The field-dependent force and dynamic stiffness of the proposed MR mount are evaluated and compared between the model and experimental tests in both the time and frequency domains to verify the model's accuracy. In addition, it is shown that both the damping and stiffness properties of the proposed MR mount can be controlled simultaneously.

  17. EMG-Based Estimation of Limb Movement Using Deep Learning With Recurrent Convolutional Neural Networks.

    PubMed

    Xia, Peng; Hu, Jie; Peng, Yinghong

    2017-10-25

    A novel model based on deep learning is proposed to estimate kinematic information for myoelectric control from multi-channel electromyogram (EMG) signals. The neural information of limb movement is embedded in EMG signals that are influenced by all kinds of factors. In order to overcome the negative effects of variability in signals, the proposed model employs the deep architecture combining convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The EMG signals are transformed to time-frequency frames as the input to the model. The limb movement is estimated by the model that is trained with the gradient descent and backpropagation procedure. We tested the model for simultaneous and proportional estimation of limb movement in eight healthy subjects and compared it with support vector regression (SVR) and CNNs on the same data set. The experimental studies show that the proposed model has higher estimation accuracy and better robustness with respect to time. The combination of CNNs and RNNs can improve the model performance compared with using CNNs alone. The model of deep architecture is promising in EMG decoding and optimization of network structures can increase the accuracy and robustness.
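
    A minimal PyTorch sketch of the CNN-plus-RNN idea (layer sizes, pooling, and the single-frame output head are assumptions, not the paper's architecture): per-frame convolutional features from time-frequency inputs are fed through an LSTM and regressed to kinematics.

    ```python
    import torch
    import torch.nn as nn

    class EMGDecoder(nn.Module):
        def __init__(self, n_channels=8, n_outputs=2, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv2d(n_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten())       # per-frame feature vector
            self.rnn = nn.LSTM(16 * 4 * 4, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_outputs)              # proportional kinematic outputs

        def forward(self, x):             # x: (batch, frames, channels, freq, time)
            b, f = x.shape[:2]
            feats = self.cnn(x.flatten(0, 1)).view(b, f, -1)      # CNN applied frame by frame
            out, _ = self.rnn(feats)                              # temporal context across frames
            return self.head(out[:, -1])                          # estimate at the latest frame
    ```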

  18. Robust visual tracking via multiple discriminative models with object proposals

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin

    2018-04-01

    Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. Firstly, the target location and scale changes are captured by many high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is exploited to warp object proposals of various sizes into vectors of the same length, which are used to learn a discriminative model conveniently. Lastly, these historical snapshot vectors are used to train models with different lifetimes. Based on an entropy decision mechanism, a model degraded by drift can be corrected by selecting the best discriminative model. This improves the robustness of the tracker significantly. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best performance in precision and success rate compared with state-of-the-art trackers.

  19. Thermodynamics of protein folding using a modified Wako-Saitô-Muñoz-Eaton model.

    PubMed

    Tsai, Min-Yeh; Yuan, Jian-Min; Teranishi, Yoshiaki; Lin, Sheng Hsien

    2012-09-01

    Herein, we propose a modified version of the Wako-Saitô-Muñoz-Eaton (WSME) model. The proposed model introduces an empirical temperature parameter for the hypothetical structural units (i.e., foldons) in proteins to include site-dependent thermodynamic behavior. The thermodynamics of both our proposed model and the original WSME model were investigated. For a system with beta-hairpin topology, a mathematical treatment (contact-pair treatment) was developed to facilitate the calculation of its partition function. The results show that the proposed model provides better insight into the site-dependent thermodynamic behavior of the system compared with the original WSME model. From this site-dependent point of view, the relationship between probe-dependent experimental results and the model's thermodynamic predictions can be explained. The model also suggests a general principle for identifying foldon behavior. We further find that the backbone hydrogen bonds may play the role of structural constraints in modulating the cooperative system. Thus, our study may contribute to the understanding of the fundamental principles of the thermodynamics of protein folding.

  20. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron.
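
    A hedged stand-in for this pipeline in Python, with ordinary PCA replacing (2D)2PCA and an RBF-kernel ridge regressor standing in for the RBFNN; window construction and hyperparameters are illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.kernel_ridge import KernelRidge

    def fit_forecaster(X_windows, y_next, n_components=10):
        """X_windows: (n_days, window*36) flattened sliding windows of the 36 indicators."""
        pca = PCA(n_components=n_components).fit(X_windows)       # dimension reduction step
        model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)   # RBF stand-in for the RBFNN
        model.fit(pca.transform(X_windows), y_next)
        return pca, model

    def forecast(pca, model, X_new):
        return model.predict(pca.transform(X_new))                # next-day price estimate
    ```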

  1. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483

  2. Innovative methods for calculation of freeway travel time using limited data : final report.

    DOT National Transportation Integrated Search

    2008-01-01

    Description: Travel time estimates created by processing simulated freeway loop detector data with the proposed method have been compared with travel times reported by the VISSIM model. An improved methodology was proposed to estimate freeway corrido...

  3. Comparative Costs of Manpower Education: A Methodological Study.

    ERIC Educational Resources Information Center

    Lyman, Jay Rich

    The objective of this study was to establish criteria and a model for the comparative evaluation of manpower educational programs. The criteria developed deal with resource allocation in manpower education programs and how well those programs meet the needs of industry. In the proposed model, an occupation is reduced to its basic skills, which are…

  4. Numerical modeling of dynamics of heart rate and arterial pressure during passive orthostatic test

    NASA Astrophysics Data System (ADS)

    Ishbulatov, Yu. M.; Kiselev, A. R.; Karavaev, A. S.

    2018-04-01

    A model of the human cardiovascular system is proposed that describes the main heart rhythm; the influence of autonomic regulation on the frequency and strength of heart contractions and on the resistance of arterial vessels; the formation of arterial pressure during the systolic and diastolic phases; the influence of respiration; and synchronization between the loops of autonomic regulation. The proposed model is used to simulate the dynamics of heart rate and arterial pressure during a passive transition from the supine to the upright position. The results of the mathematical modeling are compared with original experimental data.

  5. Numerical Analysis of Modeling Based on Improved Elman Neural Network

    PubMed Central

    Jie, Shao

    2014-01-01

    A model based on the improved Elman neural network (IENN) is proposed to analyze nonlinear circuits with memory effects. In this model, the hidden layer neurons are activated by a group of Chebyshev orthogonal basis functions instead of sigmoid functions. The error curves of the sum of squared error (SSE), varying with the number of hidden neurons and the iteration step, are studied to determine the number of hidden layer neurons. Simulation results for the half-bridge class-D power amplifier (CDPA), with two-tone and broadband signals as input, show that the proposed behavioral model can reconstruct the CDPA system accurately and depict the memory effect of CDPAs well. Compared with the Volterra-Laguerre (VL) model, the Chebyshev neural network (CNN) model, and the basic Elman neural network (BENN) model, the proposed model has better performance. PMID:25054172

  6. Multiview road sign detection via self-adaptive color model and shape context matching

    NASA Astrophysics Data System (ADS)

    Liu, Chunsheng; Chang, Faliang; Liu, Chengyun

    2016-09-01

    The multiview appearance of road signs in uncontrolled environments has made road sign detection a challenging problem in computer vision. We propose a road sign detection method to detect multiview road signs. The method is based on several algorithms, including the classical cascaded detector, the self-adaptive weighted Gaussian color model (SW-Gaussian model), and a shape context matching method. The classical cascaded detector is used to detect frontal road signs in video sequences and obtain the parameters for the SW-Gaussian model. The proposed SW-Gaussian model combines a two-dimensional Gaussian model and the normalized red channel, which largely enhances the contrast between red signs and the background. The proposed shape context matching method can match shapes with substantial noise and is used to detect road signs viewed from different directions. The experimental results show that, compared with previous detection methods, the proposed multiview detection method reaches a higher detection rate for signs viewed from different directions.

  7. Research on regularized mean-variance portfolio selection strategy with modified Roy safety-first principle.

    PubMed

    Atta Mills, Ebenezer Fiifi Emire; Yan, Dawen; Yu, Bo; Wei, Xinyuan

    2016-01-01

    We propose a consolidated risk measure based on variance and the safety-first principle in a mean-risk portfolio optimization framework. The safety-first principle for financial portfolio selection is modified and improved. Our proposed models are subjected to norm regularization to seek near-optimal, stable and sparse portfolios. We compare the cumulative wealth of our preferred proposed model to a benchmark, the S&P 500 index, over the same period. Our proposed portfolio strategies have better out-of-sample performance than alternative portfolio rules selected from the literature and control the downside risk of the portfolio returns.

  8. Local Intrinsic Dimension Estimation by Generalized Linear Modeling.

    PubMed

    Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru

    2017-07-01

    We propose a method for intrinsic dimension estimation. By fitting a regression model to the relationship between the distance from an inspection point and the number of samples inside a ball with radius equal to that distance, we estimate the goodness of fit. Then, using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
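
    The core geometric fact behind the method is that, near an inspection point, the number of samples inside a ball of radius r scales as r^d for intrinsic dimension d. A minimal numpy sketch follows (a plain log-log slope rather than the paper's generalized-linear-model fit):

    ```python
    import numpy as np

    def local_id(data, point, k=50):
        """Estimate the intrinsic dimension around `point` from its k nearest neighbours."""
        dists = np.sort(np.linalg.norm(data - point, axis=1))
        d = dists[1:k + 1]                      # skip the zero distance to the point itself
        counts = np.arange(1, k + 1)            # N(r) at each neighbour radius
        slope, _ = np.polyfit(np.log(d), np.log(counts), 1)  # log N(r) ~ d*log r + c
        return slope
    ```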

  9. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model.

    PubMed

    Nallikuzhy, Jiss J; Dandapat, S

    2017-06-01

    In this work, a new patient-specific approach to enhance the spatial resolution of the ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score, which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. The proposed model is compared with existing models that transform a subset of the standard twelve-lead ECG into the full standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better than the other models.

  10. Unsaturated consolidation theory for the prediction of long-term municipal solid waste landfill settlement.

    PubMed

    Liu, Chia-Nan; Chen, Rong-Her; Chen, Kuo-Sheng

    2006-02-01

    The understanding of long-term landfill settlement is important for landfill design and rehabilitation. However, suitable models that can consider both the mechanical and biodegradation mechanisms in predicting long-term landfill settlement are generally not available. In this paper, a model based on unsaturated consolidation theory that accounts for the biodegradation process is introduced to simulate landfill settlement behaviour. The details of the problem formulation and the derivation of the solution for the formulated differential equation of gas pressure are presented. A step-by-step analytical procedure employing this approach for estimating settlement is proposed. The proposed model reproduces the typical features of short-term and long-term behaviour and yields results that are comparable with field measurements.

  11. Design of a hybrid model for cardiac arrhythmia classification based on Daubechies wavelet transform.

    PubMed

    Rajagopal, Rekha; Ranganathan, Vidhyapriya

    2018-06-05

    Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about a patient's health. The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through the Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from the K nearest neighbor (KNN) classifier and a support vector machine (SVM). The proposed model is able to classify 5 arrhythmia classes as per the ANSI/AAMI EC57:1998 classification standard. Level 1 of the proposed model involves classification using the KNN, with the classifier trained on examples from all classes. Level 2 involves classification using an SVM trained specifically to classify overlapped classes. The final classification of a test heartbeat to a particular class is made using the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. The results obtained using the proposed model were compared with the results of discriminant, tree, and KNN classifiers. The proposed model achieves a high classification accuracy.
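
    A hedged sklearn sketch of the two-level decision (feature extraction omitted; the identity of the overlapped classes is an assumption for illustration):

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    def fit_hybrid(X, y, overlapped=(1, 3)):
        knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)   # level 1: trained on all classes
        mask = np.isin(y, overlapped)
        svm = SVC(kernel="rbf").fit(X[mask], y[mask])         # level 2: overlapped classes only
        return knn, svm

    def predict_hybrid(knn, svm, overlapped, X):
        y_hat = knn.predict(X)
        refine = np.isin(y_hat, overlapped)
        if refine.any():
            y_hat[refine] = svm.predict(X[refine])            # SVM arbitrates confusable beats
        return y_hat
    ```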

  12. Comparative modeling without implicit sequence alignments.

    PubMed

    Kolinski, Andrzej; Gront, Dominik

    2007-10-01

    The number of known protein sequences is about thousand times larger than the number of experimentally solved 3D structures. For more than half of the protein sequences a close or distant structural analog could be identified. The key starting point in a classical comparative modeling is to generate the best possible sequence alignment with a template or templates. With decreasing sequence similarity, the number of errors in the alignments increases and these errors are the main causes of the decreasing accuracy of the molecular models generated. Here we propose a new approach to comparative modeling, which does not require the implicit alignment - the model building phase explores geometric, evolutionary and physical properties of a template (or templates). The proposed method requires prior identification of a template, although the initial sequence alignment is ignored. The model is built using a very efficient reduced representation search engine CABS to find the best possible superposition of the query protein onto the template represented as a 3D multi-featured scaffold. The criteria used include: sequence similarity, predicted secondary structure consistency, local geometric features and hydrophobicity profile. For more difficult cases, the new method qualitatively outperforms existing schemes of comparative modeling. The algorithm unifies de novo modeling, 3D threading and sequence-based methods. The main idea is general and could be easily combined with other efficient modeling tools as Rosetta, UNRES and others.

  13. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix.

    PubMed

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy; He, Jing

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4-7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map-model pairs.
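
    A minimal numpy sketch of the arc-length idea under stated assumptions (both axes given as 3-D polylines, compared at matching arc-length fractions); the paper's exact association metric may differ.

    ```python
    import numpy as np

    def resample_by_arclength(axis, n=100):
        seg = np.linalg.norm(np.diff(axis, axis=0), axis=1)
        s = np.concatenate([[0.0], np.cumsum(seg)])           # cumulative arc length
        t = np.linspace(0.0, s[-1], n)                        # n equal arc-length fractions
        pts = np.stack([np.interp(t, s, axis[:, i]) for i in range(3)], axis=1)
        return pts, s[-1]

    def compare_axes(axis_a, axis_b, n=100):
        A, len_a = resample_by_arclength(axis_a, n)
        B, len_b = resample_by_arclength(axis_b, n)
        lateral = np.linalg.norm(A - B, axis=1)               # pointwise separation
        return lateral.mean(), abs(len_a - len_b)             # lateral vs longitudinal discrepancy
    ```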

  14. Comparing an Atomic Model or Structure to a Corresponding Cryo-electron Microscopy Image at the Central Axis of a Helix

    PubMed Central

    Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy

    2017-01-01

    Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4–7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map–model pairs. PMID:27936925

  15. Deep feature classification of angiomyolipoma without visible fat and renal cell carcinoma in abdominal contrast-enhanced CT images with texture image patches and hand-crafted feature concatenation.

    PubMed

    Lee, Hansang; Hong, Helen; Kim, Junmo; Jung, Dae Chul

    2018-04-01

    To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CE CT) images. A dataset of 80 abdominal CT images from 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating small renal masses (SRM) into AMLwvf and ccRCC using a combination of hand-crafted and deep features and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from an ImageNet-pretrained deep learning model applied to the SRM image patches. In DF extraction, we proposed texture image patches (TIP) to emphasize the texture information inside the mass in the DFs and to reduce the mass size variability. Finally, the two feature sets were concatenated and a random forest (RF) classifier was trained on the concatenated features to classify the types of SRMs. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). In the experiments, combinations of four deep learning models, AlexNet, VGGNet, GoogleNet, and ResNet, and four input image patches, including original, masked, mass-size, and texture image patches, were compared and analyzed. In the qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method. In the quantitative evaluation, we compared the classification results and observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet generally showed the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patches but also steady performance regardless of the CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for HCF + DF with AlexNet and TIPs, which improved the accuracy by 6.6 and 8.3 percentage points compared to HCF-only and DF-only, respectively. The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of features for differentiating AMLwvf from ccRCC in abdominal CE CT images. © 2018 American Association of Physicists in Medicine.
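
    The concatenate-and-classify step can be sketched as follows (a simplified stand-in for the pipeline above; feature extraction is omitted and the hyperparameters are illustrative):

```python
# Minimal sketch of the HCF + DF concatenation evaluated with leave-one-out
# cross-validation and a random forest. Feature extraction itself is omitted;
# the forest size is an illustrative choice, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

def classify_srm(hcf, df, labels):
    X = np.hstack([hcf, df])              # (n_patients, 71 + n_deep) features
    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    acc = cross_val_score(clf, X, labels, cv=LeaveOneOut()).mean()
    return acc                            # LOO classification accuracy
```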

  16. New insights into soil temperature time series modeling: linear or nonlinear?

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter, whose prediction is a major research topic in various fields, including agriculture, because ST plays a critical role in hydrological processes at the soil surface. In this study, a new linear methodology is proposed based on stochastic methods for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables and DST modeling as a time series. In a previous study at the mentioned sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods, considering HV as input variables. The comparison results signify that the relative error in estimating DST by the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Because of these models' relatively inferior performance compared to the proposed methodology, two hybrid models were implemented: the weights and membership functions of MLP and ANFIS, respectively, were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform (Wavelet-MLP and Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting DST time series shows that it attains the lowest Akaike Information Criterion (AIC) value, which accounts for model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to the complex nonlinear methods normally employed to model DST.

  17. Model-based inference for small area estimation with sampling weights

    PubMed Central

    Vandendijck, Y.; Faes, C.; Kirby, R.S.; Lawson, A.; Hens, N.

    2017-01-01

    Obtaining reliable estimates about health outcomes for areas or domains where only few to no samples are available is the goal of small area estimation (SAE). Often, we rely on health surveys to obtain information about health outcomes. Such surveys are often characterised by a complex design, with stratification and unequal sampling weights as common features. Hierarchical Bayesian models are well recognised in SAE as a spatial smoothing method, but they often ignore the sampling weights that reflect the complex sampling design. In this paper, we focus on data obtained from a health survey where the sampling weights of the sampled individuals are the only information available about the design. We develop a predictive model-based approach to estimate the prevalence of a binary outcome for both the sampled and non-sampled individuals, using hierarchical Bayesian models that take the sampling weights into account. A simulation study is carried out to compare the performance of our proposed method with other established methods. The results indicate that our proposed method achieves great reductions in mean squared error when compared with standard approaches. It performs equally well or better than more elaborate methods when there is a relationship between the responses and the sampling weights. The proposed method is applied to estimate asthma prevalence across districts. PMID:28989860

  18. Sliding-mode control combined with improved adaptive feedforward for wafer scanner

    NASA Astrophysics Data System (ADS)

    Li, Xiaojie; Wang, Yiguang

    2018-03-01

    In this paper, a sliding-mode control method combined with improved adaptive feedforward is proposed for a wafer scanner to improve the tracking performance of the closed-loop system. In particular, in addition to the inverse model, the nonlinear force-ripple effect, which may degrade the tracking accuracy of the permanent magnet linear motor (PMLM), is considered in the proposed method. The dominant position periodicity of the force ripple is determined by Fast Fourier Transform (FFT) analysis of experimental data, and the improved feedforward control is achieved by online recursive least-squares (RLS) estimation of the inverse model and the force ripple. The improved adaptive feedforward is given in the general form of an nth-order model with a force-ripple term. The proposed method is motivated by the motion controller design for the long-stroke PMLM and the short-stroke voice coil motor of a wafer scanner. The stability of the closed-loop control system and the convergence of the motion tracking are theoretically guaranteed by the proposed sliding-mode feedback and adaptive feedforward methods. Comparative experiments on a precision linear motion platform verify the correctness and effectiveness of the proposed method. The experimental results show that, compared with the traditional method, the proposed one is faster and more robust, especially for high-speed motion trajectories, and achieves improvements in both tracking accuracy and settling time.
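
    The online RLS estimator at the heart of the adaptive feedforward can be sketched as below; the regressor layout combining inverse-model terms with one ripple harmonic is our assumption, not the paper's exact parameterization:

```python
# Sketch of the standard recursive least-squares (RLS) update used for online
# identification of feedforward parameters. The regressor suggested in the
# trailing comment is hypothetical.
import numpy as np

class RLS:
    def __init__(self, n, lam=0.999):
        self.theta = np.zeros(n)      # parameter estimate
        self.P = 1e4 * np.eye(n)      # covariance of the estimate
        self.lam = lam                # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)            # gain vector
        self.theta += k * (y - phi @ self.theta)      # prediction-error correction
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Hypothetical regressor for a 2nd-order inverse model plus one ripple
# harmonic of period L found by FFT:
# phi = [acc, vel, sin(2*pi*x/L), cos(2*pi*x/L)],  y = applied force.
```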

  19. Blind prediction of natural video quality.

    PubMed

    Saad, Michele A; Bovik, Alan C; Charrier, Christophe

    2014-03-01

    We propose a blind (no reference or NR) video quality evaluation model that is nondistortion specific. The approach relies on a spatio-temporal model of video scenes in the discrete cosine transform domain, and on a model that characterizes the type of motion occurring in the scenes, to predict video quality. We use the models to define video statistics and perceptual features that are the basis of a video quality assessment (VQA) algorithm that does not require the presence of a pristine video to compare against in order to predict a perceptual quality score. The contributions of this paper are threefold. 1) We propose a spatio-temporal natural scene statistics (NSS) model for videos. 2) We propose a motion model that quantifies motion coherency in video scenes. 3) We show that the proposed NSS and motion coherency models are appropriate for quality assessment of videos, and we utilize them to design a blind VQA algorithm that correlates highly with human judgments of quality. The proposed algorithm, called video BLIINDS, is tested on the LIVE VQA database and on the EPFL-PoliMi video database and shown to perform close to the level of top performing reduced and full reference VQA algorithms.

  20. Bayesian inference for two-part mixed-effects model using skew distributions, with application to longitudinal semicontinuous alcohol data.

    PubMed

    Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie

    2017-08-01

    Semicontinuous data featuring an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be substance abuse/dependence symptoms data, for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions, including the skew-t and skew-normal distributions (Part II). The proposed method is illustrated with alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
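
    Schematically, the two linked parts can be written as follows (notation ours; whether the intensity is modelled on the log scale is an assumption):

```latex
% Part I: occurrence of a positive response (generalized logistic mixed model)
\Pr\big(y_{ij} > 0 \mid \mathbf{b}_i\big) = g^{-1}\!\big(\mathbf{x}_{ij}^{\top}\boldsymbol{\alpha} + b_{1i}\big)
% Part II: intensity given occurrence (linear mixed model with skew errors)
\log y_{ij} \mid y_{ij} > 0 = \mathbf{z}_{ij}^{\top}\boldsymbol{\beta} + b_{2i} + \varepsilon_{ij},
\qquad \varepsilon_{ij} \sim \mathrm{ST}(0, \sigma^{2}, \delta),
\qquad (b_{1i}, b_{2i})^{\top} \sim N(\mathbf{0}, \boldsymbol{\Sigma}_b)
```

    The correlation between the two parts is carried entirely by the joint distribution of the random effects (b_{1i}, b_{2i}).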

  1. A New Efficient Hybrid Intelligent Model for Biodegradation Process of DMP with Fuzzy Wavelet Neural Networks

    NASA Astrophysics Data System (ADS)

    Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong

    2017-01-01

    A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. Combining the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT), and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values for the parameters of the proposed FWNN, a hybrid learning algorithm integrating an improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with an NN model (optimized by GA) and a kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, smaller RMSE (0.080), MSE (0.0064), and MAPE (1.8158), and a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanism model.

  2. A New Efficient Hybrid Intelligent Model for Biodegradation Process of DMP with Fuzzy Wavelet Neural Networks

    PubMed Central

    Huang, Mingzhi; Zhang, Tao; Ruan, Jujun; Chen, Xiaohong

    2017-01-01

    A new efficient hybrid intelligent approach based on a fuzzy wavelet neural network (FWNN) was proposed for effectively modeling and simulating the biodegradation process of dimethyl phthalate (DMP) in an anaerobic/anoxic/oxic (AAO) wastewater treatment process. Combining the self-learning and memory abilities of neural networks (NN), the uncertainty-handling capacity of fuzzy logic (FL), the local-detail analysis of the wavelet transform (WT), and the global search of the genetic algorithm (GA), the proposed hybrid intelligent model can extract the dynamic behavior and complex interrelationships from various water quality variables. To find the optimal values for the parameters of the proposed FWNN, a hybrid learning algorithm integrating an improved genetic optimization and a gradient descent algorithm is employed. The results show that, compared with an NN model (optimized by GA) and a kinetic model, the proposed FWNN model has quicker convergence, higher prediction performance, smaller RMSE (0.080), MSE (0.0064), and MAPE (1.8158), and a higher R2 (0.9851), which illustrates that the FWNN model simulates effluent DMP more accurately than the mechanism model. PMID:28120889

  3. Stacked Multilayer Self-Organizing Map for Background Modeling.

    PubMed

    Zhao, Zhenjie; Zhang, Xuebo; Fang, Yongchun

    2015-09-01

    In this paper, a new background modeling method called the stacked multilayer self-organizing map background model (SMSOM-BM) is proposed, which offers several merits, such as strong representative ability for complex scenarios and ease of use. In order to enhance the representative ability of the background model and make the parameters learned automatically, the recently developed idea of representation learning (or deep learning) is employed to extend the existing single-layer self-organizing map background model to a multilayer one (namely, the proposed SMSOM-BM). As a consequence, the SMSOM-BM gains several merits, including a strong representative ability to learn the background model of challenging scenarios and automatic determination of most network parameters. More specifically, every pixel is modeled by an SMSOM, and spatial consistency is considered at each layer. By introducing a novel over-layer filtering process, we can train the background model layer by layer in an efficient manner. Furthermore, for real-time performance, we have implemented the proposed method on the NVIDIA CUDA platform. Comparative experimental results show the superior performance of the proposed approach.

  4. TPSLVM: a dimensionality reduction algorithm based on thin plate splines.

    PubMed

    Jiang, Xinwei; Gao, Junbin; Wang, Tianjiang; Shi, Daming

    2014-10-01

    Dimensionality reduction (DR) has been considered one of the most significant tools for data analysis. One type of DR algorithm is based on latent variable models (LVM), and LVM-based models can handle the preimage problem easily. In this paper we propose a new LVM-based DR model, named the thin plate spline latent variable model (TPSLVM). Compared to the well-known Gaussian process latent variable model (GPLVM), the proposed TPSLVM is more powerful, especially when the dimensionality of the latent space is low. Moreover, TPSLVM is robust to shift and rotation. This paper investigates two extensions of TPSLVM, i.e., the back-constrained TPSLVM (BC-TPSLVM) and TPSLVM with dynamics (TPSLVM-DM), as well as their combination, BC-TPSLVM-DM. Experimental results show that TPSLVM and its extensions provide better data visualization and more efficient dimensionality reduction compared to PCA, GPLVM, ISOMAP, etc.

  5. Bayesian decision support for coding occupational injury data.

    PubMed

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them with the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement between the prediction results of the SW and TW models, and various prediction strength thresholds, for autocoding and for filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes in the SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders review a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data, as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data are useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
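
    A minimal sketch of the autocode-or-review logic with a single-word naive Bayes model (the threshold, k, and pipeline details are illustrative, not those tuned in the study):

```python
# Sketch of a naive Bayes assist for injury-narrative coding: autocode when
# the top prediction is confident, otherwise surface the k most probable
# codes for a human coder. Threshold and k are illustrative choices.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def topk_or_autocode(narratives, codes, new_texts, k=5, threshold=0.9):
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(narratives, codes)
    proba = model.predict_proba(new_texts)
    classes = model.classes_
    decisions = []
    for p in proba:
        order = np.argsort(p)[::-1]
        if p[order[0]] >= threshold:
            decisions.append(("autocode", classes[order[0]]))   # confident case
        else:
            decisions.append(("review", classes[order[:k]]))    # top-k for coder
    return decisions
```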

  6. An interval programming model for continuous improvement in micro-manufacturing

    NASA Astrophysics Data System (ADS)

    Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun

    2018-03-01

    Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.

  7. Total Variation with Overlapping Group Sparsity for Image Deblurring under Impulse Noise

    PubMed Central

    Liu, Gang; Huang, Ting-Zhu; Liu, Jun; Lv, Xiao-Guang

    2015-01-01

    The total variation (TV) regularization method is an effective method for image deblurring that preserves edges. However, TV-based solutions usually exhibit staircase effects. In order to alleviate the staircase effects, we propose a new model for restoring blurred images under impulse noise. The model consists of an ℓ1-fidelity term and a TV with overlapping group sparsity (OGS) regularization term. Moreover, we impose a box constraint on the proposed model to obtain more accurate solutions. The solving algorithm for our model is developed under the framework of the alternating direction method of multipliers (ADMM), with an inner loop based on majorization minimization (MM) nested inside for the subproblem of the proposed method. Compared with other TV-based methods, numerical results illustrate that the proposed method can significantly improve the restoration quality, both in terms of peak signal-to-noise ratio (PSNR) and relative error (ReE). PMID:25874860
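
    In compact form, the proposed model can be written roughly as follows (notation ours):

```latex
\min_{u \in \Omega} \; \|Ku - f\|_{1} + \lambda\, \varphi_{\mathrm{OGS}}(\nabla u),
\qquad
\varphi_{\mathrm{OGS}}(v) = \sum_{i} \Big( \sum_{j \in G_i} v_j^{2} \Big)^{1/2},
\qquad
\Omega = \{\, u : 0 \le u \le 255 \,\}
```

    Here K is the blurring operator, f the observed image, the G_i are overlapping groups of gradient entries, and Ω encodes the box constraint.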

  8. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    PubMed Central

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It therefore remains challenging to develop robust variable selection methods and enhance predictability. Material and methods Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and of other methods, including the conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, the methods were compared in an empirical analysis based on large-scale survey data on hepatitis B infection-relevant factors among Guangdong residents. Results The proposed procedures produced comparable or less biased selection results than conventional variable selection models. Overall, the two newly proposed procedures were stable across the various simulation scenarios, demonstrating higher power and a lower false-positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance, and selected a more stringent set of factors. Individual history of hepatitis B vaccination and family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions The newly proposed procedures improve the identification of significant variables and provide new insight into epidemiological association analysis. PMID:26214802
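
    A bootstrap ranking procedure with a LASSO-type penalty can be sketched as below; the resample count, penalty strength, and keep-threshold are illustrative, not the tuned values of the study:

```python
# Sketch of bootstrap ranking with an L1-penalized logistic regression: refit
# on bootstrap resamples and keep variables selected in a large fraction of
# fits. B, C, and the keep-threshold are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_lasso_rank(X, y, B=200, C=0.5, keep=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    hits = np.zeros(p)
    for _ in range(B):
        idx = rng.integers(0, n, n)       # bootstrap resample (with replacement)
        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=C)
        lasso.fit(X[idx], y[idx])
        hits += (lasso.coef_.ravel() != 0)   # tally selected variables
    freq = hits / B
    return np.flatnonzero(freq >= keep), freq   # stable set + selection frequencies
```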

  9. An Electromyographic-driven Musculoskeletal Torque Model using Neuro-Fuzzy System Identification: A Case Study

    PubMed Central

    Jafari, Zohreh; Edrisi, Mehdi; Marateb, Hamid Reza

    2014-01-01

    The purpose of this study was to estimate the torque from high-density surface electromyography signals of the biceps brachii, brachioradialis, and the medial and lateral heads of the triceps brachii muscles during moderate-to-high isometric elbow flexion-extension. The elbow torque was estimated in two steps: first, surface electromyography (EMG) amplitudes were estimated using principal component analysis, and then a fuzzy model was proposed to capture the relationship between the EMG amplitudes and the measured torque signal. A neuro-fuzzy method, with which the optimum number of rules could be estimated, was used to identify a model of suitable complexity. Unlike previous linear and nonlinear black-box system identification models, the proposed neuro-fuzzy model offers clinical interpretability. It also reduced the estimation error compared with the most recent and accurate nonlinear dynamic model introduced in the literature. The optimum number of rules for all trials was 4 ± 1, which might be related to motor control strategies, and the percentage variance accounted for was 96.40 ± 3.38, a considerable improvement over previous methods. The proposed method is thus a promising new tool for EMG-torque modeling in clinical applications. PMID:25426427

  10. Surrogate based wind farm layout optimization using manifold mapping

    NASA Astrophysics Data System (ADS)

    Kaja Kamaludeen, Shaafi M.; van Zuijle, Alexander; Bijl, Hester

    2016-09-01

    The high computational cost associated with high-fidelity wake models such as RANS or LES is the primary bottleneck for direct high-fidelity wind farm layout optimization (WFLO) using accurate CFD-based wake models. Therefore, a surrogate-based multi-fidelity WFLO methodology (SWFLO) is proposed. The surrogate model is built using an SBO method referred to as manifold mapping (MM). As verification, the optimization of the spacing between two staggered wind turbines was performed using the proposed surrogate-based methodology, and its performance was compared with that of direct optimization using the high-fidelity model. Significant reduction in computational cost was achieved using MM: a maximum reduction of 65%, while arriving at the same optimum as direct high-fidelity optimization. The similarity between the responses of the models and the number and position of the mapping points strongly influence the computational efficiency of the proposed method. As a proof of concept, a realistic WFLO of a small 7-turbine wind farm was performed using the proposed surrogate-based methodology, with two variants of the Jensen wake model with different decay coefficients used as the fine and coarse models. The proposed SWFLO method arrived at the same optimum as the fine model with very few fine-model simulations.
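
    The fine/coarse pair used here is particularly simple to emulate, since both are Jensen (Park) wake models differing only in the decay coefficient; a sketch with illustrative parameter values:

```python
# Sketch of the classical Jensen (Park) wake model; the two decay
# coefficients stand in for the "fine" and "coarse" variants. All numerical
# values are illustrative, not from the paper.
import numpy as np

def jensen_speed(x, U=8.0, r0=40.0, Ct=0.8, k=0.05):
    """Wind speed (m/s) at distance x (m) directly downstream of a turbine
    of rotor radius r0, thrust coefficient Ct, in free-stream wind U."""
    a = (1.0 - np.sqrt(1.0 - Ct)) / 2.0          # axial induction factor
    return U * (1.0 - 2.0 * a / (1.0 + k * x / r0) ** 2)

u_fine = jensen_speed(500.0, k=0.04)    # "fine" variant: slower wake decay
u_coarse = jensen_speed(500.0, k=0.08)  # "coarse" variant: faster wake decay
```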

  11. An Investigation of Possible Hierarchical Dependency of Four Piaget-Type Tasks under Two Methods of Presentation to Third-, Fifth-, and Seventh-Grade Children.

    ERIC Educational Resources Information Center

    Phillips, Darrell Gordon

    The purpose of this study was to investigate a proposed model for the acquisition of the concept of displacement volume and to compare two methods of conservation task presentation. A 12-stage hierarchical model for the acquisition of the concept was proposed, based on four primary assumptions: (1) concept attainment can be measured by…

  12. Assessing Goodness of Fit in Item Response Theory with Nonparametric Models: A Comparison of Posterior Probabilities and Kernel-Smoothing Approaches

    ERIC Educational Resources Information Center

    Sueiro, Manuel J.; Abad, Francisco J.

    2011-01-01

    The distance between nonparametric and parametric item characteristic curves has been proposed as an index of goodness of fit in item response theory in the form of a root integrated squared error index. This article proposes to use the posterior distribution of the latent trait as the nonparametric model and compares the performance of an index…

  13. Problems With Risk Reclassification Methods for Evaluating Prediction Models

    PubMed Central

    Pepe, Margaret S.

    2011-01-01

    For comparing the performance of a baseline risk prediction model with one that includes an additional predictor, a risk reclassification analysis strategy has been proposed. The first step is to cross-classify risks calculated according to the 2 models for all study subjects. Summary measures including the percentage of reclassification and the percentage of correct reclassification are calculated, along with 2 reclassification calibration statistics. The author shows that interpretations of the proposed summary measures and P values are problematic. The author's recommendation is to display the reclassification table, because it shows interesting information, but to use alternative methods for summarizing and comparing model performance. The Net Reclassification Index has been suggested as one alternative method. The author argues for reporting components of the Net Reclassification Index because they are more clinically relevant than is the single numerical summary measure. PMID:21555714

  14. Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation

    PubMed Central

    Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.

    2013-01-01

    Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
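
    For contrast with the proposed explicit approximation, the implicit single-diode equation and the kind of iteration it normally requires can be sketched as follows (parameter values are illustrative):

```python
# Sketch of the implicit single-diode equation
#   I = Iph - I0*(exp((V + I*Rs)/(a*Vt)) - 1) - (V + I*Rs)/Rsh
# solved by fixed-point iteration -- the iterative cost the approximate model
# is designed to avoid. For these mild illustrative parameters the iteration
# converges; production codes use Newton steps or Lambert-W reformulations.
import numpy as np

def single_diode_current(V, Iph=8.0, I0=1e-9, Rs=0.2, Rsh=300.0, a_Vt=2.0):
    I = np.full_like(np.asarray(V, dtype=float), Iph)  # initial guess
    for _ in range(50):                                # fixed-point iteration
        I = Iph - I0 * np.expm1((V + I * Rs) / a_Vt) - (V + I * Rs) / Rsh
    return I

V = np.linspace(0.0, 40.0, 200)   # illustrative module-level voltage sweep (V)
I = single_diode_current(V)       # corresponding I-V characteristic
```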

  15. Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models

    PubMed Central

    Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua

    2017-01-01

    In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy and segment the organs slice by slice via the multi-object OAAM method. For the segmentation part, a 3D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also on the MICCAI 2007 liver segmentation grand challenge training dataset. The results show the following: (a) an overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% and false positive volume fraction (FPVF) < 0.2% can be achieved; (b) the initialization performance can be improved by combining the AAM and LW; (c) the multi-object strategy greatly facilitates initialization; (d) compared to the traditional 3D AAM method, the pseudo-3D OAAM method achieves comparable performance while running 12 times faster; and (e) the performance of the proposed method is comparable to that of the state-of-the-art liver segmentation algorithm. The executable version of the 3D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/. PMID:22311862

  16. Medical image segmentation by combining graph cuts and oriented active appearance models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua

    2012-04-01

    In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) an overall segmentation accuracy of true positive volume fraction TPVF > 94.3% and false positive volume fraction FPVF < 0.2% can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to the state-of-the-art liver segmentation algorithm. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.

  17. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine.

    PubMed

    Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust.
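
    The SSA denoising step can be sketched as below: embed the series in a Hankel trajectory matrix, keep the leading singular triples, and reconstruct by diagonal averaging (window length and rank are illustrative):

```python
# Sketch of basic singular spectrum analysis (SSA) used as a noise filter.
# Window length L and retained rank r are illustrative choices.
import numpy as np

def ssa_filter(x, L=24, r=4):
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
    # Diagonal (Hankel) averaging back to a series of length N
    out = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        out[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return out / counts
```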

  18. A Hybrid Short-Term Traffic Flow Prediction Model Based on Singular Spectrum Analysis and Kernel Extreme Learning Machine

    PubMed Central

    Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang

    2016-01-01

    Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of its uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and the kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data are used to train the KELM model; the optimal input form of the proposed model is determined by phase space reconstruction, and the parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using measured data from an expressway in Xiamen, China. The SSA-KELM model is compared with several well-known prediction models, including the support vector machine, the extreme learning machine, and a single KELM model. The experimental results demonstrate that the performance of the proposed model is superior to that of the comparison models. Apart from the accuracy improvement, the proposed model is also more robust. PMID:27551829

  19. Quantifying and comparing dynamic predictive accuracy of joint models for longitudinal marker and time-to-event in presence of censoring and competing risks.

    PubMed

    Blanche, Paul; Proust-Lima, Cécile; Loubère, Lucie; Berr, Claudine; Dartigues, Jean-François; Jacqmin-Gadda, Hélène

    2015-03-01

    Thanks to the growing interest in personalized medicine, joint modeling of longitudinal marker and time-to-event data has recently started to be used to derive dynamic individual risk predictions. Individual predictions are called dynamic because they are updated when information on the subject's health profile grows with time. We focus in this work on statistical methods for quantifying and comparing dynamic predictive accuracy of this kind of prognostic models, accounting for right censoring and possibly competing events. Dynamic area under the ROC curve (AUC) and Brier Score (BS) are used to quantify predictive accuracy. Nonparametric inverse probability of censoring weighting is used to estimate dynamic curves of AUC and BS as functions of the time at which predictions are made. Asymptotic results are established and both pointwise confidence intervals and simultaneous confidence bands are derived. Tests are also proposed to compare the dynamic prediction accuracy curves of two prognostic models. The finite sample behavior of the inference procedures is assessed via simulations. We apply the proposed methodology to compare various prediction models using repeated measures of two psychometric tests to predict dementia in the elderly, accounting for the competing risk of death. Models are estimated on the French Paquid cohort and predictive accuracies are evaluated and compared on the French Three-City cohort. © 2014, The International Biometric Society.
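
    Schematically, with s the landmark time at which a prediction is made and t the horizon, the two accuracy measures are (notation ours; censoring enters through inverse-probability-of-censoring weights in the estimators):

```latex
\mathrm{AUC}(s,t) = \Pr\big( \pi_i(s,t) > \pi_j(s,t) \,\big|\, D_i(s,t) = 1,\; D_j(s,t) = 0 \big),
\qquad
\mathrm{BS}(s,t) = \mathbb{E}\Big[ \big( D(s,t) - \pi(s,t) \big)^{2} \,\Big|\, T > s \Big]
```

    Here π_i(s,t) is the predicted probability, updated with the marker history up to s, of experiencing the event of interest within (s, s+t], and D_i(s,t) is the corresponding observed event status among subjects still at risk at s.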

  20. A Numerical Study of New Logistic Map

    NASA Astrophysics Data System (ADS)

    Khmou, Youssef

    In this paper, we propose a new logistic map based on an information-entropy relation and study its bifurcation diagram in comparison with that of the standard logistic map. In the first part, we compare the diagram obtained by numerical simulations with that of the standard logistic map. It is found that the structures of both diagrams are similar when the range of the growth parameter is restricted to the interval [0,e]. In the second part, we present an application of the proposed map to traffic flow using a macroscopic model. It is found that the bifurcation diagram is an exact model of Greenberg's model of traffic flow, where the growth parameter corresponds to the optimal velocity and the random sequence corresponds to the density. In the last part, we present a second possible application of the proposed map, which consists of random number generation. The results of the analysis show that the excluded initial values of the sequences are (0,1).
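
    Since the abstract does not reproduce the map itself, the sketch below assumes an entropy-motivated form x_{n+1} = -r·x_n·ln(x_n), consistent with the stated growth-parameter range [0,e]; swap in the actual map as needed:

```python
# Bifurcation-diagram sketch for an assumed entropy-based logistic map
# x -> -r*x*ln(x). The assumed map keeps x in (0,1) for r in (0, e], which
# matches the parameter range quoted in the abstract.
import numpy as np
import matplotlib.pyplot as plt

r_vals = np.linspace(0.01, np.e, 800)
for r in r_vals:
    x = 0.3
    for _ in range(300):                 # discard the transient
        x = -r * x * np.log(x)
    attractor = []
    for _ in range(100):                 # collect points on the attractor
        x = -r * x * np.log(x)
        attractor.append(x)
    plt.plot([r] * len(attractor), attractor, ",k", alpha=0.3)
plt.xlabel("growth parameter r")
plt.ylabel("x")
plt.show()
```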

  1. Modified optimal control pilot model for computer-aided design and analysis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1992-01-01

    This paper presents the theoretical development of a modified optimal control pilot model based upon the optimal control model (OCM) of the human operator developed by Kleinman, Baron, and Levison. This model is input-compatible with the OCM and retains other key aspects of the OCM, such as a linear quadratic solution for the pilot gains with inclusion of control rate in the cost function, a Kalman estimator, and the ability to account for attention allocation and perception threshold effects. An algorithm designed for easy implementation in current dynamic systems analysis and design software is presented. Example results based upon the analysis of a tracking task using three basic dynamic systems are compared with measured results and with similar analyses performed with the OCM and two previously proposed simplified optimal pilot models. The pilot frequency responses and error statistics obtained with this modified optimal control model are shown to compare more favorably with the measured experimental results than those of the other previously proposed simplified models.

  2. Distributed support modelling for vertical track dynamic analysis

    NASA Astrophysics Data System (ADS)

    Blanco, B.; Alonso, A.; Kari, L.; Gil-Negrete, N.; Giménez, J. G.

    2018-04-01

    The finite length nature of rail-pad supports is characterised by a Timoshenko beam element formulation over an elastic foundation, giving rise to the distributed support element. The new element is integrated into a vertical track model, which is solved in the frequency and time domains. The developed formulation is obtained by solving the governing equations of a Timoshenko beam for this particular case. The interaction between sleeper and rail via the elastic connection is considered in an analytical, compact and efficient way. The modelling technique results in realistic amplitudes of the 'pinned-pinned' vibration mode and, additionally, it leads to a smooth evolution of the contact force temporal response and to reduced amplitudes of the rail vertical oscillation, as compared to the results from concentrated support models. Simulations are performed for both parametric and sinusoidal roughness excitation. The support model proposed here is compared with a previous finite length model developed by other authors, leading to the conclusion that the proposed model gives accurate results at a reduced computational cost.
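
    In standard textbook form (not necessarily the authors' exact notation), a Timoshenko beam on an elastic foundation of stiffness k_f per unit length obeys:

```latex
\kappa G A \Big( \frac{\partial^{2} w}{\partial x^{2}} - \frac{\partial \psi}{\partial x} \Big) - k_f\, w + q(x,t) = \rho A\, \frac{\partial^{2} w}{\partial t^{2}},
\qquad
E I\, \frac{\partial^{2} \psi}{\partial x^{2}} + \kappa G A \Big( \frac{\partial w}{\partial x} - \psi \Big) = \rho I\, \frac{\partial^{2} \psi}{\partial t^{2}}
```

    Here w is the vertical deflection, ψ the cross-section rotation, κ the shear correction factor, and q(x,t) the distributed load transmitted through the rail-pad connection.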

  3. Positive mood as a mediator of the relations among musical preference, postconsumption product evaluation, and consumer satisfaction.

    PubMed

    Teng, Ching-I; Tseng, Hsu-Min; Wu, Heng-Hui

    2007-06-01

    This study of how positive mood mediates the influences of musical preference and postconsumption product evaluation on consumer satisfaction focuses specifically on a model in which positive mood fully mediates those influences. The proposed model is compared with two competing models, and a structural equation model is used to test and compare the three theory-driven models. This study sampled 247 students majoring in management at a single university, with a mean age of 23 yr. (SD=2.5). Questionnaires were used to measure subjects' evaluations of a cup of coffee, preference for the music broadcast in the coffee shop, positive mood, and satisfaction after they had the coffee. Analysis using chi-square difference tests indicated that the proposed model outperformed the two competing models in describing the data. Positive mood was identified as a full mediator of the relationship between musical preference and consumer satisfaction. Moreover, the results demonstrate for service managers the importance of creating positive consumer mood.

  4. Comparing the Performance of Two Dynamic Load Distribution Methods

    NASA Technical Reports Server (NTRS)

    Kale, L. V.

    1987-01-01

    Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance, so static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.

  5. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    PubMed Central

    Alguliyev, Rasim M.; Aliguliyev, Ramiz M.; Mahmudova, Rasmiyya S.

    2015-01-01

    Personnel evaluation is an important process in human resource management. Its multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method. Firstly, from a computational complexity point of view, the presented model is effective. Secondly, it yields a higher acceptable advantage than the fuzzy VIKOR method. PMID:26516634
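
    The crisp core of the VIKOR ranking that the fuzzy and modified variants build on can be sketched as follows (benefit criteria and varying criterion scores assumed throughout):

```python
# Sketch of crisp VIKOR: compute group utility S, individual regret R, and
# the compromise index Q, then rank alternatives by Q (lower is better).
import numpy as np

def vikor(F, w, v=0.5):
    """F: (m alternatives, n criteria) score matrix; w: criterion weights;
    v: weight of the 'majority rule' strategy."""
    f_best, f_worst = F.max(axis=0), F.min(axis=0)
    D = w * (f_best - F) / (f_best - f_worst)   # weighted normalized regrets
    S = D.sum(axis=1)                           # group utility
    R = D.max(axis=1)                           # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) \
        + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q), S, R, Q               # ranking plus raw quantities
```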

  6. Comparing biomarkers as principal surrogate endpoints.

    PubMed

    Huang, Ying; Gilbert, Peter B

    2011-12-01

    Recently a new definition of surrogate endpoint, the "principal surrogate," was proposed based on causal associations between treatment effects on the biomarker and on the clinical endpoint. Despite its appealing interpretation, limited research has been conducted to evaluate principal surrogates, and existing methods focus on risk models that consider a single biomarker. How to compare principal surrogate value of biomarkers or general risk models that consider multiple biomarkers remains an open research question. We propose to characterize a marker or risk model's principal surrogate value based on the distribution of risk difference between interventions. In addition, we propose a novel summary measure (the standardized total gain) that can be used to compare markers and to assess the incremental value of a new marker. We develop a semiparametric estimated-likelihood method to estimate the joint surrogate value of multiple biomarkers. This method accommodates two-phase sampling of biomarkers and is more widely applicable than existing nonparametric methods by incorporating continuous baseline covariates to predict the biomarker(s), and is more robust than existing parametric methods by leaving the error distribution of markers unspecified. The methodology is illustrated using a simulated example set and a real data set in the context of HIV vaccine trials. © 2011, The International Biometric Society.

  7. Shape memory alloy smart knee spacer to enhance knee functionality: model design and finite element analysis.

    PubMed

    Gautam, Arvind; Rani, A Bhargavi; Callejas, Miguel A; Acharyya, Swati Ghosh; Acharyya, Amit; Biswas, Dwaipayan; Bhandari, Vasundhra; Sharma, Paresh; Naik, Ganesh R

    2016-08-01

    In this paper we introduce a Shape Memory Alloy (SMA) for designing the tibial part of Total Knee Arthroplasty (TKA) by exploiting the shape-memory and pseudo-elasticity properties of the SMA (e.g. NiTi). This would eliminate the drawbacks of the state-of-the-art PMMA-based knee spacer, including fracture, sustainability, dislocation, tilting, translation, and subluxation, for tackling osteoarthritis, especially in people aged 45-plus and athletes. A Computer Aided Design (CAD) model of the knee spacer based on the proposed SMA is presented using SolidWorks, adopting the industry-standard geometry used in PMMA-based spacer design. Subsequently, an Ansys-based finite element analysis is carried out to measure and compare the performance of the proposed SMA-based model with that of the state-of-the-art PMMA ones. 81% more bending is observed in the PMMA-based spacer than in the proposed SMA one, which would eventually cause fracture and tilting or translation of the spacer. A permanent shape deformation of approximately 58.75% is observed in the PMMA-based spacer, compared to a recoverable 11% deformation in the SMA, when the same load is applied to each separately.

  8. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian T.; Scherzinger, William M.

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in the implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method, and comparable speed and robustness to a line-search augmented scheme.
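
    As a stand-in for the tailored algorithm (which is not reproduced here), a trust-region solve of a schematic return-mapping residual can be sketched with SciPy's trust-region-reflective solver; the von Mises-style radial return with linear hardening below is schematic, substituting for the non-quadratic Hosford surface:

```python
# Sketch: solve a two-equation return-mapping residual with a trust-region
# method (SciPy's 'trf'), standing in for the paper's tailored algorithm.
# Material data and the residual form are schematic/illustrative.
import numpy as np
from scipy.optimize import least_squares

def residual(z, s_trial, sigma_y0, mu, H):
    """z = [dgamma, s_eq]: plastic multiplier increment, equivalent stress."""
    dgamma, s_eq = z
    r1 = s_eq - (s_trial - 3.0 * mu * dgamma)   # elastic predictor / plastic corrector
    r2 = s_eq - (sigma_y0 + H * dgamma)         # yield consistency with hardening
    return np.array([r1, r2])

s_trial, sigma_y0, mu, H = 400.0, 250.0, 80.0e3, 1.0e3   # illustrative data (MPa)
sol = least_squares(residual, x0=[0.0, s_trial], method="trf",
                    args=(s_trial, sigma_y0, mu, H))
dgamma, s_eq = sol.x   # converged plastic increment and stress
```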

  9. A label field fusion Bayesian model and its penalized maximum Rand estimator for image segmentation.

    PubMed

    Mignotte, Max

    2010-06-01

    This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to derive an appealing fusion model of label fields, expressed simply as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for the definition of an interesting penalized maximum probabilistic Rand estimator with which the fusion of simple, quickly estimated segmentation results appears as an interesting alternative to the complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
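
    The probabilistic Rand agreement that the fusion energy builds on can be sketched directly from its pairwise definition (O(N²) form, suitable only for small images; illustrative only):

```python
# Sketch of the probabilistic Rand index (PRI) between a test label field and
# several reference segmentations: for each pixel pair, reward agreement with
# the fraction of references assigning the pair the same label.
import numpy as np
from itertools import combinations

def probabilistic_rand(test, refs):
    t = test.ravel()
    R = [r.ravel() for r in refs]
    pairs = list(combinations(range(t.size), 2))
    score = 0.0
    for i, j in pairs:
        p_ij = np.mean([r[i] == r[j] for r in R])   # P(same label | references)
        c_ij = float(t[i] == t[j])                  # same label in the test field?
        score += c_ij * p_ij + (1 - c_ij) * (1 - p_ij)
    return score / len(pairs)
```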

  10. Quantum Computation Using Optically Coupled Quantum Dot Arrays

    NASA Technical Reports Server (NTRS)

    Pradhan, Prabhakar; Anantram, M. P.; Wang, K. L.; Roychowhury, V. P.; Saini, Subhash (Technical Monitor)

    1998-01-01

    A solid state model for quantum computation has potential advantages in terms of the ease of fabrication, characterization, and integration. The fundamental requirements for a quantum computer involve the realization of basic processing units (qubits), and a scheme for controlled switching and coupling among the qubits, which enables one to perform controlled operations on qubits. We propose a model for quantum computation based on optically coupled quantum dot arrays, which is computationally similar to the atomic model proposed by Cirac and Zoller. In this model, individual qubits are comprised of two coupled quantum dots, and an array of these basic units is placed in an optical cavity. Switching among the states of the individual units is done by controlled laser pulses via near field interaction using the NSOM technology. Controlled rotations involving two or more qubits are performed via common cavity mode photon. We have calculated critical times, including the spontaneous emission and switching times, and show that they are comparable to the best times projected for other proposed models of quantum computation. We have also shown the feasibility of accessing individual quantum dots using the NSOM technology by calculating the photon density at the tip, and estimating the power necessary to perform the basic controlled operations. We are currently in the process of estimating the decoherence times for this system; however, we have formulated initial arguments which seem to indicate that the decoherence times will be comparable, if not longer, than many other proposed models.

  11. EVALUATION OF ACID DEPOSITION MODELS USING PRINCIPAL COMPONENT SPACES

    EPA Science Inventory

    An analytical technique involving principal components analysis is proposed for use in the evaluation of acid deposition models. Relationships among model predictions are compared to those among measured data, rather than the more common one-to-one comparison of predictions to mea...

  12. Proposals for enhanced health risk assessment and stratification in an integrated care scenario

    PubMed Central

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-01-01

    Objectives Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care by facilitating services design and case identification. The principal objective of the study was to analyse five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. Settings The five ACT regions: Scotland (UK), Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL). Participants Responsible teams for regional data management in the five ACT regions. Primary and secondary outcome measures We characterised and compared risk assessment strategies among ACT regions by analysing operational health risk predictive modelling tools for population-based stratification, as well as available health indicators at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. Results There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place only in two ACT regions: Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome current limitations and the use of population-based health risk prediction for enhanced clinical risk assessment. Conclusions The results indicate the need for further efforts to improve both comparability and flexibility of current population-based health risk predictive modelling approaches. Applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. PMID:27084274

  13. A simple method for assessing occupational exposure via the one-way random effects model.

    PubMed

    Krishnamoorthy, K; Mathew, Thomas; Peng, Jie

    2016-11-01

    A one-way random effects model is postulated for the log-transformed shift-long personal exposure measurements, where the random effect in the model represents an effect due to the worker. Simple closed-form confidence intervals are proposed for the relevant parameters of interest using the method of variance estimates recovery (MOVER). The performance of the confidence bounds is evaluated and compared with those based on the generalized confidence interval approach. Comparison studies indicate that the proposed MOVER confidence bounds are better than the generalized confidence bounds for the overall mean exposure and an upper percentile of the exposure distribution. The proposed methods are illustrated using a few examples involving industrial hygiene data.
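
    The core MOVER recombination step has a simple closed form. The sketch below is a generic illustration of that step for a sum of two parameters; it is not the paper's exposure-specific derivation, and the function and argument names are ours.

    ```python
    import math

    def mover_sum(est1, l1, u1, est2, l2, u2):
        """MOVER confidence limits for theta1 + theta2, given point
        estimates and individual (l, u) confidence limits for each."""
        point = est1 + est2
        lower = point - math.sqrt((est1 - l1) ** 2 + (est2 - l2) ** 2)
        upper = point + math.sqrt((u1 - est1) ** 2 + (u2 - est2) ** 2)
        return lower, upper
    ```

    For a log-normal exposure model, one would combine the limits for the mean of the log exposures with those for half the total variance in this way, then exponentiate the resulting limits.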

  14. Parent Education within a Relationship-Focused Model.

    ERIC Educational Resources Information Center

    Kelly, Jean F.; Barnard, Kathryn E.

    1999-01-01

    This response to Mahoney et al. (EC 623 392) agrees that parent education should be an important component of early intervention programs and proposes that parent education be included in a relationship-focused early-intervention model. This model is illustrated, explained, and compared with the previous child-focused model and the current…

  15. Confidence Intervals for a Semiparametric Approach to Modeling Nonlinear Relations among Latent Variables

    ERIC Educational Resources Information Center

    Pek, Jolynn; Losardo, Diane; Bauer, Daniel J.

    2011-01-01

    Compared to parametric models, nonparametric and semiparametric approaches to modeling nonlinearity between latent variables have the advantage of recovering global relationships of unknown functional form. Bauer (2005) proposed an indirect application of finite mixtures of structural equation models where latent components are estimated in the…

  16. A Model of Comparative Ethics Education for Social Workers

    ERIC Educational Resources Information Center

    Pugh, Greg L.

    2017-01-01

    Social work ethics education models have not effectively engaged social workers in practice in formal ethical reasoning processes, potentially allowing personal bias to affect ethical decisions. Using two of the primary ethical models from medicine, a new social work ethics model for education and practical application is proposed. The strengths…

  17. A stepwise model to predict monthly streamflow

    NASA Astrophysics Data System (ADS)

    Mahmood Al-Juboori, Anas; Guven, Aytac

    2016-12-01

    In this study, a stepwise model empowered with genetic programming is developed to predict the monthly flows of the Hurman River in Turkey and the Diyalah and Lesser Zab Rivers in Iraq. The model divides the monthly flow data into twelve intervals representing the months of the year. The flow of a month t is considered a function of the antecedent month's flow (t - 1) and is predicted by multiplying the antecedent monthly flow by a constant value called K. The optimum value of K is obtained by a stepwise procedure which employs Gene Expression Programming (GEP) and Nonlinear Generalized Reduced Gradient Optimization (NGRGO) as alternatives to the traditional nonlinear regression technique. The coefficient of determination and the root mean squared error are used to evaluate the performance of the proposed models. The results of the proposed model are compared with those of conventional Markovian and Auto Regressive Integrated Moving Average (ARIMA) models based on observed monthly flow data. The comparison, based on five different statistical measures, shows that the proposed stepwise model performed better than the Markovian and ARIMA models. The R2 values of the proposed model range between 0.81 and 0.92 for the three rivers in this study.
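
    To make the multiplicative structure concrete, the sketch below fits one K per calendar month by ordinary least squares; this is a simplified stand-in for the paper's GEP/NGRGO optimization, and the function name and data layout are assumptions.

    ```python
    import numpy as np

    def fit_monthly_k(flows, months):
        """Fit one constant K per calendar month so that Q_t ~ K * Q_(t-1),
        using the closed-form least-squares solution for each month."""
        flows = np.asarray(flows, dtype=float)
        months = np.asarray(months)          # calendar month (1..12) of each flow
        k = {}
        for m in range(1, 13):
            idx = np.where(months[1:] == m)[0] + 1   # positions of month-m flows
            q_prev, q_curr = flows[idx - 1], flows[idx]
            k[m] = np.sum(q_curr * q_prev) / np.sum(q_prev ** 2)
        return k
    ```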

  18. Congestion Pricing for Aircraft Pushback Slot Allocation.

    PubMed

    Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei

    2017-01-01

    In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study of the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established, and an improved discrete differential evolution algorithm is designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited to actual aircraft pushback management during rush hour. Further, it is observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.

  19. Congestion Pricing for Aircraft Pushback Slot Allocation

    PubMed Central

    Zhang, Yaping

    2017-01-01

    In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the “external cost of surface congestion” is proposed, and a quantitative study of the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established, and an improved discrete differential evolution algorithm is designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited to actual aircraft pushback management during rush hour. Further, it is observed that they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm. PMID:28114429

  20. A Space Weather Forecasting System with Multiple Satellites Based on a Self-Recognizing Network

    PubMed Central

    Tokumitsu, Masahiro; Ishida, Yoshiteru

    2014-01-01

    This paper proposes a space weather forecasting system at geostationary orbit for high-energy electron flux (>2 MeV). The forecasting model involves multiple sensors on multiple satellites. The sensors interconnect and evaluate each other to predict future conditions at geostationary orbit. The proposed forecasting model is constructed using a dynamic relational network for sensor diagnosis and event monitoring. The sensors of the proposed model are located at different positions in space. The solar-monitoring satellites are equipped with devices for monitoring the interplanetary magnetic field and solar wind speed, while the satellites orbiting near the Earth monitor high-energy electron flux. We investigate forecasting for two typical examples by comparing the performance of two models with different numbers of sensors, and demonstrate the predictions of the proposed model against coronal mass ejections and a coronal hole. This paper aims to investigate the possibility of space weather forecasting based on a satellite network with in-situ sensing. PMID:24803190

  1. A space weather forecasting system with multiple satellites based on a self-recognizing network.

    PubMed

    Tokumitsu, Masahiro; Ishida, Yoshiteru

    2014-05-05

    This paper proposes a space weather forecasting system at geostationary orbit for high-energy electron flux (>2 MeV). The forecasting model involves multiple sensors on multiple satellites. The sensors interconnect and evaluate each other to predict future conditions at geostationary orbit. The proposed forecasting model is constructed using a dynamic relational network for sensor diagnosis and event monitoring. The sensors of the proposed model are located at different positions in space. The solar-monitoring satellites are equipped with devices for monitoring the interplanetary magnetic field and solar wind speed, while the satellites orbiting near the Earth monitor high-energy electron flux. We investigate forecasting for two typical examples by comparing the performance of two models with different numbers of sensors, and demonstrate the predictions of the proposed model against coronal mass ejections and a coronal hole. This paper aims to investigate the possibility of space weather forecasting based on a satellite network with in-situ sensing.

  2. Method to determine the optimal constitutive model from spherical indentation tests

    NASA Astrophysics Data System (ADS)

    Zhang, Tairui; Wang, Shang; Wang, Weiqiang

    2018-03-01

    The limitations of current indentation theories were investigated and a method to determine the optimal constitutive model through spherical indentation tests was proposed. Two constitutive models, the Power-law and the Linear-law, were used in Finite Element (FE) calculations, and a set of indentation governing equations was established for each model. The load-depth data from the normal indentation depth were used to fit the best parameters in each constitutive model, while the data from the further loading part were compared with those from the FE calculations, and the model that better predicted the further deformation was considered the optimal one. Moreover, a Young's modulus calculation model, which took the previous plastic deformation and the phenomenon of pile-up (or sink-in) into consideration, was also proposed to revise the original Sneddon-Pharr-Oliver model. The indentation results on six materials, 304, 321, SA508, SA533, 15CrMoR, and Fv520B, were compared with tensile ones, which validated the reliability of the revised E calculation model and the optimal constitutive model determination method in this study.

  3. Mosaic anisotropy model for magnetic interactions in mesostructured crystals

    NASA Astrophysics Data System (ADS)

    Goldman, Abby R.; Asenath-Smith, Emily; Estroff, Lara A.

    2017-10-01

    We propose a new model, called the mosaic anisotropy (MA) model, for interpreting the magnetic interactions in crystals with mosaic texture. We test the MA model using hematite as a model system, comparing mosaic crystals to polycrystals, single crystal nanoparticles, and bulk single crystals. Vibrating sample magnetometry confirms the hypothesis of the MA model that mosaic crystals have larger remanence (Mr/Ms) and coercivity (Hc) compared to polycrystalline or bulk single crystals. By exploring the magnetic properties of mesostructured crystalline materials, we may be able to develop new routes to engineering harder magnets.

  4. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly Singapore tourist arrival time series. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first, we compare kernel functions; in the second, we compare the developed model with the single SVM model. The results showed that the linear kernel function performed better than the RBF kernel, and that the WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
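
    A minimal sketch of the wavelet-plus-SVM combination, assuming a db4 discrete wavelet decomposition whose finest detail coefficients are discarded before lagged features are fed to a support vector regressor; the wavelet choice, lag count, and hyperparameters are illustrative, not the paper's.

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVR

    def wsvm_forecast(series, lags=3, wavelet="db4", level=2):
        """Denoise a monthly series with a discrete wavelet transform
        (zeroing the finest detail coefficients), then train an SVR on
        lagged values of the smoothed series to predict the next month."""
        series = np.asarray(series, dtype=float)
        coeffs = pywt.wavedec(series, wavelet, level=level)
        coeffs[-1] = np.zeros_like(coeffs[-1])            # drop finest details
        smooth = pywt.waverec(coeffs, wavelet)[: len(series)]
        X = np.column_stack([smooth[i: len(smooth) - lags + i] for i in range(lags)])
        y = series[lags:]
        model = SVR(kernel="linear", C=10.0).fit(X, y)
        return model, smooth
    ```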

  5. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    PubMed

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
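
    A sketch of the supervector construction under standard relevance-MAP mean adaptation; the component count, relevance factor, and helper names are assumptions, and the paper's EM-based data purification step is not shown.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_supervector(ubm, X, relevance=16.0):
        """Adapt the means of a universal background model (UBM) to one
        trial's feature matrix X (frames x dims) and stack them into a
        single supervector (relevance-MAP adaptation of means only)."""
        gamma = ubm.predict_proba(X)                  # (frames, components)
        n_k = gamma.sum(axis=0)                       # soft counts per component
        e_k = (gamma.T @ X) / np.maximum(n_k, 1e-8)[:, None]
        alpha = (n_k / (n_k + relevance))[:, None]    # adaptation weights
        adapted = alpha * e_k + (1.0 - alpha) * ubm.means_
        return adapted.ravel()

    # Typical use: fit the UBM on pooled training frames, then build one
    # supervector per trial as input to an SVM classifier, e.g.:
    # ubm = GaussianMixture(n_components=8, covariance_type="diag").fit(frames)
    ```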

  6. Planning Through Incrementalism

    ERIC Educational Resources Information Center

    Lasserre, Ph.

    1974-01-01

    An incremental model of decisionmaking is discussed and compared with the Comprehensive Rational Approach. A model of reconciliation between the two approaches is proposed, and examples are given in the field of economic development and educational planning. (Author/DN)

  7. Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models

    NASA Astrophysics Data System (ADS)

    Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria

    2017-08-01

    In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turboshaft engine driving a variable pitch propeller includes various operating points. Variations in the fuel flow and propeller pitch inputs produce different operating conditions which force the controller to adapt rapidly. Three important operating points (idle, cruise, and full thrust) cover the entire flight envelope. A multi-input multi-output (MIMO) version of second-level adaptation using multiple models is developed, and a stability analysis using the Lyapunov method is presented. The proposed method is compared with two conventional techniques: first-level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turboshaft engine demonstrate the performance and fidelity of the proposed method.

  8. Danish Passage Graves, "Spring/Summer/Fall full Moons" and Lunar Standstills

    NASA Astrophysics Data System (ADS)

    Clausen, Claus Jørgen

    2015-05-01

    The author proposes and discusses a model for azimuth distribution which involves the criterion of a 'spring full moon' (or a 'fall full moon') proposed by Marciano Da Silva (Da Silva 2004). The model is based on elements of the rising pattern of the summer full moon combined with directions pointing towards full moonrises which occur immediately prior to lunar standstill eclipses and directions aimed at the points at which these eclipses begin. An observed sample of 153 directions has been compared with the proposed model, which has been named the lunar 'season pointer'. Statistical tests show that the model fits well with the observed sample within the azimuth interval of 54.5° to 156.5°. The conclusion made is that at least the 'season pointer' section of the model used could very well explain the observed distribution.

  9. ECG fiducial point extraction using switching Kalman filter.

    PubMed

    Akhbari, Mahsa; Ghahjaverestan, Nasim Montazeri; Shamsollahi, Mohammad B; Jutten, Christian

    2018-04-01

    In this paper, we propose a novel method for extracting fiducial points (FPs) of the beats in electrocardiogram (ECG) signals using a switching Kalman filter (SKF). In this method, according to McSharry's model, the ECG waveforms (P-wave, QRS complex and T-wave) are modeled with Gaussian functions and the ECG baselines are modeled with first-order autoregressive models. In the proposed method, a discrete state variable called the "switch" is considered that affects only the observation equations. We denote a mode as a specific observation equation; the switch changes among 7 modes corresponding to different segments of an ECG beat. At each time instant, the probability of each mode is calculated and compared between two consecutive modes, and a path is estimated which relates each part of the ECG signal to the mode with the maximum probability. The ECG FPs are found from the estimated path. For performance evaluation, the Physionet QT database is used and the proposed method is compared with methods based on the wavelet transform, the partially collapsed Gibbs sampler (PCGS) and the extended Kalman filter. For our proposed method, the mean error and the root mean square error across all FPs are 2 ms (i.e. less than one sample) and 14 ms, respectively. These errors are significantly smaller than those obtained using the other methods; the proposed method achieves a lower RMSE and smaller variability than the others. Copyright © 2018 Elsevier B.V. All rights reserved.
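
    The Gaussian waveform modeling can be illustrated directly. The sketch below synthesizes one beat as a sum of Gaussian functions of the cardiac phase, in the spirit of McSharry-style models; all wave parameters are illustrative, and the SKF machinery built on top of such observation models is not shown.

    ```python
    import numpy as np

    def ecg_beat(theta, centers, widths, amps):
        """Synthesize one ECG beat as a sum of Gaussian functions over
        phase theta in [-pi, pi]. centers/widths/amps hold one entry per
        wave (P, Q, R, S, T)."""
        z = np.zeros_like(theta)
        for c, b, a in zip(centers, widths, amps):
            z += a * np.exp(-((theta - c) ** 2) / (2.0 * b ** 2))
        return z

    theta = np.linspace(-np.pi, np.pi, 500)
    # Illustrative P, Q, R, S, T parameters (phase centers in radians):
    beat = ecg_beat(theta, [-1.2, -0.2, 0.0, 0.2, 1.4],
                           [0.25, 0.10, 0.10, 0.10, 0.40],
                           [0.15, -0.10, 1.00, -0.15, 0.30])
    ```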

  10. Damage evaluation of reinforced concrete frame based on a combined fiber beam model

    NASA Astrophysics Data System (ADS)

    Shang, Bing; Liu, ZhanLi; Zhuang, Zhuo

    2014-04-01

    In order to analyze and simulate the impact collapse or seismic response of reinforced concrete (RC) structures, a combined fiber beam model is proposed by dividing the cross section of the RC beam into concrete fibers and steel fibers. The stress-strain relationship of the concrete fibers is based on a model proposed in design codes for concrete structures, and the stress-strain behavior of the steel fibers is based on a model suggested by others. These constitutive models are implemented into the general finite element program ABAQUS through user-defined subroutines to provide effective computational tools for the inelastic analysis of RC frame structures. The fiber model proposed in this paper is validated by comparison with experimental data for an RC column under cyclic lateral loading. The damage evolution of a three-dimensional frame subjected to impact loading is also investigated.

  11. An intermittency model for predicting roughness induced transition

    NASA Astrophysics Data System (ADS)

    Ge, Xuan; Durbin, Paul

    2014-11-01

    An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.

  12. Modeling and performance analysis of an improved movement-based location management scheme for packet-switched mobile communication systems.

    PubMed

    Chung, Yun Won; Kwon, Jae Kyun; Park, Suwon

    2014-01-01

    One of the key technologies supporting the mobility of a mobile station (MS) in mobile communication systems is location management, which consists of location update and paging. In this paper, an improved movement-based location management scheme with two movement thresholds is proposed, considering the bursty data traffic characteristics of packet-switched (PS) services. The analytical modeling of the location update and paging signaling loads of the proposed scheme is developed thoroughly, and the performance of the proposed scheme is compared with that of the conventional scheme. We show that the proposed scheme outperforms the conventional scheme in terms of total signaling load with an appropriate selection of the movement thresholds.

  13. A unified framework for group independent component analysis for multi-subject fMRI data

    PubMed Central

    Guo, Ying; Pagnoni, Giuseppe

    2008-01-01

    Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105

  14. Real external predictivity of QSAR models: how to evaluate it? Comparison of different validation criteria and proposal of using the concordance correlation coefficient.

    PubMed

    Chirico, Nicola; Gramatica, Paola

    2011-09-26

    The main utility of QSAR models is their ability to predict activities/properties for new chemicals, and this external prediction ability is evaluated by means of various validation criteria. As a measure for such evaluation, the OECD guidelines have proposed the predictive squared correlation coefficient Q^2_F1 (Shi et al.). However, other validation criteria have been proposed by other authors: the Golbraikh-Tropsha method, r^2_m (Roy), Q^2_F2 (Schüürmann et al.), and Q^2_F3 (Consonni et al.). In QSAR studies these measures are usually in accordance, though this is not always the case, and doubts can arise when contradictory results are obtained. It is likely that none of the aforementioned criteria is the best in every situation, so a comparative study using simulated data sets is proposed here, using threshold values suggested by the proponents or those widely used in QSAR modeling. In addition, a different and simple external validation measure, the concordance correlation coefficient (CCC), is proposed and compared with the other criteria. Huge data sets were used to study the general behavior of the validation measures, and the concordance correlation coefficient was shown to be the most restrictive. On using simulated data sets of a more realistic size, it was found that CCC was broadly in agreement, about 96% of the time, with the other validation measures in accepting models as predictive, and in almost all the examples it was the most precautionary. The proposed concordance correlation coefficient also works well on real data sets, where it seems to be more stable, and helps in making decisions when the validation measures are in conflict. Since it is conceptually simple, and given its stability and restrictiveness, we propose the concordance correlation coefficient as a complementary, or alternative, more prudent measure of whether a QSAR model is externally predictive.
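
    Lin's concordance correlation coefficient has a simple closed form; the sketch below is a generic implementation, not the authors' code:

    ```python
    import numpy as np

    def ccc(x, y):
        """Lin's concordance correlation coefficient between observed x
        and predicted y: 2*cov / (var_x + var_y + (mean_x - mean_y)^2)."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        sxy = np.mean((x - x.mean()) * (y - y.mean()))
        return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
    ```

    Unlike the Pearson correlation, the denominator's mean-shift term penalizes predictions that track the observations but are systematically offset, which is what makes the measure more restrictive.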

  15. Modeling and Simulation of Phased Array Antennas to Support Next-Generation Satellite Design

    NASA Technical Reports Server (NTRS)

    Tchorowski, Nicole; Murawski, Robert; Manning, Robert; Fuentes, Michael

    2016-01-01

    Developing enhanced simulation capabilities has become a significant priority for the Space Communications and Navigation (SCaN) project at NASA as new space communications technologies are proposed to replace aging NASA communications assets, such as the Tracking and Data Relay Satellite System (TDRSS). When developing the architecture for these new space communications assets, it is important to develop updated modeling and simulation methodologies, such that competing architectures can be weighed against one another and the optimal path forward can be determined. Many simulation tools have been developed at NASA for the simulation of single RF link budgets, or for the modeling and simulation of an entire network of spacecraft and their supporting SCaN network elements. However, the modeling capabilities are never fully complete, and as new technologies are proposed, gaps are identified. One such gap is the ability to rapidly develop high-fidelity simulation models of electronically steerable phased array systems. As future relay satellite architectures are proposed that include optical communications links, electronically steerable antennas will become more desirable due to the reduction in the platform vibration introduced by mechanically steerable devices. In this research, we investigate how modeling of these antennas can be introduced into our overall simulation and modeling structure. The ultimate goal of this research is two-fold: first, to enable NASA engineers to model various proposed simulation architectures and determine which proposed architecture meets the given architectural requirements; second, given a set of communications link requirements for a proposed satellite architecture, to determine the optimal configuration for a phased array antenna. A variety of tools are available for modeling phased array antennas. To meet our stated goals, the first objective of this research is to compare the subset of tools available to us, trading off the modeling fidelity of each tool against its simulation performance. When comparing several proposed architectures, higher-fidelity modeling may be desirable; however, when iterating a proposed set of communication link requirements across ranges of phased array configuration parameters, performance becomes a significant practical requirement. In either case, a minimum simulation fidelity must be met, regardless of performance considerations, which will be discussed in this research. Given a suitable set of phased array modeling tools, this research then focuses on integration with current SCaN modeling and simulation tools. While properly modeling the antenna elements of a system is vital, this is only a small part of the end-to-end communication path between a satellite and the supporting ground station and/or relay satellite assets. To properly model a proposed simulation architecture, this toolset must be integrated with other commercial and government development tools, such that the overall architecture can be examined in terms of communications, reliability, and cost. In this research, integration with previously developed communication tools is investigated.
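
    As a small example of the kind of computation such tools iterate over configuration parameters, the sketch below evaluates the normalized array factor of a uniform linear array; the geometry and parameter names are illustrative and not tied to any specific NASA tool.

    ```python
    import numpy as np

    def array_factor(n_elems, d_over_lambda, steer_deg, theta_deg):
        """Normalized array factor of a uniform linear array with element
        spacing d (in wavelengths), steered to steer_deg and evaluated at
        angles theta_deg (degrees from broadside)."""
        theta = np.radians(theta_deg)
        steer = np.radians(steer_deg)
        n = np.arange(n_elems)[:, None]
        phase = 2j * np.pi * d_over_lambda * n * (np.sin(theta) - np.sin(steer))
        return np.abs(np.exp(phase).sum(axis=0)) / n_elems

    # e.g. a 16-element, half-wavelength-spaced array steered to 20 degrees:
    pattern = array_factor(16, 0.5, 20.0, np.linspace(-90, 90, 361))
    ```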

  16. PSNet: prostate segmentation on MRI based on a convolutional neural network.

    PubMed

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei

    2018-04-01

    Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
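
    The reported evaluation metric, the Dice similarity coefficient, has a one-line definition; a minimal implementation for binary masks (generic, not the authors' code):

    ```python
    import numpy as np

    def dice(pred, truth):
        """Dice similarity coefficient between two binary masks:
        2*|A intersect B| / (|A| + |B|)."""
        pred = np.asarray(pred).astype(bool)
        truth = np.asarray(truth).astype(bool)
        inter = np.logical_and(pred, truth).sum()
        return 2.0 * inter / (pred.sum() + truth.sum())
    ```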

  17. Robust independent modal space control of a coupled nano-positioning piezo-stage

    NASA Astrophysics Data System (ADS)

    Zhu, Wei; Yang, Fufeng; Rui, Xiaoting

    2018-06-01

    In order to accurately control a coupled 3-DOF nano-positioning piezo-stage, this paper designs a hybrid controller. In this controller, a hysteresis observer based on a Bouc-Wen model is first established to compensate for the hysteresis nonlinearity of the piezoelectric actuator. Compared to hysteresis compensation using the Preisach model or the Prandtl-Ishlinskii model, the compensation method using the hysteresis observer is computationally lighter. Then, based on the proposed dynamics model, by constructing the modal filter, a robust H∞ independent modal space controller is designed and utilized to decouple the piezo-stage and deal with the unmodeled dynamics, disturbance, and hysteresis compensation error. The effectiveness of the proposed controller is demonstrated experimentally. The experimental results show that the proposed controller achieves high-precision positioning.
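
    The Bouc-Wen hysteresis state evolves according to a first-order nonlinear ODE. The sketch below integrates the classical form with explicit Euler; the parameter values are illustrative, and the paper's observer adds feedback terms not shown here.

    ```python
    import numpy as np

    def bouc_wen(x, dt, A=1.0, beta=0.5, gamma=0.5, n=1.0):
        """Integrate the Bouc-Wen hysteresis state z for a displacement
        history x (explicit Euler):
          dz/dt = A*v - beta*|v|*|z|^(n-1)*z - gamma*v*|z|^n,  v = dx/dt
        """
        x = np.asarray(x, dtype=float)
        z = np.zeros_like(x)
        for k in range(1, len(x)):
            v = (x[k] - x[k - 1]) / dt               # velocity
            zd = (A * v
                  - beta * abs(v) * abs(z[k - 1]) ** (n - 1) * z[k - 1]
                  - gamma * v * abs(z[k - 1]) ** n)
            z[k] = z[k - 1] + zd * dt
        return z
    ```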

  18. Zipper model for the melting of thin films

    NASA Astrophysics Data System (ADS)

    Abdullah, Mikrajuddin; Khairunnisa, Shafira; Akbar, Fathan

    2016-01-01

    We propose an alternative model to Lindemann’s criterion for melting that explains the melting of thin films on the basis of a molecular zipper-like mechanism. Using this model, a unique criterion for melting is obtained. We compared the results of the proposed model with experimental data on melting points and heats of fusion for many materials and obtained interesting results. A notable point reported here is how complex physics problems can sometimes be modeled with simple everyday objects that seem to have no connection to them. This kind of approach is often valuable in physics education and should be taught to undergraduate and graduate students.

  19. Signal processing system for electrotherapy applications

    NASA Astrophysics Data System (ADS)

    Płaza, Mirosław; Szcześniak, Zbigniew

    2017-08-01

    A signal processing system for electrotherapy applications is proposed in this paper. The system makes it possible to model the curve of threshold human sensitivity to current (Dalziel's curve) over the full medium-frequency range (1 kHz-100 kHz). Tests based on the proposed solution were conducted, and their results were compared with those obtained under the assumptions of the High Tone Power Therapy method and referred to optimum values. The proposed system has high dynamics and precision in mapping the curve of threshold human sensitivity to current, and can be used in all methods where threshold curves are modelled.

  20. Comparative modeling of coevolution in communities of unicellular organisms: adaptability and biodiversity.

    PubMed

    Lashin, Sergey A; Suslov, Valentin V; Matushkin, Yuri G

    2010-06-01

    We propose an original program, "Evolutionary constructor", that is capable of computationally efficient modeling of both population-genetic and ecological problems, combining these directions in one model of the required level of detail. We also present results of comparative modeling of stability, adaptability and biodiversity dynamics in populations of unicellular haploid organisms which form symbiotic ecosystems. The advantages and disadvantages of two evolutionary strategies of biota formation, one based on a few generalist taxa and one based on biodiversity, are discussed.

  1. A simple computational algorithm of model-based choice preference.

    PubMed

    Toyama, Asako; Katahira, Kentaro; Ohira, Hideki

    2017-08-01

    A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences-namely, the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
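
    For orientation, the sketch below shows one tabular TD(lambda) step with an eligibility trace, the quantity the proposed eligibility adjustment model re-weights; this is a generic textbook update, not the authors' exact algorithm, and all names are ours.

    ```python
    import numpy as np

    def td_lambda_update(Q, E, s, a, r, s_next,
                         alpha=0.1, gamma=1.0, lam=0.6):
        """One tabular TD(lambda) step: accumulate the eligibility trace,
        compute the TD error, and credit all eligible (state, action)
        pairs before decaying the traces."""
        E[s, a] += 1.0                               # accumulating trace
        delta = r + gamma * Q[s_next].max() - Q[s, a]
        Q += alpha * delta * E                       # update eligible pairs
        E *= gamma * lam                             # decay traces
        return Q, E
    ```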

  2. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using deep learning methods has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applicability by comparing it with existing image denoising algorithms. We train the proposed CDAE model using a training set of 3000 chest radiograms. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms, including the median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed CDAE denoising model, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm achieves a superior noise-reduction effect in chest radiograms compared to the TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of the CDAE were at least 10% higher than those of the conventional denoising algorithms. In conclusion, the image denoising algorithm developed using the CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
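
    A minimal sketch of a convolutional denoising autoencoder in PyTorch; the layer sizes are illustrative and not the architecture used in the study.

    ```python
    import torch
    import torch.nn as nn

    class CDAE(nn.Module):
        """Minimal convolutional denoising autoencoder: trained to map a
        noisy radiograph to its clean counterpart."""
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                                   output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                                   output_padding=1),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Training pairs: (image + noise) as input, clean image as target,
    # with an MSE reconstruction loss, e.g.:
    # loss = nn.MSELoss()(model(noisy), clean)
    ```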

  3. Development and Implementation of a Telecommuting Evaluation Framework, and Modeling the Executive Telecommuting Adoption Process

    NASA Astrophysics Data System (ADS)

    Vora, V. P.; Mahmassani, H. S.

    2002-02-01

    This work proposes and implements a comprehensive evaluation framework to document the telecommuter, organizational, and societal impacts of telecommuting through telecommuting programs. Evaluation processes and materials within the outlined framework are also proposed and implemented. As the first component of the evaluation process, the executive survey is administered within a public sector agency. The survey data is examined through exploratory analysis and is compared to a previous survey of private sector executives. The ordinal probit, dynamic probit, and dynamic generalized ordinal probit (DGOP) models of telecommuting adoption are calibrated to identify factors which significantly influence executive adoption preferences and to test the robustness of such factors. The public sector DGOP model of executive willingness to support telecommuting under different program scenarios is compared with an equivalent private sector DGOP model. Through the telecommuting program, a case study of telecommuting travel impacts is performed to further substantiate research.

  4. The Next Generation of Disproportionality Research: Toward a Comparative Model in the Study of Equity in Ability Differences

    ERIC Educational Resources Information Center

    Artiles, Alfredo J.; Bal, Aydin

    2008-01-01

    Minority student disproportionate representation in special education has been debated and (increasingly) studied in the United States for the past 40 years. The purpose of this article is to place this problem in the larger arena of equity studies related to "difference" in educational practice and propose a comparative model to study…

  5. A Model for the Determination of the Costs of Special Education as Compared with That for General Education. Reading Draft.

    ERIC Educational Resources Information Center

    Ernst and Ernst, Chicago, IL.

    Proposed in the report is a model quantitative cost accounting system designed to help school districts gather and report data useful in determining equitable reimbursement formulas for special education as compared with general education. Included are sections on the approach and methodology used to construct a hypothetical school district,…

  6. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    NASA Astrophysics Data System (ADS)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    The survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated, in the sense that diseases that occur rarely may have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide more interesting and certainly improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the Discrete Time Hazard model with the Poisson Regression model to jointly model survival and count. As the Artificial Neural Network (ANN) has become a most powerful computational tool for modelling complex non-linear systems, it was proposed to develop a new joint model of the survival and count of Dengue patients in Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fits, measures such as the root mean square error (RMSE), absolute mean error (AME) and correlation coefficient (R) were used. These measures indicate that the GRNN model fits the data better than the GLMM model.
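
    GRNN prediction reduces to a Gaussian-kernel-weighted average of the training targets (the Nadaraya-Watson form); a minimal sketch, with an illustrative spread parameter:

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, sigma=0.5):
        """Generalized regression neural network: each prediction is a
        Gaussian-kernel-weighted average of the training targets."""
        X_train = np.asarray(X_train, float)
        X_query = np.asarray(X_query, float)
        d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))      # (queries, train samples)
        return (w @ np.asarray(y_train, float)) / w.sum(axis=1)
    ```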

  7. Cerebellum-inspired neural network solution of the inverse kinematics problem.

    PubMed

    Asadi-Eydivand, Mitra; Ebadzadeh, Mohammad Mehdi; Solati-Hashjin, Mehran; Darlot, Christian; Abu Osman, Noor Azuan

    2015-12-01

    The demand today for more complex robots with higher-degree-of-freedom manipulators is increasing because of technological advances. Obtaining the precise movement for a desired trajectory or a sequence of arm positions requires the computation of the inverse kinematic (IK) function, which is a major problem in robotics. The solution of the IK problem leads robots to the precise position and orientation of their end-effector. We developed a bioinspired solution, comparable with cerebellar anatomy and function, to solve the said problem. The proposed model is stable under all conditions merely by parameter determination, in contrast to recursive model-based solutions, which remain stable only under certain conditions. We modified the proposed model for a simple two-segmented arm to prove the feasibility of the model under a basic condition. A fuzzy neural network, through its learning method, was used to compute the parameters of the system. Simulation results show the practical feasibility and efficiency of the proposed model in robotics. The main advantage of the proposed model is its generalizability and potential use in any robot.

  8. The multiaxial fatigue response of cylindrical geometry under proportional loading subject to fluctuating tractions

    NASA Astrophysics Data System (ADS)

    Martinez, Rudy D.

    A multiaxial fatigue model is proposed as it would apply to cylindrical geometry in the form of industrial-sized pressure vessels. The model is based on energy methods, with the loading states confined to fluctuating tractions under proportional loading. The proposed fatigue model is an effort to support and enhance existing fatigue life prediction methods for pressure vessel design, beyond the ASME Boiler and Pressure Vessel codes, ASME Section VIII Divisions 2 and 3, which are currently used in industrial engineering practice. Both uniaxial and biaxial cyclic test data on low alloy pearlitic-ferritic steel cylinders are utilized to substantiate the proposed fatigue model. Approximate material hardening and softening effects from the applied load cycling states and the Bauschinger effect are accounted for by adjusting the strain-control-generated hysteresis loops and the cyclic stress-strain curve. The proposed fatigue energy model and the current ASME fatigue model are then compared with regard to the accuracy of predicting fatigue life cycle consistencies.

  9. A Comprehensive Study on Pyrolysis Mechanism of Substituted β-O-4 Type Lignin Dimers.

    PubMed

    Jiang, Xiaoyan; Lu, Qiang; Hu, Bin; Liu, Ji; Dong, Changqing; Yang, Yongping

    2017-11-09

    In order to understand the pyrolysis mechanism of β-O-4 type lignin dimers, a pyrolysis model is proposed which considers the effects of functional groups (hydroxyl, hydroxymethyl and methoxyl) on the alkyl side chain and aromatic ring. Furthermore, five specific β-O-4 type lignin dimer model compounds are selected to investigate their integrated pyrolysis mechanism by density functional theory (DFT) methods, to further understand and verify the proposed pyrolysis model. The results indicate that a total of 11 pyrolysis mechanisms, including both concerted mechanisms and homolytic mechanisms, might occur for the initial pyrolysis of the β-O-4 type lignin dimers. Concerted mechanisms are predominant as compared with homolytic mechanisms throughout unimolecular decomposition pathways. The competitiveness of the eleven pyrolysis mechanisms are revealed via different model compounds, and the proposed pyrolysis model is ranked in full consideration of functional groups effects. The proposed pyrolysis model can provide a theoretical basis to predict the reaction pathways and products during the pyrolysis process of β-O-4 type lignin dimers.

  10. A Comprehensive Study on Pyrolysis Mechanism of Substituted β-O-4 Type Lignin Dimers

    PubMed Central

    Jiang, Xiaoyan; Lu, Qiang; Hu, Bin; Liu, Ji; Dong, Changqing; Yang, Yongping

    2017-01-01

    In order to understand the pyrolysis mechanism of β-O-4 type lignin dimers, a pyrolysis model is proposed which considers the effects of functional groups (hydroxyl, hydroxymethyl and methoxyl) on the alkyl side chain and aromatic ring. Furthermore, five specific β-O-4 type lignin dimer model compounds are selected to investigate their integrated pyrolysis mechanism by density functional theory (DFT) methods, to further understand and verify the proposed pyrolysis model. The results indicate that a total of 11 pyrolysis mechanisms, including both concerted mechanisms and homolytic mechanisms, might occur for the initial pyrolysis of the β-O-4 type lignin dimers. Concerted mechanisms are predominant as compared with homolytic mechanisms throughout unimolecular decomposition pathways. The competitiveness of the eleven pyrolysis mechanisms are revealed via different model compounds, and the proposed pyrolysis model is ranked in full consideration of functional groups effects. The proposed pyrolysis model can provide a theoretical basis to predict the reaction pathways and products during the pyrolysis process of β-O-4 type lignin dimers. PMID:29120350

  11. Parametric Study of Shear Strength of Concrete Beams Reinforced with FRP Bars

    NASA Astrophysics Data System (ADS)

    Thomas, Job; Ramadass, S.

    2016-09-01

    Fibre Reinforced Polymer (FRP) bars have been widely used as internal reinforcement in structural elements in the last decade. The corrosion resistance of FRP bars qualifies their use in severe and marine exposure conditions. A total of eight concrete beams longitudinally reinforced with FRP bars were cast and tested at shear span to depth ratios of 0.5 and 1.75. Shear strength test data for 188 beams published in the literature were also used. The model originally proposed by the Indian Standard Code of practice for the prediction of the shear strength of concrete beams reinforced with steel bars, IS:456 (Plain and reinforced concrete, code of practice, fourth revision. Bureau of Indian Standards, New Delhi, 2000), is considered, and a modification to account for the influence of the FRP bars is proposed based on regression analysis. Out of the 196 test data, 110 are used for the regression analysis and 86 for the validation of the model. In addition, the shear strength for the 86 validation test data is assessed using eleven models proposed by various researchers. The proposed model accounts for the compressive strength of concrete (f_ck), the modulus of elasticity of the FRP rebar (E_f), the longitudinal reinforcement ratio (ρ_f), the shear span to depth ratio (a/d) and the size effect of beams. The shear strength of beams predicted using the proposed model and the 11 models proposed by other researchers is compared with the corresponding experimental results. The mean ratio of the predicted to the experimental shear strength for the 86 beams used for validation is found to be 0.93. The result of the statistical analysis indicates that the prediction based on the proposed model corroborates the corresponding experimental data.

  12. Statistical Modeling of Retinal Optical Coherence Tomography.

    PubMed

    Amini, Zahra; Rabbani, Hossein

    2016-06-01

    In this paper, a new model for retinal Optical Coherence Tomography (OCT) images is proposed. This statistical model is based on introducing a nonlinear Gaussianization transform to convert the probability distribution function (pdf) of each OCT intra-retinal layer to a Gaussian distribution. The retina is a layered structure, and in OCT each of these layers has a specific pdf which is corrupted by speckle noise; therefore, a mixture model is proposed for the statistical modeling of OCT images. A Normal-Laplace distribution, which is the convolution of a Laplace pdf and Gaussian noise, is proposed as the distribution of each component of this model. The reason for choosing the Laplace pdf is the monotonically decaying behavior of OCT intensities in each layer for healthy cases. After fitting the mixture model to the data, each component is Gaussianized and all of them are combined by the Averaged Maximum A Posteriori (AMAP) method. To demonstrate the ability of this method, a new contrast enhancement method based on this statistical model is proposed and tested on thirteen healthy 3D OCTs taken by the Topcon 3D OCT and five 3D OCTs from Age-related Macular Degeneration (AMD) patients, taken by the Zeiss Cirrus HD-OCT. Comparing the results with two contending techniques, the superiority of the proposed method is demonstrated both visually and numerically. Furthermore, to prove the efficacy of the proposed method for a more direct and specific purpose, an improvement in the segmentation of intra-retinal layers, using the proposed contrast enhancement method as a preprocessing step, is demonstrated.
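
    The idea of a Gaussianization transform can be illustrated with a simple rank-based version, mapping empirical ranks through the normal quantile function; this is a simplified stand-in for the paper's parametric, component-wise transform.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def gaussianize(values):
        """Map a sample to a standard normal distribution by passing its
        empirical ranks through the normal quantile function."""
        ranks = rankdata(values) / (len(values) + 1.0)   # ranks in (0, 1)
        return norm.ppf(ranks)
    ```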

  13. Optical properties of light absorbing carbon aggregates mixed with sulfate: assessment of different model geometries for climate forcing calculations.

    PubMed

    Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin

    2012-04-23

    Light scattering by light absorbing carbon (LAC) aggregates encapsulated in sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, a visible, and an IR wavelength, different particle sizes, and volume fractions. The reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. None of these models matches the reference results sufficiently well. Remarkably, the more realistic core-shell geometries fall behind the homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for the total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America

  14. [GSH fermentation process modeling using entropy-criterion based RBF neural network model].

    PubMed

    Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng

    2008-05-01

    The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. In contrast to traditional MSE-criterion-based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential application merit for GSH fermentation process modeling.

  15. PTSD's latent structure in Malaysian tsunami victims: assessing the newly proposed Dysphoric Arousal model.

    PubMed

    Armour, Cherie; Raudzah Ghazali, Siti; Elklit, Ask

    2013-03-30

    The underlying latent structure of Posttraumatic Stress Disorder (PTSD) is widely researched. However, despite a plethora of factor analytic studies, no single model has consistently been shown as superior to alternative models. The two most often supported models are the Emotional Numbing and the Dysphoria models. However, a recently proposed five-factor Dysphoric Arousal model has been gathering support over and above existing models. Data for the current study were gathered from Malaysian Tsunami survivors (N=250). Three competing models (Emotional Numbing/Dysphoria/Dysphoric Arousal) were specified and estimated using Confirmatory Factor Analysis (CFA). The Dysphoria model provided superior fit to the data compared to the Emotional Numbing model. However, using chi-square difference tests, the Dysphoric Arousal model showed a superior fit compared to both the Emotional Numbing and Dysphoria models. In conclusion, the current results suggest that the Dysphoric Arousal model better represents PTSD's latent structure and that items measuring sleeping difficulties, irritability/anger and concentration difficulties form a separate, unique PTSD factor. These results are discussed in relation to the role of Hyperarousal in PTSD's on-going symptom maintenance and in relation to the DSM-5. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Vehicle Surveillance with a Generic, Adaptive, 3D Vehicle Model.

    PubMed

    Leotta, Matthew J; Mundy, Joseph L

    2011-07-01

    In automated surveillance, one is often interested in tracking road vehicles, measuring their shape in 3D world space, and determining vehicle classification. To address these tasks simultaneously, an effective approach is the constrained alignment of a prior model of 3D vehicle shape to images. Previous 3D vehicle models are either generic but overly simple or rigid and overly complex. Rigid models represent exactly one vehicle design, so a large collection is needed. A single generic model can deform to a wide variety of shapes, but those shapes have been far too primitive. This paper uses a generic 3D vehicle model that deforms to match a wide variety of passenger vehicles. It is adjustable in complexity between the two extremes. The model is aligned to images by predicting and matching image intensity edges. Novel algorithms are presented for fitting models to multiple still images and simultaneous tracking while estimating shape in video. Experiments compare the proposed model to simple generic models in accuracy and reliability of 3D shape recovery from images and tracking in video. Standard techniques for classification are also used to compare the models. The proposed model outperforms the existing simple models at each task.

  17. A probability distribution model of tooth pits for evaluating time-varying mesh stiffness of pitting gears

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Liu, Zongyao; Wang, Delong; Yang, Xiao; Liu, Huan; Lin, Jing

    2018-06-01

    Tooth damage often causes a reduction in gear mesh stiffness, so time-varying mesh stiffness (TVMS) can be treated as an indicator of gear health. This study investigates the mesh stiffness variations of a pair of external spur gears with tooth pitting, and proposes a new model for describing tooth pitting based on probability distributions. In the model, considering the appearance and development process of tooth pitting, the pitting on the surface of spur gear teeth is modeled as a series of pits with a uniform distribution in the direction of tooth width and a normal distribution in the direction of tooth height. In addition, four pitting degrees, from no pitting to severe pitting, are modeled. Finally, the influence of tooth pitting on TVMS is analyzed in detail and the proposed model is validated by comparison with a finite element model. The comparison results show that the proposed model is effective for TVMS evaluation of pitting gears.
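    To make the distributional assumption concrete, here is a small sketch that samples pit centers on a tooth flank as described: uniform across the tooth width and normal along the tooth height. The flank dimensions, pit counts per severity level, and the mean/spread of the height distribution are hypothetical placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical tooth flank dimensions (mm).
TOOTH_WIDTH, TOOTH_HEIGHT = 20.0, 8.0

# Hypothetical pit counts for four severity degrees (none to severe).
severity_levels = {"none": 0, "slight": 20, "moderate": 60, "severe": 150}

def sample_pits(n_pits, mean_h=4.0, std_h=1.0):
    """Pit centers: uniform across width, normal along height (clipped to flank)."""
    x = rng.uniform(0.0, TOOTH_WIDTH, n_pits)
    y = np.clip(rng.normal(mean_h, std_h, n_pits), 0.0, TOOTH_HEIGHT)
    return np.column_stack([x, y])

pits = {level: sample_pits(n) for level, n in severity_levels.items()}
```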

  18. Phenomenological model of visual acuity

    NASA Astrophysics Data System (ADS)

    Gómez-Pedrero, José A.; Alonso, José

    2016-12-01

    We propose in this work a model for describing visual acuity (V) as a function of defocus and pupil diameter. Although the model is mainly based on geometrical optics, it also incorporates nongeometrical effects phenomenologically. Compared to similar visual acuity models, the proposed one considers the effect of astigmatism and the variability of best corrected V among individuals; it also takes into account the accommodation and the "tolerance to defocus," the latter through a phenomenological parameter. We have fitted the model to the V data provided in the works of Holladay et al. and Peters, showing the ability of this model to accurately describe the variation of V against blur and pupil diameter. We have also performed a comparison between the proposed model and others previously published in the literature. The model is mainly intended for use in the design of ophthalmic compensations, but it can also be useful in other fields such as visual ergonomics, design of visual tests, and optical instrumentation.

  19. Extension of the Haseman-Elston regression model to longitudinal data.

    PubMed

    Won, Sungho; Elston, Robert C; Park, Taesung

    2006-01-01

    We propose an extension of the Haseman and Elston regression method for linkage analysis to longitudinal data. The proposed model is a mixed model with several random effects. As response variables, we investigate the sibship sample-mean corrected cross-product (smHE) and the BLUP-mean corrected cross-product (pmHE), comparing them with the original squared difference (oHE), the overall-mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data, and it can test for gene × time interaction to detect genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.

  20. Induced subgraph searching for geometric model fitting

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel graph-based model fitting method to fit and segment multiple-structure data. In the graph constructed on the data, each model instance is represented as an induced subgraph. Following the idea of pursuing maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs that includes the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the search is conducted on the "qualified" subgraphs. Multiple model instances can be estimated simultaneously by solving a converted problem, and an energy evaluation function is introduced to determine the number of model instances in the data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  1. Short-range quantitative precipitation forecasting using Deep Learning approaches

    NASA Astrophysics Data System (ADS)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Predicting short-range quantitative precipitation is very important for flood forecasting, early flood warning, and other hydrometeorological purposes. This study aims to improve precipitation forecasting skill using a recently developed and advanced machine learning technique named Long Short-Term Memory (LSTM). The proposed LSTM learns the changing patterns of clouds from Cloud-Top Brightness Temperature (CTBT) images, retrieved from the infrared channel of the Geostationary Operational Environmental Satellite (GOES), using a sophisticated and effective learning method. After learning the dynamics of clouds, the LSTM model predicts upcoming rainy CTBT events. The proposed model is then merged with a precipitation estimation algorithm termed Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) to provide precipitation forecasts. The results of the merged LSTM with PERSIANN are compared to those of an Elman-type Recurrent Neural Network (RNN) merged with PERSIANN and the Final Analysis of the Global Forecast System model over the states of Oklahoma, Florida, and Oregon. The performance of each model is investigated during three storm events, each located over one of the study regions. The results indicate that the merged LSTM forecasts outperform the numerical and statistical baselines in terms of Probability of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), RMSE, and correlation coefficient, especially in convective systems. The proposed method shows superior short-term forecasting capabilities compared to the other methods.
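    The categorical verification scores cited here follow from a standard 2×2 contingency table of forecast versus observed rain events. The sketch below computes them for binary rain/no-rain fields; the threshold and input arrays are illustrative, and it assumes each denominator is nonzero.

```python
import numpy as np

def categorical_scores(forecast, observed, threshold=1.0):
    """POD, FAR, CSI for gridded precipitation at a given rain threshold."""
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    csi = hits / (hits + misses + false_alarms)  # critical success index
    return pod, far, csi
```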

  2. A New Comptonization Model for Weakly Magnetized Accreting NS LMXBs

    NASA Astrophysics Data System (ADS)

    Paizis, A.; Farinelli, R.; Titarchuk, L.; Frontera, F.; Cocchi, M.; Ferrigno, C.

    2009-05-01

    We have developed a new Comptonization model to propose, for the first time, a self-consistent physical interpretation of the complex spectral evolution seen in NS LMXBs. The model and its application to LMXBs are presented and compared with the expected capabilities of Simbol-X.

  3. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). The global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions, as well as the differential code biases (DCBs) of satellites and receivers, can be treated as unknown parameters estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, a non-parametric modeling technique that makes use of compactly supported B-spline basis functions generated automatically from the observations. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for the best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model, and the estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE-distributed DCBs. The results show that the SP-BMARS algorithm can estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
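    As a one-dimensional simplification of the tensor-product B-spline representation used here, the sketch below fits a least-squares B-spline to scattered data with SciPy; the knot vector, cubic degree, and synthetic data are illustrative choices, and the adaptive scale-by-scale basis selection of SP-BMARS is not reproduced.

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Synthetic "VTEC vs. latitude"-style data (illustrative only).
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-60, 60, 300))
y = 20 + 10 * np.exp(-(x / 25.0) ** 2) + rng.normal(0, 0.5, x.size)

k = 3                                         # cubic B-splines
interior = np.linspace(-50, 50, 9)            # assumed interior knots
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]  # clamped knot vector
spline = make_lsq_spline(x, y, t, k)          # least-squares B-spline fit
y_fit = spline(x)
```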

  4. Implications of a Need-Press-Competence Model for Institutionalized Elderly.

    ERIC Educational Resources Information Center

    Wirzbicki, Philip J.; Smith, Barry D.

    The predictive utility of a proposed need-press competence (NPC) model of satisfaction was compared with that of the traditional need-press fit model. Structured interviews with 30 residents from two nursing homes provided measures of needs, press, competence, and satisfaction. The NPC model was a better predictor of expressed satisfaction than…

  5. Affordability Funding Models for Early Childhood Services

    ERIC Educational Resources Information Center

    Purcal, Christiane; Fisher, Karen

    2006-01-01

    This paper presents a model of the approaches open to government to ensure that early childhood services are affordable to families. We derived the model from a comparative literature review of affordability approaches taken by government, both in Australia and internationally. The model adds significantly to the literature by proposing a means to…

  6. Comparing Three Patterns of Strengths and Weaknesses Models for the Identification of Specific Learning Disabilities

    ERIC Educational Resources Information Center

    Miller, Daniel C.; Maricle, Denise E.; Jones, Alicia M.

    2016-01-01

    Processing Strengths and Weaknesses (PSW) models have been proposed as a method for identifying specific learning disabilities. Three PSW models were examined for their ability to predict expert identified specific learning disabilities cases. The Dual Discrepancy/Consistency Model (DD/C; Flanagan, Ortiz, & Alfonso, 2013) as operationalized by…

  7. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.

  8. Latent component-based gear tooth fault detection filter using advanced parametric modeling

    NASA Astrophysics Data System (ADS)

    Ettefagh, M. M.; Sadeghi, M. H.; Rezaee, M.; Chitsaz, S.

    2009-10-01

    In this paper, a new parametric model-based filter is proposed for gear tooth fault detection. The design of the filter consists of identifying the most appropriate latent component (LC) of the undamaged gearbox signal by analyzing the instant modules (IMs) and instant frequencies (IFs), and then using the component with the lowest IM as the filter output for detecting gearbox faults. The filter parameters are estimated using LC theory, in which an advanced parametric modeling method has been implemented. The proposed method is applied to signals extracted from a simulated gearbox for detection of simulated gear faults. In addition, the method is used for quality inspection of the production Nissan-Junior vehicle gearbox by gear profile error detection on an industrial test bed. For evaluation purposes, the proposed method is compared with previous parametric TAR/AR-based filters, in which the parametric model residual is considered the filter output and Yule-Walker and Kalman filters are implemented to estimate the parameters. The results confirm the high performance of the proposed fault detection method.

  9. Early Homo and the role of the genus in paleoanthropology.

    PubMed

    Villmoare, Brian

    2018-01-01

    The history of the discovery of early fossils attributed to the genus Homo has been contentious, with scholars disagreeing over the generic assignment of fossils proposed as members of our genus. In this manuscript I review the history of discovery and debate over early Homo and evaluate the various taxonomic hypotheses for the genus. To get a sense of how hominin taxonomy compares to taxonomic practice outside paleoanthropology, I compare the diversity of Homo to genera in other vertebrate clades. Finally, I propose a taxonomic model that hews closely to current models for hominin phylogeny and is consistent with taxonomic practice across evolutionary biology. © 2018 American Association of Physical Anthropologists.

  10. Quantifying Parkinson's disease progression by simulating gait patterns

    NASA Astrophysics Data System (ADS)

    Cárdenas, Luisa; Martínez, Fabio; Atehortúa, Angélica; Romero, Eduardo

    2015-12-01

    Modern rehabilitation protocols for most neurodegenerative diseases, in particular Parkinson's disease, rely on a clinical analysis of gait patterns. Currently, such analysis is highly dependent on both the examiner's expertise and the type of evaluation, so the development of evaluation methods with objective measures is crucial. Physical models arise as a powerful alternative to quantify movement patterns and to emulate the progression and performance of specific treatments. This work introduces a novel quantification of Parkinson's disease progression using a physical model that accurately represents the main gait biomarker, the body Center of Gravity (CoG). The model tracks the whole gait cycle with a coupled double inverted pendulum that emulates the leg swing during the single-support phase and a damper-spring system (SDP) that recreates both legs in contact with the ground during the double-support phase. The patterns generated by the proposed model are compared with actual ones learned from 24 subjects in stages 2, 3, and 4. The evaluation demonstrates a better performance of the proposed model when compared with a baseline model (SP) composed of a coupled double pendulum and a mass-spring system. The Fréchet distance measured the differences between model estimations and real trajectories, showing, for stages 2, 3, and 4, distances of 0.137, 0.155, and 0.38 for the baseline and 0.07, 0.09, and 0.29 for the proposed method.
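    The Fréchet distance used for this evaluation can be computed between discretized trajectories with the standard dynamic-programming recurrence; the sketch below is a minimal discrete version for 2D curves, with the input polylines as placeholders.

```python
import numpy as np

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polylines P and Q (arrays of 2D points)."""
    n, m = len(P), len(Q)
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=2)  # pairwise distances
    ca = np.full((n, m), np.inf)
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]),
                           d[i, j])
    return ca[-1, -1]
```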

  11. Correcting for Measurement Error in Time-Varying Covariates in Marginal Structural Models.

    PubMed

    Kyle, Ryan P; Moodie, Erica E M; Klein, Marina B; Abrahamowicz, Michał

    2016-08-01

    Unbiased estimation of causal parameters from marginal structural models (MSMs) requires a fundamental assumption of no unmeasured confounding. Unfortunately, the time-varying covariates used to obtain inverse probability weights are often error-prone. Although substantial measurement error in important confounders is known to undermine control of confounders in conventional unweighted regression models, this issue has received comparatively limited attention in the MSM literature. Here we propose a novel application of the simulation-extrapolation (SIMEX) procedure to address measurement error in time-varying covariates, and we compare 2 approaches. The direct approach to SIMEX-based correction targets outcome model parameters, while the indirect approach corrects the weights estimated using the exposure model. We assess the performance of the proposed methods in simulations under different clinically plausible assumptions. The simulations demonstrate that measurement errors in time-dependent covariates may induce substantial bias in MSM estimators of causal effects of time-varying exposures, and that both proposed SIMEX approaches yield practically unbiased estimates in scenarios featuring low-to-moderate degrees of error. We illustrate the proposed approach in a simple analysis of the relationship between sustained virological response and liver fibrosis progression among persons infected with hepatitis C virus, while accounting for measurement error in γ-glutamyltransferase, using data collected in the Canadian Co-infection Cohort Study from 2003 to 2014. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
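    For intuition, SIMEX deliberately adds extra measurement error at increasing multiples λ of the known error variance, re-estimates the parameter at each level, and extrapolates the trend back to λ = −1 (no error). The sketch below illustrates the idea for a simple linear-regression slope; the error variance, remeasurement count, and quadratic extrapolant are illustrative assumptions, not the MSM weight-model correction of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma_u = 2000, 0.5                   # assumed known measurement-error SD
x = rng.standard_normal(n)               # true covariate
w = x + rng.normal(0, sigma_u, n)        # error-prone observed covariate
y = 2.0 * x + rng.normal(0, 0.3, n)      # outcome; true slope = 2

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    # Average over remeasurements with inflated error variance lam * sigma_u^2.
    b = [np.polyfit(w + rng.normal(0, np.sqrt(lam) * sigma_u, n), y, 1)[0]
         for _ in range(50)]
    slopes.append(np.mean(b))

# Quadratic extrapolation of slope(lambda) back to lambda = -1.
coef = np.polyfit(lambdas, slopes, 2)
slope_simex = np.polyval(coef, -1.0)     # approximately recovers the true slope
```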

  12. Distance measurement based on light field geometry and ray tracing.

    PubMed

    Chen, Yanqin; Jin, Xin; Dai, Qionghai

    2017-01-09

    In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed model is composed of two ray-tracing sub-models: an object space model and an image space model. The two theoretical sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it following the refraction theorem. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated using refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to object space, the relationships between objects' imaging diameters and the corresponding distances of object planes are derived. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted using a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.

  13. Parametric representation of weld fillets using shell finite elements—a proposal based on minimum stiffness and inertia errors

    NASA Astrophysics Data System (ADS)

    Echer, L.; Marczak, R. J.

    2018-02-01

    The objective of the present work is to introduce a methodology capable of modelling welded components for structural stress analysis. The modelling technique was based on the recommendations of the International Institute of Welding; however, some geometrical features of the weld fillet were used as design parameters in an optimization problem. Namely, the weld leg length and thickness of the shell elements representing the weld fillet were optimized in such a way that the first natural frequencies were not changed significantly when compared to a reference result. Sequential linear programming was performed for T-joint structures corresponding to two different structural details: with and without full penetration weld fillets. Both structural details were tested in scenarios of various plate thicknesses and depths. Once the optimal parameters were found, a modelling procedure was proposed for T-shaped components. Furthermore, the proposed modelling technique was extended for overlapped welded joints. The results obtained were compared to well-established methodologies presented in standards and in the literature. The comparisons included results for natural frequencies, total mass and structural stress. By these comparisons, it was observed that some established practices produce significant errors in the overall stiffness and inertia. The methodology proposed herein does not share this issue and can be easily extended to other types of structure.

  14. Impairment assessment of orthogonal frequency division multiplexing over dispersion-managed links in backbone and backhaul networks

    NASA Astrophysics Data System (ADS)

    Tamilarasan, Ilavarasan; Saminathan, Brindha; Murugappan, Meenakshi

    2016-04-01

    The past decade has seen phenomenal usage of orthogonal frequency division multiplexing (OFDM) in both wired and wireless communication domains, and it has also been proposed in the literature as a future-proof technique for implementing flexible resource allocation in cognitive optical networks. Fiber impairment assessment and adaptive compensation become critical in such implementations. A comprehensive analytical model for impairments in OFDM-based fiber links is developed. The proposed model includes the combined impact of laser phase fluctuations, fiber dispersion, self-phase modulation, cross-phase modulation, four-wave mixing, the nonlinear phase noise due to the interaction of amplified spontaneous emission with fiber nonlinearities, and the photodetector noises. The bit error rate expression for the proposed model is derived based on error vector magnitude estimation. The performance of the proposed model is presented and compared for dispersion-compensated and uncompensated backbone/backhaul links. The results suggest that OFDM would perform better for uncompensated links than for compensated links, owing to the negligible FWM effects, and that there is a need for flexible compensation. The proposed model can be employed in cognitive optical networks for accurate assessment of fiber-related impairments.

  15. Comparative analysis of stress in a new proposal of dental implants.

    PubMed

    Valente, Mariana Lima da Costa; de Castro, Denise Tornavoi; Macedo, Ana Paula; Shimano, Antonio Carlos; Dos Reis, Andréa Cândido

    2017-08-01

    The purpose of this study was to compare, through photoelastic analysis, the stress distribution around conventional and modified external hexagon (EH) and morse taper (MT) dental implant connections. Four photoelastic models were prepared (n=1): Model 1, conventional EH cylindrical implant (Ø 4.0 mm × 11 mm, Neodent®); Model 2, modified EH cylindrical implant; Model 3, conventional MT conical implant (Ø 4.3 mm × 10 mm, Neodent®); and Model 4, modified MT conical implant. Axial and oblique (30° tilt) loads of 100 and 150 N were applied to the devices coupled to the implants. A plane transmission polariscope was used in the analysis of fringes, and each position of interest was recorded by a digital camera. The Tardy method was used to quantify the fringe order (n), from which the maximum shear stress (τ) value at each selected point is calculated. The results showed lower stress concentration in the modified cylindrical implant (EH) compared to the conventional model, with application of 150 N axial and 100 N oblique loads. Lower stress was observed for the modified conical (MT) implant with the application of 100 and 150 N oblique loads, which was not observed for the conventional implant model. The comparative analysis of the models showed that the new design proposal generates good stress distribution, especially in the cervical third, suggesting the preservation of bone tissue in the bone crest region. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Deep hierarchical attention network for video description

    NASA Astrophysics Data System (ADS)

    Li, Shuohao; Tang, Min; Zhang, Jun

    2018-03-01

    Pairing video to natural language description remains a challenge in computer vision and machine translation. Inspired by image description, which uses an encoder-decoder model to reduce a visual scene into a single sentence, we propose a deep hierarchical attention network for video description. The proposed model uses a convolutional neural network (CNN) and a bidirectional LSTM network as encoders, while a hierarchical attention network is used as the decoder. Compared to encoder-decoder models used in video description, the bidirectional LSTM network can capture the temporal structure among video frames. Moreover, the hierarchical attention network has an advantage over a single-layer attention network in global context modeling. To make a fair comparison with other methods, we evaluate the proposed architecture with different types of CNN structures and decoders. Experimental results on standard datasets show that our model outperforms state-of-the-art techniques.
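    At its core, each attention layer in such a decoder computes a softmax-weighted sum of encoder features. A minimal single-query, scaled dot-product sketch is shown below; the dimensions are illustrative, and the paper's hierarchical arrangement stacks such layers rather than using this exact form.

```python
import numpy as np

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)    # one score per frame feature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()              # softmax over frames
    return weights @ values               # context vector for the decoder

# Illustrative: 20 frame features of dimension 64.
rng = np.random.default_rng(0)
keys = values = rng.standard_normal((20, 64))
context = attention(rng.standard_normal(64), keys, values)
```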

  17. Modelling of double air-bridged structured inductor implemented by a GaAs integrated passive device manufacturing process

    NASA Astrophysics Data System (ADS)

    Li, Yang; Yao, Zhao; Zhang, Chun-Wei; Fu, Xiao-Qian; Li, Zhi-Ming; Li, Nian-Qiang; Wang, Cong

    2017-05-01

    To provide excellent performance and demonstrate the development of a complicated structure for modules and systems, this paper presents a double air-bridge-structured symmetrical differential inductor based on integrated passive device technology. Corresponding to the proposed structure, a new manufacturing process fabricated on a high-resistivity GaAs substrate is described in detail. Frequency-independent physical models are presented with lumped elements and the results of skin-effect-based measurements. Finally, key features of the inductor are compared; the good agreement between the measurements and the modeled circuit fully verifies the validity of the proposed modeling approach. We also present a comparison of inductor performance for different coil turns. The proposed work can provide a good solution for the design, fabrication, modeling, and practical application of radio-frequency modules and systems.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavignet, A.A.; Sobey, I.J.

    Drilling of highly deviated wells can be complicated by the formation of a thick bed of cuttings at low flow rates. The model proposed in this paper shows what mechanisms control the thickness of such a bed, and the model predictions are compared with experimental results.

  19. Maxwell-Stefan diffusion coefficient estimation for ternary systems: an ideal ternary alcohol system.

    PubMed

    Allie-Ebrahim, Tariq; Zhu, Qingyu; Bräuer, Pierre; Moggridge, Geoff D; D'Agostino, Carmine

    2017-06-21

    The Maxwell-Stefan model is a popular diffusion model originally developed to model diffusion of gases, which can be considered thermodynamically ideal mixtures, although its application has been extended to model diffusion in non-ideal liquid mixtures as well. A drawback of the model is that it requires the Maxwell-Stefan diffusion coefficients, which are not based on measurable quantities and must instead be estimated. As a result, numerous estimation methods, such as the Darken model, have been proposed to estimate these diffusion coefficients. However, the Darken model was derived, and is only well defined, for binary systems. It has been extended to ternary systems in two proposed forms, one by R. Krishna and J. M. van Baten, Ind. Eng. Chem. Res., 2005, 44, 6939-6947 and the other by X. Liu, T. J. H. Vlugt and A. Bardow, Ind. Eng. Chem. Res., 2011, 50, 10350-10358. In this paper, the two forms have been analysed against the ideal ternary system of methanol/butan-1-ol/propan-1-ol using experimental values of self-diffusion coefficients. In particular, using pulsed gradient stimulated echo nuclear magnetic resonance (PGSTE-NMR) we have measured the self-diffusion coefficients in various methanol/butan-1-ol/propan-1-ol mixtures. These experimental self-diffusion coefficients were then used as the input data for the Darken model, and the predictions of the two multicomponent forms were compared to experimental values of mutual diffusion coefficients for the ideal alcohol ternary system. This experiment-based approach showed that Liu's model gives better predictions than that of Krishna and van Baten, although it was only accurate to within 26%. Nonetheless, the multicomponent Darken model in conjunction with self-diffusion measurements from PGSTE-NMR represents an attractive method for rapid estimation of mutual diffusion in multicomponent systems, especially when compared to exhaustive MD simulations.
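    For the binary case, to which both multicomponent forms reduce, the Darken relation estimates the Maxwell-Stefan diffusivity from the self-diffusion coefficients as Đ₁₂ = x₁·D₂,self + x₂·D₁,self; a minimal sketch is below. The multicomponent generalizations of Krishna & van Baten and of Liu et al. differ in how the pair coefficient Đᵢⱼ is built from the self-diffusivities and are not reproduced here; the numbers used are placeholders, not the paper's NMR data.

```python
def darken_binary(x1, d1_self, d2_self):
    """Binary Darken estimate of the Maxwell-Stefan diffusivity D12."""
    x2 = 1.0 - x1
    return x1 * d2_self + x2 * d1_self

# Placeholder self-diffusion coefficients (m^2/s), not measured values.
d12 = darken_binary(x1=0.4, d1_self=2.2e-9, d2_self=0.9e-9)
```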

  20. A first step to compare geodynamical models and seismic observations of the inner core

    NASA Astrophysics Data System (ADS)

    Lasbleis, M.; Waszek, L.; Day, E. A.

    2016-12-01

    Seismic observations have revealed a complex inner core, with lateral and radial heterogeneities at all observable scales. The dominant feature is the east-west hemispherical dichotomy in seismic velocity and attenuation. Several geodynamical models have been proposed to explain the observed structure: convective instabilities, external forces, crystallisation processes or the influence of outer core convection. However, interpreting such geodynamical models in terms of the seismic observations is difficult, and has been performed only for very specific models (Geballe 2013, Lincot 2014, 2016). Here, we propose a common framework for making such comparisons. We have developed a Python code that propagates seismic ray paths through kinematic geodynamical models of the inner core, computing a synthetic seismic data set that can be compared to seismic observations. Following the method of Geballe 2013, we start with the simple model of translation. For this, the seismic velocity is proposed to be a function of the age or initial growth rate of the material (since there is no deformation included in our models); the assumption is reasonable when considering translation, growth and super rotation of the inner core. Using both artificial (random) seismic ray data sets and a real inner core data set (from Waszek et al. 2011), we compare these different models. Our goal is to determine the model which best matches the seismic observations. Preliminary results show that super rotation successfully creates an eastward shift in properties with depth, as has been observed seismically. Neither the growth rate of inner core material nor the relationship between crystal size and seismic velocity is well constrained; consequently, our method does not directly compute seismic travel times. Instead, we use age, growth rate and other parameters as proxies for the seismic properties, which represents a good first step in comparing geodynamical and seismic observations. Ultimately, we aim to release our codes to the broader scientific community, allowing researchers from all disciplines to test their models of inner core growth against seismic observations or to create a kinematic model for the evolution of the inner core that matches new geophysical observations.

  1. Optimization of finite difference forward modeling for elastic waves based on optimum combined window functions

    NASA Astrophysics Data System (ADS)

    Jian, Wang; Xiaohong, Meng; Hong, Liu; Wanqiu, Zheng; Yaning, Liu; Sheng, Gui; Zhiyang, Wang

    2017-03-01

    Full waveform inversion and reverse time migration are active research areas for seismic exploration. Forward modeling in the time domain determines the precision of the results, and numerical solutions by finite differences have been widely adopted as an important mathematical tool for forward modeling. In this article, an optimum combination of window functions was designed based on the finite difference operator, using a truncated approximation of the spatial convolution series in pseudo-spectral space to normalize the outcomes of existing window functions for different orders. The proposed combined window functions not only inherit the characteristics of the individual window functions, providing better truncation results, but also allow the truncation error of the finite difference operator to be controlled manually and visually by adjusting the combinations and analyzing the characteristics of the main and side lobes of the amplitude response. Error levels and elastic forward modeling under the proposed combined scheme were compared with outcomes from conventional window functions and modified binomial windows. Numerical dispersion is significantly suppressed compared with both the modified-binomial-window and conventional finite differences. Numerical simulation verifies the reliability of the proposed method.

  2. An extended Kalman filter for mouse tracking.

    PubMed

    Choi, Hongjun; Kim, Mingi; Lee, Onseok

    2018-05-19

    Animal tracking is an important tool for observing behavior, which is useful in various research areas. Animal specimens can be tracked using dynamic models and observation models that require several types of data. Tracking mice presents several challenges due to their physical characteristics, unpredictable movement, and cluttered environments. Therefore, we propose a reliable method that uses a detection stage and a tracking stage to successfully track mice. The detection stage detects the surface area of the mouse skin, and the tracking stage implements an extended Kalman filter to estimate the state variables of a nonlinear model. The changes in the overall shape of the mouse are tracked using an oval-shaped tracking model to estimate the parameters of the ellipse. An experiment is conducted to demonstrate the performance of the proposed tracking algorithm using six video images showing various types of movement, and the ground truth values for synthetic images are compared to the values generated by the tracking algorithm. A conventional manual tracking method is also applied to compare results across eight experimenters. Furthermore, the effectiveness of the proposed tracking method is demonstrated by applying the tracking algorithm to actual images of mice.
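    For reference, one EKF predict/update cycle for a nonlinear observation model has the familiar form below. The state layout, Jacobians, and noise covariances of the ellipse-parameter tracker are specific to the paper, so this sketch uses a generic observation function h with Jacobian H_jac as stand-ins.

```python
import numpy as np

def ekf_step(x, P, z, F, Q, h, H_jac, R):
    """One extended-Kalman-filter predict/update cycle.

    x, P : state estimate and covariance
    z    : new measurement
    F, Q : (linearized) dynamics matrix and process-noise covariance
    h    : nonlinear observation function; H_jac gives its Jacobian
    R    : measurement-noise covariance
    """
    # Predict.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the observation model linearized at the prediction.
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```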

  3. Multi-atlas label fusion using hybrid of discriminative and generative classifiers for segmentation of cardiac MR images.

    PubMed

    Sedai, Suman; Garnavi, Rahil; Roy, Pallab; Xi Liang

    2015-08-01

    Multi-atlas segmentation first registers each atlas image to the target image and transfers the label of the atlas image to the coordinate system of the target image. The transferred labels are then combined using a label fusion algorithm. In this paper, we propose a novel label fusion method which aggregates discriminative learning and generative modeling for segmentation of cardiac MR images. First, a probabilistic Random Forest classifier is trained as a discriminative model to obtain the prior probability of a label at a given voxel of the target image. Then, a probability distribution of image patches is modeled using a Gaussian Mixture Model for each label, providing the likelihood of the voxel belonging to the label. The final label posterior is obtained by combining the classification score and the likelihood score under Bayes' rule. A comparative study performed on the MICCAI 2013 SATA Segmentation Challenge demonstrates that our proposed hybrid label fusion algorithm is more accurate than five other state-of-the-art label fusion methods. The proposed method obtains Dice similarity coefficients of 0.94 and 0.92 in segmenting the epicardium and endocardium, respectively. Moreover, our label fusion method achieves more accurate segmentation results compared to four other label fusion methods.
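    The fusion rule itself is a per-voxel application of Bayes' rule: the Random Forest supplies the label prior and the per-label GMMs supply the patch likelihood. A minimal numpy/scikit-learn sketch of that combination is given below; the feature extraction, patch definition, and model training that precede it are specific to the paper and are assumed here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.mixture import GaussianMixture

# Assume: rf is a trained RandomForestClassifier over voxel features, and
# gmms[k] is a GaussianMixture trained on patches belonging to label k.

def fuse_label(rf, gmms, voxel_feature, patch):
    """Posterior label probabilities via prior (RF) x likelihood (per-label GMM)."""
    prior = rf.predict_proba(voxel_feature.reshape(1, -1))[0]   # P(label | feature)
    loglik = np.array([gmms[k].score_samples(patch.reshape(1, -1))[0]
                       for k in range(len(prior))])             # log P(patch | label)
    log_post = np.log(prior + 1e-12) + loglik
    post = np.exp(log_post - log_post.max())
    return post / post.sum()                                    # normalized posterior
```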

  4. Developing a Long Short-Term Memory (LSTM) based model for predicting water table depth in agricultural areas

    NASA Astrophysics Data System (ADS)

    Zhang, Jianfeng; Zhu, Yan; Zhang, Xiaoping; Ye, Ming; Yang, Jinzhong

    2018-06-01

    Predicting water table depth over the long term in agricultural areas presents great challenges because these areas have complex and heterogeneous hydrogeological characteristics, boundary conditions, and human activities, with nonlinear interactions among these factors. Therefore, a new time series model based on Long Short-Term Memory (LSTM) was developed in this study as an alternative to computationally expensive physical models. The proposed model is composed of an LSTM layer with a fully connected layer on top of it, with a dropout method applied in the first LSTM layer. The model was applied and evaluated in five sub-areas of the Hetao Irrigation District in arid northwestern China using 14 years of data (2000-2013). The proposed model uses monthly water diversion, evaporation, precipitation, temperature, and time as input data to predict water table depth. A simple but effective standardization method was employed to pre-process the data so that they are on the same scale. The 14 years of data were separated into a training set (2000-2011) and a validation set (2012-2013). As expected, the proposed model achieves higher R2 scores (0.789-0.952) in water table depth prediction than the traditional feed-forward neural network (FFNN), which only reaches relatively low R2 scores (0.004-0.495), showing that the proposed model can preserve and learn from previous information well. Furthermore, the validity of the dropout method and the proposed model's architecture are discussed; the results show that the dropout method can prevent overfitting significantly. In addition, comparisons between the R2 scores of the proposed model and a Double-LSTM model (R2 scores from 0.170 to 0.864) further show that the proposed architecture is reasonable and contributes to a strong learning ability on time series data. Thus, the proposed model can serve as an alternative approach for predicting water table depth, especially in areas where hydrogeological data are difficult to obtain.

  5. A New Model of Teaching Pedagogy in CHISEL for the 21th Century.

    ERIC Educational Resources Information Center

    Huang, Li-yi

    This paper describes and compares six models for teaching second languages developed and adopted since 1840 (grammar-translation, direct, structural, situational, audiolingual, and communicative methods), and proposes a seventh, the cognitive-linguistic method, incorporating Noam Chomsky's theory of learning. The model takes both extralinguistic…

  6. Averaging Models: Parameters Estimation with the R-Average Procedure

    ERIC Educational Resources Information Center

    Vidotto, G.; Massidda, D.; Noventa, S.

    2010-01-01

    The Functional Measurement approach, proposed within the theoretical framework of Information Integration Theory (Anderson, 1981, 1982), can be a useful multi-attribute analysis tool. Compared to the majority of statistical models, the averaging model can account for interaction effects without adding complexity. The R-Average method (Vidotto &…

  7. Models of Quantitative Estimations: Rule-Based and Exemplar-Based Processes Compared

    ERIC Educational Resources Information Center

    von Helversen, Bettina; Rieskamp, Jorg

    2009-01-01

    The cognitive processes underlying quantitative estimations vary. Past research has identified task-contingent changes between rule-based and exemplar-based processes (P. Juslin, L. Karlsson, & H. Olsson, 2008). B. von Helversen and J. Rieskamp (2008), however, proposed a simple rule-based model--the mapping model--that outperformed the…

  8. A Rational Analysis of Rule-Based Concept Learning

    ERIC Educational Resources Information Center

    Goodman, Noah D.; Tenenbaum, Joshua B.; Feldman, Jacob; Griffiths, Thomas L.

    2008-01-01

    This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space--a concept language of logical rules. This article compares the model predictions to human generalization judgments in several…

  9. Non Debye approximation on specific heat of solids

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Ruma; Das, Anamika; Sarkar, A.

    2018-05-01

    A simple non-Debye frequency spectrum is proposed, and the normalized spectrum is compared to the Debye spectrum. The proposed spectrum provides a good account of the low-frequency phonon density of states, which gives a linear temperature variation at low temperature, in contrast to the Debye T³ law. The proposed model also provides a good account of the excess specific heat of nanostructured solids.
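    The contrast with the Debye law can be seen directly from the harmonic specific-heat integral: for the Debye spectrum g(ω) ∝ ω² one recovers C ∝ T³ at low temperature, while a spectrum with a constant low-frequency density of states gives C ∝ T. The block below is a generic illustration of this limit, not the paper's specific spectrum.

```latex
C(T) = k_B \int_0^{\omega_{\max}} g(\omega)
       \left(\frac{\hbar\omega}{k_B T}\right)^{2}
       \frac{e^{\hbar\omega/k_B T}}{\left(e^{\hbar\omega/k_B T}-1\right)^{2}}\,
       d\omega ,
\qquad
g(\omega)\propto\omega^{2}\;\Rightarrow\;C\propto T^{3},
\qquad
g(\omega)\approx\text{const}\;\Rightarrow\;C\propto T .
```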

  10. An augmented classical least squares method for quantitative Raman spectral analysis against component information loss.

    PubMed

    Zhou, Yan; Cao, Hui

    2013-01-01

    We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that is robust against component information loss. Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment determining analyte concentration using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was used to evaluate the predictive power of the proposed method, and one-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and that its predictive power is comparable to that of PLS or PCR.
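    In classical least squares the training spectra D are modeled as D = CK; augmentation appends extra columns to C so that signal from unmodeled components is absorbed rather than biasing the analyte estimates. A hedged numpy sketch of this calibrate/predict cycle is shown below; the selection of low-correlation augmenting signals via the RMSECV curve is the paper's procedure and is only stubbed here as a given matrix.

```python
import numpy as np

def acls_calibrate(D_train, C_train, augment):
    """CLS calibration with the concentration matrix augmented by extra columns.

    D_train : (n_samples, n_wavenumbers) training spectra
    C_train : (n_samples, n_components) known concentrations
    augment : (n_samples, n_extra) surrogate columns for unknown components
    """
    C_aug = np.hstack([C_train, augment])
    K, *_ = np.linalg.lstsq(C_aug, D_train, rcond=None)  # pure-component spectra
    return K

def acls_predict(D_new, K, n_components):
    """Estimate concentrations of the modeled components for new spectra."""
    C_hat, *_ = np.linalg.lstsq(K.T, D_new.T, rcond=None)
    return C_hat.T[:, :n_components]
```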

  11. A Lumped Computational Model for Sodium Sulfur Battery Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Fan

    Due to the cost of materials and time-consuming testing procedures, development of new batteries is a slow and expensive practice. The purpose of this study is to develop a computational model and assess the capabilities of such a model designed to aid in the design and control of sodium sulfur batteries. To this end, a transient lumped computational model derived from an integral analysis of the transport of species, energy, and charge throughout the battery has been developed. The computational processes are coupled with the use of Faraday's law, and solutions for the species concentrations, electrical potential, and current are produced in a time-marching fashion. Properties required for solving the governing equations are calculated and updated as a function of time based on the composition of each control volume. The proposed model is validated against multi-dimensional simulations and experimental results from the literature, and simulation results using the proposed model are presented and analyzed. The computational model and electrochemical model used to solve the equations for the lumped model are compared with similar ones found in the literature. The results obtained from the current model compare favorably with those from experiments and other models.

  12. Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset

    NASA Astrophysics Data System (ADS)

    Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi

    2017-11-01

    Although various visual tracking algorithms have been proposed in the last two to three decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are input to the visual models, this may lead to redundancy, causing low efficiency, and ambiguity, causing poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" is avoided while the accuracy of the visual model is improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked against a number of state-of-the-art approaches. The comprehensive experiments demonstrate the efficacy of the proposed methodology.

  13. PSO-MISMO modeling strategy for multistep-ahead time series prediction.

    PubMed

    Bao, Yukun; Xiong, Tao; Hu, Zhongyi

    2014-05-01

    Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominant strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.

  14. Adaptive time-variant models for fuzzy-time-series forecasting.

    PubMed

    Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching

    2010-12-01

    A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.

  15. Plane stress problems using hysteretic rigid body spring network models

    NASA Astrophysics Data System (ADS)

    Christos, Sofianos D.; Vlasis, Koumousis K.

    2017-10-01

    In this work, a discrete numerical scheme is presented capable of modeling the hysteretic behavior of 2D structures. Rigid Body Spring Network (RBSN) models, first proposed by Kawai (Nucl Eng Des 48(1):29-207, 1978), are extended to account for hysteretic elastoplastic behavior. Discretization is based on Voronoi tessellation, as proposed specifically for RBSN models to ensure uniformity. As a result, the structure is discretized into convex polygons that form the discrete rigid bodies of the model. These are connected by three zero-length (i.e., single-node) springs at the middle of their common facets. The springs follow the smooth hysteretic Bouc-Wen model, which efficiently incorporates classical plasticity with no direct reference to a yield surface. Numerical results for both static and dynamic loadings are presented, which validate the proposed simplified spring-mass formulation. In addition, they verify the model's applicability in determining primarily the displacement field and plastic zones compared to the standard elastoplastic finite element method.
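    For reference, the Bouc-Wen hysteretic law governing each spring can be written in its standard rate form, where z is the hysteretic variable and A, β, γ, n shape the loops; the restoring force mixes the elastic and hysteretic parts through a post-to-pre-yield stiffness ratio a. The symbols follow common usage for this model, not necessarily the paper's notation.

```latex
\dot{z} = A\,\dot{x} - \beta\,|\dot{x}|\,|z|^{\,n-1} z - \gamma\,\dot{x}\,|z|^{\,n},
\qquad
F(t) = a\,k\,x(t) + (1-a)\,k\,z(t).
```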

  16. Comparing and combining biomarkers as principle surrogates for time-to-event clinical endpoints.

    PubMed

    Gabriel, Erin E; Sachs, Michael C; Gilbert, Peter B

    2015-02-10

    Principal surrogate endpoints are useful as targets for phase I and II trials. In many recent trials, multiple post-randomization biomarkers are measured. However, few statistical methods exist for comparison of or combination of biomarkers as principal surrogates, and none of these methods to our knowledge utilize time-to-event clinical endpoint information. We propose a Weibull model extension of the semi-parametric estimated maximum likelihood method that allows for the inclusion of multiple biomarkers in the same risk model as multivariate candidate principal surrogates. We propose several methods for comparing candidate principal surrogates and evaluating multivariate principal surrogates. These include the time-dependent and surrogate-dependent true and false positive fraction, the time-dependent and the integrated standardized total gain, and the cumulative distribution function of the risk difference. We illustrate the operating characteristics of our proposed methods in simulations and outline how these statistics can be used to evaluate and compare candidate principal surrogates. We use these methods to investigate candidate surrogates in the Diabetes Control and Complications Trial. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Affective topic model for social emotion detection.

    PubMed

    Rao, Yanghui; Li, Qing; Wenyin, Liu; Wu, Qingyuan; Quan, Xiaojun

    2014-10-01

    The rapid development of social media services has been a great boon for the communication of emotions through blogs, microblogs/tweets, instant-messaging tools, news portals, and so forth. This paper is concerned with the detection of emotions evoked in a reader by social media. Compared to classical sentiment analysis conducted from the writer's perspective, analysis from the reader's perspective can be more meaningful when applied to social media. We propose an affective topic model with the intention to bridge the gap between social media materials and a reader's emotions by introducing an intermediate layer. The proposed model can be used to classify the social emotions of unlabeled documents and to generate a social emotion lexicon. Extensive evaluations using real-world data validate the effectiveness of the proposed model for both these applications. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. PRIM: An Efficient Preconditioning Iterative Reweighted Least Squares Method for Parallel Brain MRI Reconstruction.

    PubMed

    Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou

    2018-02-08

    The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. While the joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimizations for the JTV model reach only a sublinear convergence rate, in this paper we squeeze out more performance by proposing a linearly convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
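    Iteratively reweighted least squares replaces a non-quadratic penalty by a sequence of weighted quadratic problems, solving a reweighted normal system each pass. The sketch below shows the textbook version for an ℓ1-type objective as a stand-in; the JTV objective, the preconditioner, and the linear-convergence machinery of the paper are beyond this illustration.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-6):
    """Minimize ||Ax - b||_1 via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]      # plain least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)      # weights from current residuals
        WA = A * w[:, None]                       # row-scaled design matrix, W A
        x = np.linalg.solve(A.T @ WA, WA.T @ b)   # weighted normal equations
    return x
```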

  19. Construction and identification of a D-Vine model applied to the probability distribution of modal parameters in structural dynamics

    NASA Astrophysics Data System (ADS)

    Dubreuil, S.; Salaün, M.; Rodriguez, E.; Petitjean, F.

    2018-01-01

    This study investigates the construction and identification of the probability distribution of random modal parameters (natural frequencies and effective parameters) in structural dynamics. As these parameters present various types of dependence structures, the retained approach is based on pair copula construction (PCC). A literature review leads us to choose a D-Vine model for the construction of modal parameter probability distributions. Identification of this model is based on likelihood maximization, which makes it sensitive to the dimension of the distribution, namely the number of considered modes in our context. To this end, a mode selection preprocessing step is proposed. It allows the selection of the relevant random modes for a given transfer function. The second point addressed in this study concerns the choice of the D-Vine model, which is not uniquely defined. Two strategies are proposed and compared: the first is based on the context of the study, whereas the second is purely based on statistical considerations. Finally, the proposed approaches are numerically studied and compared with respect to their capabilities, first in identifying the probability distribution of random modal parameters and second in estimating the 99% quantiles of some transfer functions.

  20. An improved shuffled frog leaping algorithm based evolutionary framework for currency exchange rate prediction

    NASA Astrophysics Data System (ADS)

    Dash, Rajashree

    2017-11-01

    Forecasting the purchasing power of one currency with respect to another is a perennial topic in financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need for simpler and more efficient models with better predictive capability. In this paper, an evolutionary framework is proposed using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) covering the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and the particle swarm optimization algorithm. Analysis of the results suggests that the proposed model developed using the ISFL algorithm with the CEFLANN network is a promising predictor for currency exchange rates compared to the other models included in the study.

  1. Reduction of initial shock in decadal predictions using a new initialization strategy

    NASA Astrophysics Data System (ADS)

    He, Yujun; Wang, Bin; Liu, Mimi; Liu, Li; Yu, Yongqiang; Liu, Juanjuan; Li, Ruizhe; Zhang, Cheng; Xu, Shiming; Huang, Wenyu; Liu, Qun; Wang, Yong; Li, Feifei

    2017-08-01

    A novel full-field initialization strategy based on the dimension-reduced projection four-dimensional variational data assimilation (DRP-4DVar) is proposed to alleviate the well-known initial shock occurring in the early years of decadal predictions. It generates consistent initial conditions that best fit the monthly mean oceanic analysis data along the coupled model trajectory in 1-month windows. Three indices to measure the initial shock intensity are also proposed. Results indicate that this method does reduce the initial shock in decadal predictions by the Flexible Global Ocean-Atmosphere-Land System model, Grid-point version 2 (FGOALS-g2), compared with the three-dimensional variational data assimilation-based nudging full-field initialization for the same model, and that it is comparable to or even better than the initialization strategies used by other models in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Better hindcasts of global mean surface air temperature anomalies can be obtained than in other FGOALS-g2 experiments. Due to the good model response to external forcing and the reduction of initial shock, higher decadal prediction skill is achieved than in other CMIP5 models.

  2. A novel 2.5D finite difference scheme for simulations of resistivity logging in anisotropic media

    NASA Astrophysics Data System (ADS)

    Zeng, Shubin; Chen, Fangzhou; Li, Dawei; Chen, Ji; Chen, Jiefu

    2018-03-01

    The objective of this study is to develop a method to model 3D resistivity well logging problems in a 2D formation with anisotropy, known as 2.5D modeling. The traditional 1D forward modeling extensively used in practice lacks the capability of modeling 2D formations. A 2.5D finite difference method (FDM) solving all the electric and magnetic field components simultaneously is proposed. Compared to previous 2.5D FDM schemes, this method is more straightforward in modeling fully anisotropic media and easier to implement. A Fourier transform is essential to this FDM scheme, and by employing a Gauss-Legendre (GL) quadrature rule the computational time of this step can be greatly reduced. In the numerical examples, we first demonstrate the validity of the FDM scheme with the GL rule by comparing with 1D forward modeling for layered anisotropic problems, and then we model a complicated 2D formation case and find that the proposed 2.5D FD scheme is much more efficient than 3D numerical methods.
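
    The record does not spell out the transform step; purely as an illustration of how a Gauss-Legendre rule cuts the cost of such an integral, here is a hypothetical sketch (integrand, interval, and node count are assumptions):

    ```python
    import numpy as np

    def inverse_fourier_gl(F, y, K=40.0, n_nodes=64):
        """Approximate (1/pi) * integral_0^K F(k) cos(k*y) dk with
        Gauss-Legendre nodes mapped from [-1, 1] onto [0, K]."""
        nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
        k = 0.5 * K * (nodes + 1.0)
        w = 0.5 * K * weights
        return np.sum(w * F(k) * np.cos(k * y)) / np.pi

    # Example with a smooth, decaying spectrum (illustrative only):
    value = inverse_fourier_gl(lambda k: np.exp(-k), y=1.0)
    ```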

  3. A Model for the Determination of the Costs of Special Education as Compared with That for General Education. Appendix: Part 2.

    ERIC Educational Resources Information Center

    Ernst and Ernst, Chicago, IL.

    Part 2 of the appendix to "A Model for the Determination of the Costs of Special Education as Compared with That for General Education" contains information on using 10-minute units of service measure in Ernstville, a hypothetical school district conceived to illustrate the operation of a proposed cost accounting system. (LH)

  4. A Model for the Determination of the Costs of Special Education as Compared with That for General Education. Appendix: Part 1.

    ERIC Educational Resources Information Center

    Ernst and Ernst, Chicago, IL.

    Part 1 of the appendix to "A Model for the Determination of the Costs of Special Education as Compared with That for General Education" contains comprehensive descriptive and statistical information on Ernstville, a hypothetical school district conceived to illustrate the operation of a proposed cost accounting system. Included are sections on…

  5. Topology preserving non-rigid image registration using time-varying elasticity model for MRI brain volumes.

    PubMed

    Ahmad, Sahar; Khan, Muhammad Faisal

    2015-12-01

    In this paper, we present a new non-rigid image registration method that imposes a topology preservation constraint on the deformation. We propose to incorporate a time-varying elasticity model into the deformable image matching procedure and to constrain the Jacobian determinant of the transformation over the entire image domain. The motion of elastic bodies is governed by a hyperbolic partial differential equation, generally termed the elastodynamics wave equation, which we propose to use as a deformation model. We carried out clinical image registration experiments on 3D magnetic resonance brain scans from the IBSR database. The results of the proposed registration approach, in terms of Kappa index and relative overlap computed over the subcortical structures, were compared against existing topology-preserving non-rigid image registration methods and a non-topology-preserving variant of our proposed registration scheme. The Jacobian determinant maps obtained with our proposed registration method were qualitatively and quantitatively analyzed. The results demonstrated that the proposed scheme provides good registration accuracy with smooth transformations, thereby guaranteeing the preservation of topology. Copyright © 2015 Elsevier Ltd. All rights reserved.
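
    As a concrete illustration of the topology check described above, the hedged sketch below computes the Jacobian determinant map of a 3D displacement field by finite differences; unit voxel spacing is assumed, and strictly positive values indicate a locally invertible (topology-preserving) transformation.

    ```python
    import numpy as np

    def jacobian_determinant(disp):
        """disp: displacement field of shape (3, X, Y, Z); the
        transformation is identity + displacement. Returns the
        determinant of J = I + du/dx at every voxel."""
        grads = [np.gradient(disp[i]) for i in range(3)]   # du_i/dx_j
        J = np.empty(disp.shape[1:] + (3, 3))
        for i in range(3):
            for j in range(3):
                J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
        return np.linalg.det(J)                            # <= 0 means folding
    ```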

  6. Quasi-Chemical PC-SAFT: An Extended Perturbed Chain-Statistical Associating Fluid Theory for Lattice-Fluid Mixtures.

    PubMed

    Parvaneh, Khalil; Shariati, Alireza

    2017-09-07

    In this study, a new modification of the perturbed chain-statistical associating fluid theory (PC-SAFT) has been proposed by incorporating the lattice fluid theory of Guggenheim as an additional term to the original PC-SAFT terms. As the proposed model has one more term than PC-SAFT, a new mixing rule has been developed especially for the additional term, while for the conventional PC-SAFT terms the one-fluid mixing rule is used. In order to evaluate the proposed model, vapor-liquid equilibria were estimated for binary CO2 mixtures with 16 different ionic liquids (ILs) of the 1-alkyl-3-methylimidazolium family with various anions consisting of bis(trifluoromethylsulfonyl)imide, hexafluorophosphate, tetrafluoroborate, and trifluoromethanesulfonate. For a comprehensive comparison, three different modes (different adjustable parameters) of the proposed model were compared with conventional PC-SAFT. Results indicate that the proposed modification of the PC-SAFT EoS is generally more reliable than conventional PC-SAFT in all three proposed modes of vapor-liquid equilibria, giving good agreement with literature data.

  7. Adaptive estimation of state of charge and capacity with online identified battery model for vanadium redox flow battery

    NASA Astrophysics Data System (ADS)

    Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria

    2016-11-01

    Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time-varying with operating-condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel extended Kalman filter (EKF)-based joint estimator is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to substantial improvement in computational efficiency and numerical stability. Lab-scale experiments on a vanadium redox flow battery show that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
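
    The record names recursive least squares for the online parameter adaptation; a generic RLS update with a forgetting factor looks like the following hedged sketch (not the paper's implementation, and the regressor definition is model-specific):

    ```python
    import numpy as np

    class RecursiveLeastSquares:
        """Textbook RLS with exponential forgetting."""
        def __init__(self, n_params, lam=0.99):
            self.theta = np.zeros(n_params)     # parameter estimate
            self.P = 1e3 * np.eye(n_params)     # estimate covariance
            self.lam = lam                      # forgetting factor

        def update(self, phi, y):
            """phi: regressor vector, y: measured output."""
            k = self.P @ phi / (self.lam + phi @ self.P @ phi)    # gain
            self.theta = self.theta + k * (y - phi @ self.theta)  # innovation
            self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
            return self.theta
    ```

    The decoupling in the paper amounts to feeding the RLS parameter estimates into a separate EKF for SOC and capacity, rather than estimating everything in one augmented filter.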

  8. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    PubMed

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that imputes missing values and then applies variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015; the two datasets are concatenated by date into an integrated research dataset. The proposed time-series forecasting model has three main steps. First, this study applies five imputation methods to fill in missing values rather than simply deleting them. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared against the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection helps the five forecast methods used here improve their forecasting capability.
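
    A minimal, hypothetical version of that three-step pipeline (imputation, variable selection, Random Forest) might look as follows; the data, the chosen imputer, and the selection step are all stand-ins:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.impute import SimpleImputer

    rng = np.random.default_rng(0)
    X = rng.random((200, 8))                     # stand-in daily features
    X[rng.random(X.shape) < 0.05] = np.nan       # synthetic missing values
    y = rng.random(200)                          # stand-in water levels

    X_filled = SimpleImputer(strategy="mean").fit_transform(X)  # one of several imputers
    X_selected = X_filled[:, :5]                 # placeholder for factor-analysis selection
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_selected, y)
    ```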

  9. Sorting protein decoys by machine-learning-to-rank

    PubMed Central

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-01-01

    Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods, and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and that the quasi single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967

  10. Sorting protein decoys by machine-learning-to-rank.

    PubMed

    Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen

    2016-08-17

    Much progress has been made in protein structure prediction during the last few decades. As the predicted models can span a broad accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods, and quasi single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms other methods whose outputs are taken as features of the proposed method, and that the quasi single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset.

  11. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed that retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broadband, near-field array model is proposed. It takes array gain and phase perturbations into account, is based on the actual positions of the elements, and can be used with arbitrary planar array geometries. Second, a subspace model-error estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-error estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and element positions, with high accuracy, and its performance improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms together compose the robust sound source localization approach. The more accurate steering vectors they provide can feed further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
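
    For orientation, a bare-bones MUSIC pseudospectrum (without the weighting and the model-error calibration that distinguish W2D-MUSIC) can be sketched as follows; the steering vectors are assumed to come from the calibrated array model:

    ```python
    import numpy as np

    def music_spectrum(R, steering_vectors, n_sources):
        """R: array covariance matrix; steering_vectors: iterable of
        candidate steering vectors over the 2D search grid."""
        _, V = np.linalg.eigh(R)                    # eigenvalues ascending
        En = V[:, : R.shape[0] - n_sources]         # noise subspace
        spectrum = [1.0 / np.linalg.norm(En.conj().T @ a) ** 2
                    for a in steering_vectors]
        return np.array(spectrum)                   # peaks at source locations
    ```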

  12. Accuracy comparison among different machine learning techniques for detecting malicious codes

    NASA Astrophysics Data System (ADS)

    Narang, Komal

    2016-03-01

    In this paper, a machine-learning-based model for malware detection is proposed. It can detect newly released malware, i.e., zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM), and Neural Network classifiers for detecting malicious code are compared within the proposed model. In the experiment, 400 benign files, 100 system files, and 500 malicious files were used to construct the model. The model yields its best accuracy, 88.9%, when a neural network is used as the classifier, achieving 95% sensitivity and 82.8% specificity.

  13. A coupling method for a cardiovascular simulation model which includes the Kalman filter.

    PubMed

    Hasegawa, Yuki; Shimayoshi, Takao; Amano, Akira; Matsuda, Tetsuya

    2012-01-01

    Multi-scale models of the cardiovascular system provide new insight that was unavailable with in vivo and in vitro experiments. For the cardiovascular system, multi-scale simulations provide a valuable perspective in analyzing the interaction of three phenomena occurring at different spatial scales: circulatory hemodynamics, ventricular structural dynamics, and myocardial excitation-contraction. In order to simulate these interactions, multi-scale cardiovascular simulation systems couple models that simulate the different phenomena. However, coupling methods require a significant amount of computation, since a system of nonlinear equations must be solved at each timestep. Therefore, we propose a coupling method that decreases the amount of computation by using the Kalman filter. In our method, the Kalman filter calculates approximations of the solution to the system of nonlinear equations at each timestep; these approximations are then used as initial values for solving the system. The proposed method decreases the number of iterations required by 94.0% compared to the conventional strong coupling method, and requires 49.4% fewer iterations than a smoothing-spline predictor.
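
    Reduced to its essentials, the idea is that a cheap predictor supplies the starting point for each timestep's nonlinear solve, so Newton-type iteration needs fewer steps. The generic sketch below accepts any predicted initial guess; in the paper that guess comes from a Kalman filter:

    ```python
    import numpy as np

    def newton_solve(f, jac, x0, tol=1e-10, max_iter=50):
        """Solve f(x) = 0 starting from the predicted x0.
        A better x0 (e.g. from a Kalman filter) means fewer iterations."""
        x, iters = np.array(x0, dtype=float), 0
        while np.linalg.norm(f(x)) > tol and iters < max_iter:
            x = x - np.linalg.solve(jac(x), f(x))   # Newton step
            iters += 1
        return x, iters
    ```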

  14. Robust sensor fault detection and isolation of gas turbine engines subjected to time-varying parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Pourbabaee, Bahareh; Meskin, Nader; Khorasani, Khashayar

    2016-08-01

    In this paper, a novel robust sensor fault detection and isolation (FDI) strategy using the multiple-model (MM) approach is proposed that remains robust with respect to both time-varying parameter uncertainties and process and measurement noise in all channels. The scheme is composed of robust Kalman filters (RKF) constructed for multiple piecewise linear (PWL) models obtained at various operating points of an uncertain nonlinear system. The parameter uncertainty is modeled by a time-varying norm-bounded admissible structure that affects all the PWL state space matrices. The robust Kalman filter gain matrices are designed by solving two algebraic Riccati equations (AREs) that are expressed as two linear matrix inequality (LMI) feasibility conditions. The proposed multiple-RKF-based FDI scheme is simulated for a single-spool gas turbine engine to diagnose various sensor faults despite the presence of parameter uncertainties and process and measurement noise. Our comparative studies confirm the superiority of the proposed FDI method over methods available in the literature.

  15. New 2D diffraction model and its applications to terahertz parallel-plate waveguide power splitters

    PubMed Central

    Zhang, Fan; Song, Kaijun; Fan, Yong

    2017-01-01

    A two-dimensional (2D) diffraction model for the calculation of diffraction fields in 2D space, and its application to terahertz parallel-plate waveguide power splitters, are proposed in this paper. Compared with the Huygens-Fresnel principle in three-dimensional (3D) space, the proposed model provides an approximate analytical expression for the diffraction field in 2D space, where the field is regarded as a superposition integral. The calculated results obtained from the proposed diffraction model agree well with those from the software HFSS, which is based on the finite element method (FEM). Based on the proposed 2D diffraction model, two parallel-plate waveguide power splitters are presented. The splitters consist of a transmitting horn antenna, reflectors, and a receiving antenna array. The reflector is cylindrical-parabolic with superimposed surface relief to efficiently couple the transmitted wave into the receiving antenna array, and it acts as a computer-generated hologram to match the transformed field to the receiving antenna aperture field. The power splitters were optimized by a modified real-coded genetic algorithm. The computed results for the splitters, which agree well with those obtained by HFSS, verify the novel design method for power splitters and show the good application prospects of the proposed 2D diffraction model. PMID:28181514

  16. SU-E-I-25: Performance Evaluation of a Proposed CMOS-Based X-Ray Detector Using Linear Cascade Model Analysis.

    PubMed

    Jain, A; Bednarek, D; Rudin, S

    2012-06-01

    The need for high-resolution, dynamic x-ray imaging capability for neurovascular applications has put an ever increasing demand on x-ray detector technology. Present state-of-the-art detectors such as flat panels have limited resolution and noise performance. A linear cascade model analysis was used to estimate the theoretical performance of a proposed CMOS-based detector. The proposed detector was assumed to have a 300-micron-thick HL-type CsI phosphor, 35-micron pixels, a variable-gain light image intensifier (LII), and 400 electrons of readout noise; its CMOS sensor is coupled to the LII, which views the output of the CsI phosphor. For the analysis, the whole imaging chain was divided into individual stages, each characterized by one of the basic processes (stochastic/deterministic blurring, binomial selection, quantum gain, additive noise). Standard linear cascade modeling was used to propagate signal and noise through the stages, and an RQA5 spectrum was assumed. The gain, blurring, or transmission of the different stages was either measured or taken from manufacturers' specifications. The theoretically calculated MTF and DQE for the proposed detector were compared with those of the high-resolution, high-sensitivity Micro-Angio Fluoroscope (MAF), the predecessor of the proposed detector. Signal and noise for each of the 19 stages in the complete imaging chain were calculated and showed improved performance. For example, at 5 cycles/mm the MTF and DQE were 0.08 and 0.28, respectively, for the CMOS detector compared to 0.05 and 0.07 for the MAF detector. The proposed detector will have improved MTF and DQE and slimmer physical dimensions due to the elimination of the large fiber-optic taper used in the MAF. Once operational, the proposed CMOS detector will serve as a further improvement over standard flat panel detectors, the MAF itself already having received a very positive reception from neurovascular interventionalists. (Support: NIH Grants R01-EB008425 and R01-EB002873, and an equipment grant from Toshiba Medical Systems Corp.) © 2012 American Association of Physicists in Medicine.
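
    The stage-by-stage propagation follows standard linear cascade relations for mean signal and (zero-frequency) noise power; a hedged sketch with illustrative numbers, not the paper's 19 stages:

    ```python
    def gain_stage(q_mean, nps, g, var_g):
        """Cascade relations for a gain stage:
        mean_out = g * mean_in;  NPS_out = g**2 * NPS_in + var_g * mean_in."""
        return g * q_mean, g**2 * nps + var_g * q_mean

    q, nps = 1000.0, 1000.0                               # Poisson input: NPS = mean
    q, nps = gain_stage(q, nps, g=0.8, var_g=0.8 * 0.2)   # binomial selection: var = g(1-g)
    q, nps = gain_stage(q, nps, g=50.0, var_g=200.0)      # quantum gain (assumed variance)
    ```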

  17. Nonlinear quantum Rabi model in trapped ions

    NASA Astrophysics Data System (ADS)

    Cheng, Xiao-Hang; Arrazola, Iñigo; Pedernales, Julen S.; Lamata, Lucas; Chen, Xi; Solano, Enrique

    2018-02-01

    We study the nonlinear dynamics of trapped-ion models far away from the Lamb-Dicke regime. This nonlinearity induces a blockade on the propagation of quantum information along the Hilbert space of the Jaynes-Cummings and quantum Rabi models. We propose to use this blockade as a resource for the dissipative generation of high-number Fock states. Also, we compare the linear and nonlinear cases of the quantum Rabi model in the ultrastrong and deep strong-coupling regimes. Moreover, we propose a scheme to simulate the nonlinear quantum Rabi model in all coupling regimes. This can be done via off-resonant nonlinear red- and blue-sideband interactions in a single trapped ion, yielding applications as a dynamical quantum filter.

  18. Direct estimation of tracer-kinetic parameter maps from highly undersampled brain dynamic contrast enhanced MRI.

    PubMed

    Guo, Yi; Lingala, Sajan Goud; Zhu, Yinghua; Lebel, R Marc; Nayak, Krishna S

    2017-10-01

    The purpose of this work was to develop and evaluate a T1-weighted dynamic contrast enhanced (DCE) MRI methodology in which tracer-kinetic (TK) parameter maps are directly estimated from undersampled (k,t)-space data. The proposed reconstruction involves solving a nonlinear least squares optimization problem that includes explicit use of a full forward model to convert parameter maps to (k,t)-space, utilizing the Patlak TK model. The proposed scheme is compared against an indirect method that creates intermediate images by parallel imaging and compressed sensing prior to TK modeling. Thirteen fully sampled brain tumor DCE-MRI scans with 5-second temporal resolution were retrospectively undersampled at rates R = 20, 40, 60, 80, and 100 for each dynamic frame. TK maps are quantitatively compared based on root mean-squared-error (rMSE) and Bland-Altman analysis. The approach is also applied to four prospectively R = 30 undersampled whole-brain DCE-MRI data sets. In the retrospective study, the proposed method performed statistically better than the indirect method at R ≥ 80 for all 13 cases, restoring TK parameter values with fewer errors in tumor regions of interest, an improvement over a state-of-the-art indirect method. Applied prospectively, the proposed method provided whole-brain, high-resolution TK maps with good image quality. Model-based direct estimation of TK maps from (k,t)-space DCE-MRI data is feasible and is compatible with up to 100-fold undersampling. Magn Reson Med 78:1566-1578, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
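
    The Patlak forward model at the heart of the direct estimation is compact enough to state explicitly; a minimal discretized sketch (uniform temporal sampling assumed):

    ```python
    import numpy as np

    def patlak_forward(Ktrans, vp, Cp, dt):
        """Patlak model: Ct(t) = Ktrans * integral_0^t Cp(s) ds + vp * Cp(t).
        Cp is the sampled arterial input function, dt the frame spacing."""
        return Ktrans * np.cumsum(Cp) * dt + vp * Cp
    ```

    In the proposed reconstruction, this per-voxel model is composed with the rest of the forward model so that the parameter maps are mapped all the way to (k,t)-space.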

  19. A novel convolution-based approach to address ionization chamber volume averaging effect in model-based treatment planning systems

    NASA Astrophysics Data System (ADS)

    Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua

    2015-08-01

    The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
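
    The central operation is easy to sketch: convolve the TPS-calculated profile with a detector response so that both sides of the optimization see the same volume averaging. The Gaussian stand-in and its width below are assumptions, not the CC13's actual response function:

    ```python
    import numpy as np

    def convolve_with_detector(profile, x, fwhm_mm=6.0):
        """profile: calculated dose profile sampled at positions x (mm)."""
        dx = x[1] - x[0]
        sigma = fwhm_mm / 2.355                       # FWHM -> standard deviation
        k = np.arange(-3 * sigma, 3 * sigma + dx, dx)
        kernel = np.exp(-0.5 * (k / sigma) ** 2)
        kernel /= kernel.sum()                        # preserve total signal
        return np.convolve(profile, kernel, mode="same")
    ```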

  20. A Bayesian network approach for modeling local failure in lung cancer

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam

    2011-03-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, no significant improvement has been reported in their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had slightly higher performance compared to the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  1. The Heath Occupational Model.

    ERIC Educational Resources Information Center

    Heath, William E.

    1990-01-01

    Career development programs must identify occupational needs of adults. A model based on Maslow's hierarchy develops occupational questions related to individual motivations (physiology, safety, love, esteem, and self-actualization). Individual needs are then compared with characteristics and benefits of proposed jobs, companies, or careers. (SK)

  2. Interval type-2 fuzzy PID controller for uncertain nonlinear inverted pendulum system.

    PubMed

    El-Bardini, Mohammad; El-Nagar, Ahmad M

    2014-05-01

    In this paper, an interval type-2 fuzzy proportional-integral-derivative controller (IT2F-PID) is proposed for controlling an inverted pendulum on a cart with an uncertain model. The proposed controller is designed using a new type-reduction method that we have proposed, called the simplified type-reduction method. The proposed IT2F-PID controller is able to handle the effect of structural uncertainties due to the structure of the interval type-2 fuzzy logic system (IT2-FLS). The results of the proposed IT2F-PID controller using the new type-reduction method are compared with those of an IT2F-PID controller using the uncertainty-bound method and with a type-1 fuzzy PID controller (T1F-PID). The simulation and practical results show that the performance of the proposed controller is significantly improved compared with the T1F-PID controller. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Sentiments Analysis of Reviews Based on ARCNN Model

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao

    2017-10-01

    Sentiment analysis of product reviews is designed to help customers understand the status of a product. Traditional methods of sentiment analysis rely on a fixed-length input feature vector, which is the performance bottleneck of the basic codec (encoder-decoder) architecture. In this paper, we propose an attention mechanism with a BRNN-CNN model, referred to as the ARCNN model. To capture the semantic relations between words while avoiding the curse of dimensionality, we use the GloVe algorithm to train the vector representations of words. The ARCNN model is then proposed to learn deep features: the BRNN component handles variable-length inputs and preserves time-series information, while the CNN component learns deeper semantic connections. Moreover, the attention mechanism automatically learns from the data how to optimize the allocation of weights. Finally, a softmax classifier completes the sentiment classification of the reviews. Experiments show that the proposed method improves the accuracy of sentiment classification compared with benchmark methods.

  4. Dynamic Price Vector Formation Model-Based Automatic Demand Response Strategy for PV-Assisted EV Charging Stations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias

    A real-time price (RTP)-based automatic demand response (ADR) strategy for a PV-assisted electric vehicle (EV) charging station (PVCS) without vehicle-to-grid is proposed. The charging process is modeled as a dynamic linear program, instead of the usual day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model based on a clustering algorithm is proposed to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and upper bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which makes the DFEDR model suitable for global optimization. Finally, comparative cases articulate the advantages of the developed methods, and the validity of the proposed strategy in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption is verified through simulation scenarios.

  5. Bayesian model selection applied to artificial neural networks used for water resources modeling

    NASA Astrophysics Data System (ADS)

    Kingston, Greer B.; Maier, Holger R.; Lambert, Martin F.

    2008-04-01

    Artificial neural networks (ANNs) have proven to be extremely valuable tools in the field of water resources engineering. However, one of the most difficult tasks in developing an ANN is determining the optimum level of complexity required to model a given problem, as there is no formal systematic model selection method. This paper presents a Bayesian model selection (BMS) method for ANNs that provides an objective approach for comparing models of varying complexity in order to select the most appropriate ANN structure. The approach uses Markov Chain Monte Carlo posterior simulations to estimate the evidence in favor of competing models and, in this study, three known methods for doing this are compared in terms of their suitability for being incorporated into the proposed BMS framework for ANNs. However, it is acknowledged that it can be particularly difficult to accurately estimate the evidence of ANN models. Therefore, the proposed BMS approach for ANNs incorporates a further check of the evidence results by inspecting the marginal posterior distributions of the hidden-to-output layer weights, which unambiguously indicate any redundancies in the hidden layer nodes. The fact that this check is available is one of the greatest advantages of the proposed approach over conventional model selection methods, which do not provide such a test and instead rely on the modeler's subjective choice of selection criterion. The advantages of a total Bayesian approach to ANN development, including training and model selection, are demonstrated on two synthetic and one real world water resources case study.

  6. Application of a coupled smoothed particle hydrodynamics (SPH) and coarse-grained (CG) numerical modelling approach to study three-dimensional (3-D) deformations of single cells of different food-plant materials during drying.

    PubMed

    Rathnayaka, C M; Karunasena, H C P; Senadeera, W; Gu, Y T

    2018-03-14

    Numerical modelling has gained popularity in many science and engineering streams due to its economic feasibility and advanced analytical features compared to conventional experimental and theoretical models. Food drying is one of the areas where numerical modelling is increasingly applied to improve drying process performance and product quality. This investigation applies a three-dimensional (3-D) Smoothed Particle Hydrodynamics (SPH) and Coarse-Grained (CG) numerical approach to predict the morphological changes of different categories of food-plant cells, such as apple, grape, potato and carrot, during drying. To validate the model predictions, experimental findings from in-house experimental procedures (for apple) and from the literature (for grape, potato and carrot) have been utilised. The subsequent comparison indicates that the model predictions demonstrate reasonable agreement with the experimental findings, both qualitatively and quantitatively. In this numerical model, a higher computational accuracy has been maintained by limiting the consistency error to below 1% for all four cell types. The proposed meshfree-based approach is well equipped to predict the morphological changes of plant cellular structures over a wide range of moisture contents (10% to 100% dry basis). Compared to the previous 2-D meshfree-based models developed for plant cell drying, the proposed model can provide more useful insights into morphological behaviour due to its 3-D nature. In addition, the proposed computational modelling approach has high potential to be used as a comprehensive tool in many other investigations of tissue morphology.

  7. An Evaluation of Some Models for Culture-Fair Selection.

    ERIC Educational Resources Information Center

    Petersen, Nancy S.; Novick, Melvin R.

    Models proposed by Cleary, Thorndike, Cole, Linn, Einhorn and Bass, Darlington, and Gross and Su for analyzing bias in the use of tests in a selection strategy are surveyed. Several additional models are also introduced. The purpose is to describe, compare, contrast, and evaluate these models while extracting such useful ideas as may be found in…

  8. Radiation-hardened MRAM-based LUT for non-volatile FPGA soft error mitigation with multi-node upset tolerance

    NASA Astrophysics Data System (ADS)

    Zand, Ramtin; DeMara, Ronald F.

    2017-12-01

    In this paper, we have developed a radiation-hardened non-volatile lookup table (LUT) circuit utilizing spin Hall effect (SHE) magnetic random access memory (MRAM) devices. The design is motivated by modeling the effect of radiation particles striking hybrid complementary metal oxide semiconductor (CMOS)/spin-based circuits, and the resistive behavior of SHE-MRAM devices, via established and precise physics equations. The models developed are leveraged in the SPICE circuit simulator to verify the functionality of the proposed design. The proposed hardening technique is based on using feedback transistors, as well as increasing the radiation capacity of the sensitive nodes. Simulation results show that our proposed LUT circuit can achieve multiple-node-upset (MNU) tolerance with more than 38% and 60% power-delay product improvement, as well as 26% and 50% reductions in device count, compared to previous energy-efficient radiation-hardened LUT designs. Finally, we have performed a process variation analysis showing that the MNU immunity of our proposed circuit comes at the cost of increased susceptibility to transistor and MRAM variations compared to an unprotected LUT design.

  9. Zero-inflated Conway-Maxwell Poisson Distribution to Analyze Discrete Data.

    PubMed

    Sim, Shin Zhu; Gupta, Ramesh C; Ong, Seng Huat

    2018-01-09

    In this paper, we study the zero-inflated Conway-Maxwell Poisson (ZICMP) distribution and develop a regression model. Score and likelihood ratio tests are also implemented for testing the inflation/deflation parameter. Simulation studies are carried out to examine the performance of these tests. A data example is presented to illustrate the concepts. In this example, the proposed model is compared to the well-known zero-inflated Poisson (ZIP) and zero-inflated generalized Poisson (ZIGP) regression models. It is shown that the fit by ZICMP is comparable to or better than these models.
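
    For readers unfamiliar with the distribution, the ZICMP probability mass function mixes a point mass at zero with a Conway-Maxwell Poisson component; a direct, hedged implementation with a truncated normalizing series:

    ```python
    import math

    def zicmp_pmf(x, p, lam, nu, max_terms=200):
        """P(X=0) = p + (1-p)/Z;  P(X=x) = (1-p) * lam**x / (x!)**nu / Z,
        where Z = sum_j lam**j / (j!)**nu normalizes the CMP part."""
        Z = sum(lam**j / math.factorial(j)**nu for j in range(max_terms))
        cmp_px = lam**x / math.factorial(x)**nu / Z
        return p + (1 - p) * cmp_px if x == 0 else (1 - p) * cmp_px
    ```

    Setting nu = 1 recovers the ZIP model, which is what makes the comparison in the example natural.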

  10. Non-parametric identification of multivariable systems: A local rational modeling approach with application to a vibration isolation benchmark

    NASA Astrophysics Data System (ADS)

    Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom

    2018-05-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed, with recently developed SISO algorithms recovered as a special case. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly fewer parameters compared with alternative approaches.

  11. Plasma sheath effects on ion collection by a pinhole

    NASA Technical Reports Server (NTRS)

    Herr, Joel L.; Snyder, David B.

    1993-01-01

    This work presents tables to assist in the evaluation of pinhole collection effects on spacecraft. These tables summarize results of a computer model which tracks particle trajectories through a simplified electric field in the plasma sheath. A technique is proposed to account for plasma sheath effects in the application of these results and scaling rules are proposed to apply the calculations to specific situations. This model is compared to ion current measurements obtained by another worker, and the agreement is very good.

  12. On the importance of incorporating sampling weights in ...

    EPA Pesticide Factsheets

    Occupancy models are used extensively to assess wildlife-habitat associations and to predict species distributions across large geographic regions. Occupancy models were developed as a tool to properly account for imperfect detection of a species. Current guidelines on survey design requirements for occupancy models focus on the number of sample units and the pattern of revisits to a sample unit within a season. We focus on the sampling design, or how the sample units are selected in geographic space (e.g., stratified, simple random, unequal probability, etc.). In a probability design, each sample unit has a sample weight which quantifies the number of sample units it represents in the finite (oftentimes areal) sampling frame. We demonstrate the importance of including sampling weights in occupancy model estimation when the design is not a simple random sample or equal probability design. We assume a finite areal sampling frame as proposed for a national bat monitoring program. We compare several unequal and equal probability designs and varying sampling intensity within a simulation study. We found that the traditional single-season occupancy model produced biased estimates of occupancy and lower confidence interval coverage rates compared to occupancy models that accounted for the sampling design. We also discuss how our findings inform the analyses proposed for the nascent North American Bat Monitoring Program and other collaborative synthesis efforts that propose…

  13. A novel Bayesian respiratory motion model to estimate and resolve uncertainty in image-guided cardiac interventions.

    PubMed

    Peressutti, Devis; Penney, Graeme P; Housden, R James; Kolbitsch, Christoph; Gomez, Alberto; Rijkhorst, Erik-Jan; Barratt, Dean C; Rhode, Kawal S; King, Andrew P

    2013-05-01

    In image-guided cardiac interventions, respiratory motion causes misalignments between the pre-procedure roadmap of the heart used for guidance and the intra-procedure position of the heart, reducing the accuracy of the guidance information and leading to potentially dangerous consequences. We propose a novel technique for motion-correcting the pre-procedural information that combines a probabilistic MRI-derived affine motion model with intra-procedure real-time 3D echocardiography (echo) images in a Bayesian framework. The probabilistic model incorporates a measure of confidence in its motion estimates which enables resolution of the potentially conflicting information supplied by the model and the echo data. Unlike models proposed so far, our method allows the final motion estimate to deviate from the model-produced estimate according to the information provided by the echo images, so adapting to the complex variability of respiratory motion. The proposed method is evaluated using gold-standard MRI-derived motion fields and simulated 3D echo data for nine volunteers and real 3D live echo images for four volunteers. The Bayesian method is compared to 5 other motion estimation techniques and results show mean/max improvements in estimation accuracy of 10.6%/18.9% for simulated echo images and 20.8%/41.5% for real 3D live echo data, over the best comparative estimation method. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing

    NASA Astrophysics Data System (ADS)

    Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin

    2017-06-01

    This paper presents an analytical model of a resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in the square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity is established as a function of the device structure parameters and encapsulation pressure. The developed analytical model has been verified by comparing calculated results with experimental results; the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.

  15. Research on Fault Rate Prediction Method of T/R Component

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu

    2017-07-01

    T/R components are an important part of large phased array radar antenna arrays; because of their large number and high fault rate, fault prediction for them is of great significance. Aiming at the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original model, introduces an optimization factor to optimize the background value, and adds a linear form of the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the method proposed in this paper has higher accuracy, is simple to solve, and has a wider scope of application.
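
    As background, the classic GM(1,1) that the paper improves on fits dx1/dt + a*x1 = b on the accumulated series; the sketch below uses the standard 0.5-weighted background value rather than the optimized factor proposed in the paper:

    ```python
    import numpy as np

    def gm11_forecast(x0, n_ahead=3):
        """x0: original non-negative series; returns n_ahead forecasts."""
        x0 = np.asarray(x0, dtype=float)
        x1 = np.cumsum(x0)                              # accumulated series
        z = 0.5 * (x1[1:] + x1[:-1])                    # background values
        B = np.column_stack([-z, np.ones(len(z))])
        a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
        k = np.arange(len(x0) + n_ahead)
        x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
        return np.diff(x1_hat)[len(x0) - 1:]            # back to original scale
    ```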

  16. Soil erosion assessment on hillslope of GCE using RUSLE model

    NASA Astrophysics Data System (ADS)

    Islam, Md. Rabiul; Jaafar, Wan Zurina Wan; Hin, Lai Sai; Osman, Normaniza; Din, Moktar Aziz Mohd; Zuki, Fathiah Mohamed; Srivastava, Prashant; Islam, Tanvir; Adham, Md. Ibrahim

    2018-06-01

    A new method for obtaining the C factor (i.e., the vegetation cover and management factor) of the RUSLE model is proposed. The method focuses on deriving the C factor from vegetation density to obtain a more reliable erosion prediction. Soil erosion on the hillslopes along highways is one of the major problems in Malaysia, which is exposed to a relatively high amount of annual rainfall due to two different monsoon seasons. As vegetation cover is one of the important factors in the RUSLE model, a new method that accounts for vegetation density is proposed in this study. A hillslope near the Guthrie Corridor Expressway (GCE), Malaysia, is chosen as the experimental site, whereby eight square plots of size 8 × 8 m and 5 × 5 m are set up. The vegetation density on these plots is measured by analyzing the captured images and then linking the C factor to the measured vegetation density using several established formulas. Finally, erosion prediction is computed with the RUSLE model on a Geographical Information System (GIS) platform. The C factor obtained by the proposed method is compared with that of the Malaysian soil erosion guideline, and the predicted erosion is determined for both C values. Results show that the C value from the proposed method varies from 0.0162 to 0.125, lower than the guideline C value of 0.8. Accordingly, the predicted erosion computed from the proposed C value is between 0.410 and 3.925 t ha^-1 yr^-1, compared to the range of 9.367 to 34.496 t ha^-1 yr^-1 based on the C value of 0.8. It can be concluded that the proposed method of obtaining a reasonable C value is acceptable, as the computed predicted erosion falls in the very low zone (less than 10 t ha^-1 yr^-1), whereas the prediction based on the guideline classifies the study area as a low erosion zone (between 10 and 50 t ha^-1 yr^-1).
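
    Because RUSLE is multiplicative, the impact of the C factor can be read off directly; a tiny sketch with illustrative values for the remaining factors (not the study's actual R, K, or LS):

    ```python
    # RUSLE: A = R * K * LS * C * P, with A in t/ha/yr.
    def rusle_soil_loss(R, K, LS, C, P=1.0):
        return R * K * LS * C * P

    site = dict(R=9000.0, K=0.05, LS=1.2)          # assumed site factors
    proposed = rusle_soil_loss(C=0.0162, **site)   # ~8.7 t/ha/yr
    guideline = rusle_soil_loss(C=0.8, **site)     # ~432 t/ha/yr
    ```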

  17. A new discriminative kernel from probabilistic models.

    PubMed

    Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert

    2002-10-01

    Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework for feature extractors from probabilistic models and use it to analyze the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
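
    Schematically, the construction replaces the Fisher kernel's marginal log-likelihood with the posterior log-odds and its parameter derivatives (notation paraphrased, not quoted):

    ```latex
    v(x,\theta) = \log P(y=+1 \mid x,\theta) - \log P(y=-1 \mid x,\theta),
    \qquad
    f_\theta(x) = \bigl( v(x,\theta),\ \partial_{\theta_1} v(x,\theta),\ \dots,\ \partial_{\theta_p} v(x,\theta) \bigr),
    \qquad
    K(x,x') = f_\theta(x)^{\top} f_\theta(x')
    ```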

  18. Predictor-Based Model Reference Adaptive Control

    NASA Technical Reports Server (NTRS)

    Lavretsky, Eugene; Gadient, Ross; Gregory, Irene M.

    2009-01-01

    This paper is devoted to robust, Predictor-based Model Reference Adaptive Control (PMRAC) design. The proposed adaptive system is compared with the now-classical Model Reference Adaptive Control (MRAC) architecture, and simulation examples are presented. Numerical evidence indicates that the proposed PMRAC tracking architecture has better transient characteristics than MRAC. In this paper, we present a state-predictor-based direct adaptive tracking design methodology for multi-input dynamical systems with partially known dynamics. The efficiency of the design is demonstrated using the short-period dynamics of an aircraft. Formal proof of the reported PMRAC benefits constitutes future research and will be reported elsewhere.

  19. Implementing ethics in the professions: examples from environmental epidemiology.

    PubMed

    Soskolne, Colin L; Sieswerda, Lee E

    2003-04-01

    The need to integrate ethics into professional life, from the grassroots up, has been recognized, and a comprehensive ethics program has been proposed as a model. The model includes the four dimensions of: consensus building, ethics guidelines development and review, education, and implementation. The activities of the International Society for Environmental Epidemiology (ISEE) are presented as examples and compared with the proposed model. Several innovative activities are described and incentives for ethical professional conduct are highlighted. The examples are provided for emulation by other professional organizations in the hope that, thereby, greater protection of the public interest will be achieved.

  20. Augmented Lagrange Hopfield network for solving economic dispatch problem in competitive environment

    NASA Astrophysics Data System (ADS)

    Vo, Dieu Ngoc; Ongsakul, Weerakorn; Nguyen, Khai Phuc

    2012-11-01

    This paper proposes an augmented Lagrange Hopfield network (ALHN) for solving the economic dispatch (ED) problem in a competitive environment. The proposed ALHN is a continuous Hopfield network whose energy function is based on an augmented Lagrange function, for efficiently dealing with constrained optimization problems. The ALHN method can overcome the drawbacks of the conventional Hopfield network such as local optima, long computational times, and restriction to linear constraints. The proposed method is used to solve the ED problem with two revenue models: payment for power delivered and payment for reserve allocated. The proposed ALHN has been tested on two systems, of 3 units and 10 units, for the two considered revenue models. The results obtained from the proposed method are compared to those from the differential evolution (DE) and particle swarm optimization (PSO) methods. The comparison indicates that the proposed method is very efficient for solving the problem; therefore, the proposed ALHN could be a favorable tool for the ED problem in a competitive environment.
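
    The energy function of such a network is built on the augmented Lagrange function, which in generic form reads (a textbook statement, not the paper's exact formulation):

    ```latex
    L_A(x,\lambda) = f(x) + \sum_i \lambda_i\, h_i(x) + \frac{\beta}{2} \sum_i h_i(x)^2
    ```

    Here f(x) is the dispatch objective (cost or negative revenue), the h_i(x) are the equality constraints such as power balance, and the quadratic penalty term is what removes the conventional Hopfield network's restriction to linear constraints.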

  1. An improved approach to infer protein-protein interaction based on a hierarchical vector space model.

    PubMed

    Zhang, Jiongmin; Jia, Ke; Jia, Jinmeng; Qian, Ying

    2018-04-27

    Comparing and classifying the functions of gene products is important in today's biomedical research. The semantic similarity derived from Gene Ontology (GO) annotations has been regarded as one of the most widely used indicators of protein interaction. Among the various approaches proposed, those based on the vector space model are relatively simple, but their effectiveness is far from satisfactory. We propose a Hierarchical Vector Space Model (HVSM) for computing semantic similarity between different genes or their products, which enhances the basic vector space model by introducing relations between GO terms. Besides the directly annotated terms, HVSM also takes their ancestors and descendants, related by "is_a" and "part_of" relations, into account. Moreover, HVSM introduces the concept of a Certainty Factor to calibrate the semantic similarity based on the number of terms annotated to genes. To assess the performance of our method, we applied HVSM to Homo sapiens and Saccharomyces cerevisiae protein-protein interaction datasets. Compared with TCSS, Resnik, and other classic similarity measures, HVSM achieved significant improvement in distinguishing positive from negative protein interactions. We also tested its correlation with sequence, EC, and Pfam similarity using the online tool CESSM. In AUC scores, HVSM showed improvements of up to 4% over TCSS, 8% over IntelliGO, 12% over the basic VSM, 6% over Resnik, 8% over Lin, 11% over Jiang, 8% over Schlicker, and 11% over SimGIC. The CESSM test showed HVSM was comparable to SimGIC and superior to all other similarity measures in CESSM, as well as to TCSS. Supplementary information and the software are available at https://github.com/kejia1215/HVSM .
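
    At its core, a vector space model scores two gene products by the cosine of weighted GO-term vectors; the sketch below shows only that skeleton (HVSM's ancestor/descendant expansion and Certainty Factor are simplified away, and the term weights are assumed inputs):

    ```python
    import numpy as np

    def vsm_similarity(terms_a, terms_b, weights):
        """terms_a/terms_b: sets of GO term ids annotated to two genes;
        weights: dict mapping a term id to its weight (default 1.0)."""
        vocab = sorted(terms_a | terms_b)
        va = np.array([weights.get(t, 1.0) if t in terms_a else 0.0 for t in vocab])
        vb = np.array([weights.get(t, 1.0) if t in terms_b else 0.0 for t in vocab])
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
    ```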

  2. A measurement fusion method for nonlinear system identification using a cooperative learning algorithm.

    PubMed

    Xia, Youshen; Kamel, Mohamed S

    2007-06-01

    Identification of a general nonlinear noisy system, viewed as estimation of a predictor function, is studied in this article. A measurement fusion method for the predictor function estimate is proposed. In the proposed scheme, observed data are first fused using an optimal fusion technique, and the optimally fused data are then incorporated in a nonlinear function estimator based on a robust least squares support vector machine (LS-SVM). A cooperative learning algorithm is proposed to implement the proposed measurement fusion method. Compared with related identification methods, the proposed method can minimize both the approximation error and the noise error. The performance analysis shows that the proposed optimal measurement fusion function estimate has a smaller mean square error than the LS-SVM function estimate. Moreover, the proposed cooperative learning algorithm converges globally to the optimal measurement fusion function estimate. Finally, the proposed measurement fusion method is applied to ARMA signal and spatio-temporal signal modeling. Experimental results show that the proposed measurement fusion method can provide a more accurate model.

  3. Modeling urban air pollution with optimized hierarchical fuzzy inference system.

    PubMed

    Tashayo, Behnam; Alimohammadi, Abbas

    2016-10-01

    Environmental exposure assessments (EEA) and epidemiological studies require urban air pollution models with appropriate spatial and temporal resolutions. Uncertain available data and inflexible models can limit air pollution modeling techniques, particularly in developing countries. This paper develops a hierarchical fuzzy inference system (HFIS) to model air pollution under different land use, transportation, and meteorological conditions. To improve performance, the system treats the issue as a large-scale and high-dimensional problem and develops the proposed model using a three-step approach. In the first step, a geospatial information system (GIS) and probabilistic methods are used to preprocess the data. In the second step, a hierarchical structure is generated based on the problem. In the third step, the accuracy and complexity of the model are simultaneously optimized with a multiple objective particle swarm optimization (MOPSO) algorithm. We examine the capabilities of the proposed model for predicting daily and annual mean PM2.5 and NO2 and compare the accuracy of the results with representative models from the existing literature. The benefits provided by the model features, including probabilistic preprocessing, multi-objective optimization, and the hierarchical structure, are evaluated by comparing five consecutive models in terms of accuracy and complexity criteria. Fivefold cross-validation is used to assess the performance of the generated models. The respective average RMSEs and coefficients of determination (R²) for the test datasets using the proposed model are as follows: daily PM2.5 = (8.13, 0.78), annual mean PM2.5 = (4.96, 0.80), daily NO2 = (5.63, 0.79), and annual mean NO2 = (2.89, 0.83). The obtained results demonstrate that the developed hierarchical fuzzy inference system can be utilized for modeling air pollution in EEA and epidemiological studies.

  4. Density-dependent microbial turnover improves soil carbon model predictions of long-term litter manipulations

    NASA Astrophysics Data System (ADS)

    Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret

    2017-04-01

    Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
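    The density-dependence idea can be shown in a few lines: making microbial turnover scale as B to the power beta with beta > 1 penalises large populations and damps oscillations. The sketch below integrates a two-pool substrate-microbe model for beta = 1 and beta = 2; all parameter values are illustrative, not those calibrated to the DIRT data.

```python
# Density-dependent microbial turnover sketch: biomass B dies at rate k*B**beta,
# with beta > 1 damping the oscillations seen in the beta = 1 (linear) case.
import numpy as np
from scipy.integrate import solve_ivp

def soc_microbe(t, y, inputs, beta):
    S, B = y                                  # substrate C, microbial biomass C
    vmax, km, eps, k = 1.0, 50.0, 0.4, 0.02   # illustrative parameters
    uptake = vmax * B * S / (km + S)          # Michaelis-Menten decomposition
    dS = inputs - uptake + k * B**beta        # necromass returns to the S pool
    dB = eps * uptake - k * B**beta           # growth minus density-dep. death
    return [dS, dB]

for beta in (1.0, 2.0):
    sol = solve_ivp(soc_microbe, (0, 500), [100.0, 2.0],
                    args=(0.5, beta), t_eval=np.linspace(0, 500, 5))
    print(f"beta={beta}: final S={sol.y[0, -1]:.1f}, B={sol.y[1, -1]:.2f}")
```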

  5. A modeling of dynamic storage assignment for order picking in beverage warehousing with Drive-in Rack system

    NASA Astrophysics Data System (ADS)

    Hadi, M. Z.; Djatna, T.; Sugiarto

    2018-04-01

    This paper develops a dynamic storage assignment model to solve the storage assignment problem (SAP) for beverage order picking in a drive-in rack warehousing system, determining the appropriate storage location and space for each beverage product dynamically so that system performance can be improved. The study constructs a graph model to represent the drive-in rack storage positions, then combines association rule mining, class-based storage policies, and an arrangement rule algorithm to determine an appropriate storage location and arrangement of products according to dynamic customer orders. The performance of the proposed model is measured by rule adjacency accuracy, travel distance (for the picking process), and the probability that a product expires, using a Last Come First Serve (LCFS) queue approach. Finally, the proposed model is implemented through computer simulation and its performance is compared with that of other storage assignment methods. The results indicate that the proposed model outperforms the other storage assignment methods.

  6. Probabilistic modeling of bifurcations in single-cell gene expression data using a Bayesian mixture of factor analyzers.

    PubMed

    Campbell, Kieran R; Yau, Christopher

    2017-03-15

    Modeling bifurcations in single-cell transcriptomics data has become an increasingly popular field of research. Several methods have been proposed to infer bifurcation structure from such data, but all rely on heuristic, non-probabilistic inference. Here we propose the first generative, fully probabilistic model for such inference, based on a Bayesian hierarchical mixture of factor analyzers. Our model exhibits competitive performance on large datasets despite implementing full Markov chain Monte Carlo sampling, and its unique hierarchical prior structure enables automatic determination of the genes driving the bifurcation process. We additionally propose an Empirical-Bayes-like extension that deals with the high levels of zero inflation in single-cell RNA-seq data, and we quantify when such models are useful. We apply our model to both real and simulated single-cell gene expression data and compare the results to existing pseudotime methods. Finally, we discuss both the merits and weaknesses of such a unified, probabilistic approach in the context of practical bioinformatics analyses.

  7. Mathematical model and metaheuristics for simultaneous balancing and sequencing of a robotic mixed-model assembly line

    NASA Astrophysics Data System (ADS)

    Li, Zixiang; Janardhanan, Mukund Nilakantan; Tang, Qiuhua; Nielsen, Peter

    2018-05-01

    This article presents the first method to simultaneously balance and sequence robotic mixed-model assembly lines (RMALB/S), which involves three sub-problems: task assignment, model sequencing and robot allocation. A new mixed-integer programming model is developed to minimize makespan and, using the CPLEX solver, small-size problems are solved to optimality. Two metaheuristics, a restarted simulated annealing algorithm and a co-evolutionary algorithm, are developed and improved to address this NP-hard problem. The restarted simulated annealing method replaces the current temperature with a new temperature to restart the search process. The co-evolutionary method uses a restart mechanism to generate a new population by modifying several vectors simultaneously. The proposed algorithms are tested on a set of benchmark problems and compared with five other high-performing metaheuristics; they outperform both their original versions and the benchmarked methods. The proposed algorithms are able to solve the balancing and sequencing problem of a robotic mixed-model assembly line effectively and efficiently.
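    The restart mechanism for simulated annealing is easy to demonstrate in isolation. The Python sketch below applies restarted SA to a toy load-balancing stand-in for the problem (tasks assigned to stations, minimising the maximum station time); the task times are invented, and precedence constraints, model sequencing, and robot allocation are omitted.

```python
# Restarted simulated annealing sketch on a toy line-balancing objective.
import math, random

random.seed(1)
times = [4, 7, 2, 5, 6, 3, 8, 4]      # hypothetical task times
n_stations = 3

def makespan(assign):
    loads = [0.0] * n_stations
    for task, st in enumerate(assign):
        loads[st] += times[task]
    return max(loads)                  # proxy for makespan

best = None
for restart in range(5):                           # restart loop: fresh temperature
    cur = [random.randrange(n_stations) for _ in times]
    T = 10.0
    while T > 0.01:
        cand = cur[:]                              # move one random task
        cand[random.randrange(len(times))] = random.randrange(n_stations)
        delta = makespan(cand) - makespan(cur)
        if delta <= 0 or random.random() < math.exp(-delta / T):
            cur = cand                             # Metropolis acceptance
        T *= 0.95                                  # geometric cooling
    if best is None or makespan(cur) < makespan(best):
        best = cur

print("best assignment:", best, "makespan:", makespan(best))
```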

  8. Hot news recommendation system from heterogeneous websites based on bayesian model.

    PubMed

    Xia, Zhengyou; Xu, Shengwu; Liu, Ningzhong; Zhao, Zhengkang

    2014-01-01

    Most current news recommendation methods are suited to news from a single website, not to news from many heterogeneous news websites. Previous research has proposed news recommender systems based on different strategies to provide personalized news services for online readers. However, little work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staff). In this paper, we propose a hot news recommendation model based on a Bayesian model, drawing on hundreds of different news websites. In the model, we determine whether a news item is hot by calculating its joint probability. We evaluate and compare our proposed recommendation model with the judgments of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, where it achieved very good results.
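    A minimal sketch of joint-probability scoring, in a naive-Bayes style: each feature contributes a likelihood ratio to the posterior probability that a story is hot. The features and all probability values below are invented for illustration; in the paper they would be estimated from the heterogeneous websites.

```python
# Naive-Bayes style "hotness" score: joint probability of features given hot
# vs. not hot, combined with a prior. All numbers are made-up illustrations.
p_hot = 0.1                                   # assumed prior of hot news

# (feature, value) -> (P(value | hot), P(value | not hot)), assumed numbers
likelihoods = {
    ("many_sites", True): (0.8, 0.1),         # story appears on many websites
    ("rapid_growth", True): (0.7, 0.2),       # coverage grows quickly
    ("front_page", False): (0.3, 0.6),
}

def posterior_hot(features):
    num, den = p_hot, 1.0 - p_hot
    for key in features:
        l_hot, l_cold = likelihoods[key]
        num *= l_hot                           # joint probability given hot
        den *= l_cold                          # joint probability given not hot
    return num / (num + den)

story = [("many_sites", True), ("rapid_growth", True), ("front_page", False)]
print(f"P(hot | story) = {posterior_hot(story):.3f}")
```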

  9. Hot News Recommendation System from Heterogeneous Websites Based on Bayesian Model

    PubMed Central

    Xia, Zhengyou; Xu, Shengwu; Liu, Ningzhong; Zhao, Zhengkang

    2014-01-01

    Most current news recommendation methods are suited to news from a single website, not to news from many heterogeneous news websites. Previous research has proposed news recommender systems based on different strategies to provide personalized news services for online readers. However, little work has been reported on utilizing hundreds of heterogeneous news websites to provide top hot news services for group customers (e.g., government staff). In this paper, we propose a hot news recommendation model based on a Bayesian model, drawing on hundreds of different news websites. In the model, we determine whether a news item is hot by calculating its joint probability. We evaluate and compare our proposed recommendation model with the judgments of human experts on real data sets. Experimental results demonstrate the reliability and effectiveness of our method. We also implemented this model in the hot news recommendation system of the Hangzhou city government in 2013, where it achieved very good results. PMID:25093207

  10. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    PubMed

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, the stacked autoencoder Levenberg-Marquardt model, a deep neural network architecture aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of a traffic flow forecasting model with a deep learning approach has been presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  11. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model.

    PubMed

    Damij, Nadja; Boškoski, Pavle; Bohanec, Marko; Mileva Boshkoska, Biljana

    2016-01-01

    The omnipresent need for optimisation requires constant improvement of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. Effective business process simulation software (BPSS) is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to the complex selection criteria, which include the quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging, so various decision support models are employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools by their technical characteristics, employing DEX and the qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results.

  12. On performance of parametric and distribution-free models for zero-inflated and over-dispersed count responses.

    PubMed

    Tang, Wan; Lu, Naiji; Chen, Tian; Wang, Wenjuan; Gunzler, Douglas David; Han, Yu; Tu, Xin M

    2015-10-30

    Zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used to model zero-inflated count responses. These models extend the Poisson and negative binomial (NB) to address excessive zeros in the count response. By adding a degenerate distribution centered at 0 and interpreting it as describing a non-risk group in the population, the ZIP (ZINB) models a two-component population mixture. As in applications of the Poisson and NB, the key difference between ZIP and ZINB is that the ZINB allows for overdispersion in its NB component when modeling the count response for the at-risk group. In practice, however, overdispersion often does not follow the NB, and applying the ZINB to such data yields invalid inference. If the sources of overdispersion are known, other parametric models may be used to model the overdispersion directly, but such models are likewise subject to assumed distributions. Further, this approach may not be applicable if information about the sources of overdispersion is unavailable. In this paper, we propose a distribution-free alternative and compare its performance with these popular parametric models as well as a moment-based approach proposed by Yu et al. [Statistics in Medicine 2013; 32: 2390-2405]. Like generalized estimating equations, the proposed approach requires no elaborate distributional assumptions. Compared with the approach of Yu et al., it is more robust to overdispersed zero-inflated responses. We illustrate our approach with both simulated and real study data. Copyright © 2015 John Wiley & Sons, Ltd.
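    For reference, the parametric baseline being compared against can be written compactly. The Python sketch below simulates zero-inflated Poisson data and recovers the mixing weight and Poisson mean by maximum likelihood; it illustrates the ZIP likelihood only, not the distribution-free estimator proposed in the paper, and the simulation parameters are arbitrary.

```python
# Zero-inflated Poisson fit by maximum likelihood: P(0) = pi + (1-pi)e^{-mu},
# P(k) = (1-pi) Poisson(k; mu) for k >= 1 (here evaluated for all counts).
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(42)
n = 2000
at_risk = rng.random(n) < 0.7                  # 30% structural zeros
y = np.where(at_risk, rng.poisson(2.5, n), 0)  # simulated ZIP(pi=0.3, mu=2.5)

def neg_loglik(theta):
    pi = 1 / (1 + np.exp(-theta[0]))           # logit-transformed mixing weight
    mu = np.exp(theta[1])                      # log-transformed Poisson mean
    log_p0 = np.log(pi + (1 - pi) * np.exp(-mu))                   # zeros
    log_pk = np.log1p(-pi) - mu + y * np.log(mu) - gammaln(y + 1)  # counts
    return -np.sum(np.where(y == 0, log_p0, log_pk))

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
print(f"pi_hat={pi_hat:.3f}, mu_hat={np.exp(fit.x[1]):.3f}")
```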

  13. A copula-multifractal volatility hedging model for CSI 300 index futures

    NASA Astrophysics Data System (ADS)

    Wei, Yu; Wang, Yudong; Huang, Dengshi

    2011-11-01

    In this paper, we propose a new hedging model combining the newly introduced multifractal volatility (MFV) model and dynamic copula functions. Using high-frequency intraday quotes of the spot Shanghai Stock Exchange Composite Index (SSEC), the spot China Securities Index 300 (CSI 300), and CSI 300 index futures, we compare the direct and cross hedging effectiveness of the copula-MFV model with several popular copula-GARCH models. The main empirical results show that the proposed copula-MFV model generally obtains better hedging effectiveness than the copula-GARCH-type models. Furthermore, hedging strategies based on the MFV model involve lower transaction costs than those based on the GARCH-type models. The findings of this paper indicate that multifractal analysis may offer a new way of designing quantitative hedging models for financial futures.

  14. Multi-modal gesture recognition using integrated model of motion, audio and video

    NASA Astrophysics Data System (ADS)

    Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko

    2015-07-01

    Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With the development of motion sensors, multiple data sources have become available, leading to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. To solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system recognizes observed gestures using the three models, whose recognition results are integrated by the proposed framework to produce the final result. The motion and audio models are learned using Hidden Markov Models, while a Random Forest classifier is used to learn the video model. In the experiments testing the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the organizer of the Multi-Modal Gesture Recognition Challenge (MMGRC) workshop. The comparison results show that the multi-modal model composed of the three models achieves the highest recognition rate, meaning that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides application technology for understanding human actions of daily life more precisely.

  15. Efficient processing of multiple nested event pattern queries over multi-dimensional event streams based on a triaxial hierarchical model.

    PubMed

    Xiao, Fuyuan; Aritsugi, Masayoshi; Wang, Qing; Zhang, Rong

    2016-09-01

    For efficient and sophisticated analysis of the complex event patterns that appear in streams of big data from health care information systems, and to support decision-making, a triaxial hierarchical model is proposed in this paper. Our triaxial hierarchical model is developed by focusing on the hierarchies among nested event pattern queries together with an event concept hierarchy, thereby allowing us to identify the relationships among the expressions and sub-expressions of the queries extensively. We devise a cost-based heuristic by means of the triaxial hierarchical model to find an optimised query execution plan in terms of the costs of both the operators and the communications between them. Based on the triaxial hierarchical model, we can also determine how to reuse the results of common sub-expressions across multiple queries. By integrating the optimised query execution plan with the reuse schemes, a multi-query optimisation strategy is developed to accomplish efficient processing of multiple nested event pattern queries. We present empirical studies in which the performance of the multi-query optimisation strategy was examined under various stream input rates and workloads. Specifically, the workloads of pattern queries can be used to support monitoring of patients' conditions, while experiments with varying stream input rates correspond to changes in the number of patients that a system should manage, and burst input rates correspond to rushes of patients to be taken care of. The experimental results show that our proposal improves throughput by factors of about 4 and 2 in Workload 1 compared with the related works, by factors of about 3 and 2 in Workload 2, and by a factor of about 6 in Workload 3 compared with the related work. The experimental results demonstrate that our proposal is able to process complex queries efficiently, which can support health information systems and further decision-making. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. A Fuzzy Goal Programming for a Multi-Depot Distribution Problem

    NASA Astrophysics Data System (ADS)

    Nunkaew, Wuttinan; Phruksaphanrat, Busaba

    2010-10-01

    A fuzzy goal programming model for solving the Multi-Depot Distribution Problem (MDDP) is proposed in this research. The proposed model is applied in the first step of the Assignment First-Routing Second (AFRS) approach. In practice, a basic transportation model is usually chosen to solve this kind of problem in the assignment step, after which the Vehicle Routing Problem (VRP) model is used to compute the delivery cost in the routing step. However, the basic transportation model considers only the depot-to-customer relationship; the customer-to-customer relationship should also be considered, since it exists in the routing step. Both relationships are handled here using Preemptive Fuzzy Goal Programming (P-FGP). The first fuzzy goal is set on the total transportation cost and the second on a satisfactory level of the overall independence value. A case study is used to demonstrate the effectiveness of the proposed model. Results from the proposed model are compared with those of the basic transportation model previously used by the company. The proposed model reduces the actual delivery cost in the routing step owing to the better result in the assignment step. Defining fuzzy goals by membership functions is more realistic than using crisp goals. Furthermore, the flexibility to adjust goals and an acceptable satisfactory level for the decision maker can be increased, and the optimal solution can be obtained.

  17. ACUTE METHANOL TOXICITY IN MINIPIGS

    EPA Science Inventory

    The pig has been proposed as a potential animal model for methanol-induced neuro-ocular toxicosis in humans because of its reported low liver tetrahydrofolate levels and, therefore, slower formate metabolism compared to humans. To determine the validity of the animal model, min...

  18. Functional form diagnostics for Cox's proportional hazards model.

    PubMed

    León, Larry F; Tsai, Chih-Ling

    2004-03-01

    We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.

  19. Robust and efficient estimation with weighted composite quantile regression

    NASA Astrophysics Data System (ADS)

    Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng

    2016-09-01

    In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators inherit robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, and Cauchy distributions. We also suggest an algorithm for fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
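    The composite quantile objective itself is simple to state in code: one common slope, one intercept per quantile level, and a sum of check losses. The sketch below uses equal weights rather than the paper's data-driven weighting scheme and a generic optimizer rather than a specialized algorithm; the data are synthetic with heavy-tailed errors.

```python
# Composite quantile regression sketch: K quantile levels share one slope,
# each with its own intercept, minimising the summed check (pinball) loss.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
x = rng.normal(size=300)
y = 1.5 * x + rng.standard_t(df=3, size=300)    # heavy-tailed errors

taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])      # quantile levels

def check_loss(u, tau):
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def cqr_objective(theta):
    b_taus, beta = theta[:-1], theta[-1]        # K intercepts + common slope
    return sum(check_loss(y - b - beta * x, t).sum()
               for b, t in zip(b_taus, taus))

fit = minimize(cqr_objective, np.zeros(len(taus) + 1), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("CQR slope:", round(fit.x[-1], 3),
      " OLS slope:", round(np.polyfit(x, y, 1)[0], 3))
```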

  20. Pigeon interaction mode switch-based UAV distributed flocking control under obstacle environments.

    PubMed

    Qiu, Huaxin; Duan, Haibin

    2017-11-01

    Unmanned aerial vehicle (UAV) flocking control is a serious and challenging problem due to local interactions and changing environments. In this paper, a pigeon flocking model and a pigeon coordinated obstacle-avoiding model are proposed, based on the behavior whereby pigeon flocks switch between hierarchical and egalitarian interaction modes at different flight phases. Owing to the essential similarity between bird flocks and UAV swarms, a distributed flocking control algorithm based on the proposed pigeon flocking and coordinated obstacle-avoiding models is designed to coordinate a heterogeneous UAV swarm to fly through obstacle environments with few informed individuals. Comparative simulation results are elaborated to show the feasibility, validity and superiority of the proposed algorithm. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Culture and Developmental Trajectories: A Discussion on Contemporary Theoretical Models

    ERIC Educational Resources Information Center

    de Carvalho, Rafael Vera Cruz; Seidl-de-Moura, Maria Lucia; Martins, Gabriela Dal Forno; Vieira, Mauro Luís

    2014-01-01

    This paper aims to describe, compare and discuss the theoretical models proposed by Patricia Greenfield, Çigdem Kagitçibasi and Heidi Keller. Their models have the common goal of understanding the developmental trajectories of self based on dimensions of autonomy and relatedness that are structured according to specific cultural and environmental…

  2. Real-time 3-D space numerical shake prediction for earthquake early warning

    NASA Astrophysics Data System (ADS)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume that the wave propagates over the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D sphere of the earth, 2-D modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D model estimates real-time ground motion precisely and alleviates the overprediction seen with the 2-D model.

  3. A Bayesian prediction model between a biomarker and the clinical endpoint for dichotomous variables.

    PubMed

    Jiang, Zhiwei; Song, Yang; Shou, Qiong; Xia, Jielai; Wang, William

    2014-12-20

    Early biomarkers are helpful for predicting clinical endpoints and for evaluating efficacy in clinical trials, even if the biomarker cannot replace the clinical outcome as a surrogate. The building and evaluation of an association model between biomarkers and clinical outcomes are two equally important concerns in the prediction of clinical outcome. This paper addresses both issues in a Bayesian framework. A Bayesian meta-analytic approach is proposed to build a prediction model between the biomarker and clinical endpoint for dichotomous variables. Compared with other Bayesian methods, the proposed model requires only trial-level summary data of historical trials in model building. Using extensive simulations, we evaluate the link function and the application conditions of the proposed Bayesian model under scenario (i) equal positive predictive value (PPV) and negative predictive value (NPV) and (ii) higher NPV and lower PPV. In the simulations, patient-level data are generated to evaluate the meta-analytic model, and PPV and NPV are employed to describe the patient-level relationship between the biomarker and the clinical outcome. The minimum number of historical trials to be included in building the model is also considered. The simulations show that the logit link function performs better than the odds and cloglog functions under both scenarios. PPV/NPV ≥0.5 for equal PPV and NPV, and PPV + NPV ≥1 for higher NPV and lower PPV, are proposed in order to predict the clinical outcome accurately and precisely when the proposed model is used. Twenty historical trials are required for model building when PPV and NPV are equal; for unequal PPV and NPV, the proposed minimum number of historical trials is five. A hypothetical example shows an application of the proposed model in global drug development. The proposed Bayesian model is able to predict the clinical endpoint well from the observed biomarker data for dichotomous variables as long as the conditions are satisfied. It could be applied in drug development, but the practical problems arising in applications have to be studied in further research.

  4. Labeled RFS-Based Track-Before-Detect for Multiple Maneuvering Targets in the Infrared Focal Plane Array.

    PubMed

    Li, Miao; Li, Jun; Zhou, Yiyu

    2015-12-08

    The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts: the MM-LMB filter and the MM-LMB smoother. For the MM-LMB filter, the original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and the posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing.

  5. Labeled RFS-Based Track-Before-Detect for Multiple Maneuvering Targets in the Infrared Focal Plane Array

    PubMed Central

    Li, Miao; Li, Jun; Zhou, Yiyu

    2015-01-01

    The problem of jointly detecting and tracking multiple targets from the raw observations of an infrared focal plane array is a challenging task, especially for the case with uncertain target dynamics. In this paper a multi-model labeled multi-Bernoulli (MM-LMB) track-before-detect method is proposed within the labeled random finite sets (RFS) framework. The proposed track-before-detect method consists of two parts—MM-LMB filter and MM-LMB smoother. For the MM-LMB filter, original LMB filter is applied to track-before-detect based on target and measurement models, and is integrated with the interacting multiple models (IMM) approach to accommodate the uncertainty of target dynamics. For the MM-LMB smoother, taking advantage of the track labels and posterior model transition probability, the single-model single-target smoother is extended to a multi-model multi-target smoother. A Sequential Monte Carlo approach is also presented to implement the proposed method. Simulation results show the proposed method can effectively achieve tracking continuity for multiple maneuvering targets. In addition, compared with the forward filtering alone, our method is more robust due to its combination of forward filtering and backward smoothing. PMID:26670234

  6. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles.

    PubMed

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed using search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of a time-series forecasting method, the Autoregressive Integrated Moving Average (ARIMA), to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing it with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words.
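    A loose Python sketch of the hybrid idea follows: an ARIMA one-step forecast and a popularity index are combined as inputs to a small neural network. The synthetic sales and popularity series, the ARIMA order, and the network size are all assumptions, and statsmodels plus scikit-learn are used as generic stand-ins for the paper's tooling.

```python
# Hybrid forecast sketch: ARIMA prediction + title-popularity feature fed to
# a small neural network that learns to correct the time-series forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(120)
popularity = 1 + 0.5 * np.sin(t / 6) + rng.normal(0, 0.1, 120)   # synthetic
sales = 50 + 0.2 * t + 8 * popularity + rng.normal(0, 2, 120)    # synthetic

# In-sample one-step-ahead ARIMA predictions for indices 2..119
arima_pred = ARIMA(sales, order=(1, 1, 1)).fit().predict(start=2, end=119)

X = np.column_stack([arima_pred, popularity[1:119]])  # ARIMA pred + popularity
y = sales[2:]                                         # actual sales to predict

split = 90
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((net.predict(X[split:]) - y[split:])**2))
print(f"hybrid test RMSE: {rmse:.2f}")
```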

  7. Fundamental incorporation of the density change during melting of a confined phase change material

    NASA Astrophysics Data System (ADS)

    Hernández, Ernesto M.; Otero, José A.

    2018-02-01

    The modeling of thermal diffusion processes taking place in a phase change material presents a challenge when the dynamics of the phase transition are coupled to the mechanical properties of the container. Thermo-mechanical models have been developed by several authors; however, it will be shown that these models only explain the phase transition dynamics at low pressures, when the density of each phase experiences negligible changes. In our proposal, a new energy-mass balance equation at the interface is derived and shown to be a consequence of mass conservation. The density change experienced by each phase is predicted by the proposed formulation of the problem. Numerical and semi-analytical solutions to the proposed model are presented for an example of a high-temperature phase change material. The solutions to the models presented by other authors are observed to be well-behaved close to the isobaric limit. However, compared to the results obtained from our model, the changes in the fusion temperature, latent heat, and absolute pressure are found to be greatly overestimated by other proposals when the phase transition is studied close to the isochoric regime.

  8. A Hybrid Neural Network Model for Sales Forecasting Based on ARIMA and Search Popularity of Article Titles

    PubMed Central

    Omar, Hani; Hoang, Van Hai; Liu, Duen-Ren

    2016-01-01

    Enhancing sales and operations planning through forecasting analysis and business intelligence is demanded in many industries and enterprises. Publishing industries usually pick attractive titles and headlines for their stories to increase sales, since popular article titles and headlines can attract readers to buy magazines. In this paper, information retrieval techniques are adopted to extract words from article titles. The popularity measures of article titles are then analyzed using search indexes obtained from the Google search engine. Backpropagation Neural Networks (BPNNs) have successfully been used to develop prediction models for sales forecasting. In this study, we propose a novel hybrid neural network model for sales forecasting based on the prediction result of time series forecasting and the popularity of article titles. The proposed model uses the historical sales data, the popularity of article titles, and the prediction result of a time-series forecasting method, the Autoregressive Integrated Moving Average (ARIMA), to learn a BPNN-based forecasting model. Our proposed forecasting model is experimentally evaluated by comparing it with conventional sales prediction techniques. The experimental results show that our proposed forecasting method outperforms conventional techniques that do not consider the popularity of title words. PMID:27313605

  9. Parameter estimation using weighted total least squares in the two-compartment exchange model.

    PubMed

    Garpebring, Anders; Löfstedt, Tommy

    2018-01-01

    The linear least squares (LLS) estimator provides a fast approach to parameter estimation in the linearized two-compartment exchange model. However, the LLS method may introduce a bias through correlated noise in the system matrix of the model. The purpose of this work is to present a new estimator for the linearized two-compartment exchange model that takes this noise into account. To account for the noise in the system matrix, we developed an estimator based on the weighted total least squares (WTLS) method. Using simulations, the proposed WTLS estimator was compared, in terms of accuracy and precision, with an LLS estimator and a nonlinear least squares (NLLS) estimator. The WTLS method improved the accuracy relative to the LLS method to levels comparable to the NLLS method. This improvement came at the expense of increased computational time; however, the WTLS was still faster than the NLLS method. At high signal-to-noise ratios all methods provided similar precision, while inconclusive results were observed at low signal-to-noise ratios. The proposed method improves accuracy compared to the LLS method, although at an increased computational cost. Magn Reson Med 79:561-567, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
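    The contrast between LLS and an (unweighted) total least squares solution is easy to reproduce. In the Python sketch below, noise in the system matrix biases the LLS estimate toward zero, while the classical SVD-based TLS solution corrects for it; the WTLS weighting of the paper is not reproduced, and the regression problem is a generic stand-in rather than the two-compartment model.

```python
# Total least squares via SVD: accounts for noise in the design matrix X as
# well as in the response y, unlike ordinary (linear) least squares.
import numpy as np

rng = np.random.default_rng(5)
n, true_beta = 500, np.array([2.0, -1.0])
X_clean = rng.normal(size=(n, 2))
y_clean = X_clean @ true_beta
X = X_clean + rng.normal(0, 0.3, X_clean.shape)   # noisy system matrix
y = y_clean + rng.normal(0, 0.3, n)               # noisy response

beta_lls = np.linalg.lstsq(X, y, rcond=None)[0]   # ignores noise in X (biased)

# TLS: smallest right singular vector of the augmented matrix [X | y]
_, _, Vt = np.linalg.svd(np.column_stack([X, y]))
v = Vt[-1]
beta_tls = -v[:-1] / v[-1]

print("true:", true_beta)
print("LLS: ", beta_lls.round(3))   # attenuated toward zero
print("TLS: ", beta_tls.round(3))   # corrects the attenuation
```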

  10. Support vector machine in crash prediction at the level of traffic analysis zones: Assessing the spatial proximity effects.

    PubMed

    Dong, Ni; Huang, Helai; Zheng, Liang

    2015-09-01

    In zone-level crash prediction, accounting for spatial dependence has become an extensively studied topic. This study proposes a Support Vector Machine (SVM) model to address complex, large, and multi-dimensional spatial data in crash prediction. A Correlation-based Feature Selector (CFS) was applied to evaluate candidate factors possibly related to zonal crash frequency when handling high-dimensional spatial data. To demonstrate the proposed approaches and to compare them with a Bayesian spatial model with a conditional autoregressive prior (i.e., CAR), a dataset from Hillsborough County, Florida was employed. The results showed that SVM models accounting for spatial proximity outperform the non-spatial model in terms of model fit and predictive performance, which indicates the reasonableness of considering cross-zonal spatial correlations. The best predictive capability was associated with the model considering proximity by centroid distance, using the RBF kernel and setting 10% of the whole dataset as the testing data, which further exhibits the SVM models' capacity for addressing comparatively complex spatial data in regional crash prediction modeling. Moreover, SVM models exhibit better goodness-of-fit than CAR models when utilizing the whole dataset as the sample. A sensitivity analysis of the centroid-distance-based spatial SVM models was conducted to capture the impacts of explanatory variables on the mean predicted probabilities of crash occurrence. The results conform to the coefficient estimates of the CAR models, which supports the employment of the SVM model as an alternative in regional safety modeling. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Modeling of near-wall turbulence

    NASA Technical Reports Server (NTRS)

    Shih, T. H.; Mansour, N. N.

    1990-01-01

    An improved k-epsilon model and a second-order closure model are presented for low-Reynolds-number turbulence near a wall. For the k-epsilon model, a modified form of the eddy viscosity with correct asymptotic near-wall behavior is suggested, and a model for the pressure diffusion term in the turbulent kinetic energy equation is proposed. For the second-order closure model, the existing models of the Reynolds stress equations are modified to have proper near-wall behavior. A dissipation rate equation for the turbulent kinetic energy is also reformulated. The proposed models satisfy realizability and will not produce unphysical behavior. Fully developed channel flows are used for model testing, and the calculations are compared with direct numerical simulations. It is shown that the present models, both the k-epsilon model and the second-order closure model, perform well in predicting near-wall turbulence. Significant improvements over previous models are obtained.

  12. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.

  13. Discriminative analysis of early Alzheimer's disease based on two intrinsically anti-correlated networks with resting-state fMRI.

    PubMed

    Wang, Kun; Jiang, Tianzi; Liang, Meng; Wang, Liang; Tian, Lixia; Zhang, Xinqing; Li, Kuncheng; Liu, Zhening

    2006-01-01

    In this work, we proposed a discriminative model of Alzheimer's disease (AD) based on multivariate pattern classification and functional magnetic resonance imaging (fMRI). The model uses the correlation/anti-correlation coefficients of two intrinsically anti-correlated networks in resting brains, as suggested by two recent studies, as the classification features. Pseudo-Fisher Linear Discriminative Analysis (pFLDA) was then performed on the feature space to generate a linear classifier. Using leave-one-out (LOO) cross-validation, our results showed a correct classification rate of 83%. We also compared the proposed model with one based on whole-brain functional connectivity; the proposed model outperformed it significantly, which implies that the two intrinsically anti-correlated networks may be a more susceptible part of the whole-brain network in the early stage of AD.
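    The classification step can be sketched generically: a linear discriminant on two correlation-derived features, validated with leave-one-out. The feature values below are random stand-ins for the anti-correlated-network coefficients, and scikit-learn's standard LDA is used in place of the paper's pseudo-Fisher variant.

```python
# Linear discriminant classification with leave-one-out cross-validation on
# two hypothetical network-correlation features (controls vs. patients).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(11)
n_per_group = 20
controls = rng.normal([-0.4, 0.5], 0.15, size=(n_per_group, 2))  # assumed means
patients = rng.normal([-0.2, 0.3], 0.15, size=(n_per_group, 2))  # assumed means
X = np.vstack([controls, patients])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Each LOO fold scores 0 or 1, so the mean is the correct classification rate
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"LOO classification accuracy: {acc.mean():.2%}")
```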

  14. Particle swarm optimization algorithm based parameters estimation and control of epileptiform spikes in a neural mass model

    NASA Astrophysics Data System (ADS)

    Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan

    2016-07-01

    This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress epileptic spikes in neural mass models, where epileptiform spikes are recognized as biomarkers of transitions from normal (interictal) to seizure (ictal) activity. The PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all epileptic spikes; its estimates of unmeasurable parameters are significantly better than those of the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by a proportional-integral controller. Numerical simulations are carried out to illustrate the effectiveness of the proposed method as well as its potential value for model-based early seizure detection and closed-loop control treatment design.
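    The estimation mechanics of PSO can be shown on a much simpler system. The sketch below recovers the amplitude and frequency of a noisy sinusoid by minimising a mean squared error with a textbook PSO loop; the neural mass model, spike detection, and controller of the paper are not reproduced, and all PSO hyperparameters are conventional defaults.

```python
# Bare-bones particle swarm optimization for parameter estimation: fit the
# amplitude a and frequency f of a noisy sinusoid by minimising MSE.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200)
true_a, true_f = 1.5, 0.8
signal = true_a * np.sin(2 * np.pi * true_f * t) + rng.normal(0, 0.1, t.size)

def loss(params):
    a, f = params
    return np.mean((a * np.sin(2 * np.pi * f * t) - signal) ** 2)

n_particles, w, c1, c2 = 30, 0.7, 1.5, 1.5        # inertia, cognitive, social
pos = rng.uniform([0.1, 0.1], [3.0, 2.0], size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val                    # update personal bests
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()       # update global best

print("estimated (a, f):", gbest.round(3), " true:", (true_a, true_f))
```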

  15. Estimating statistical power for open-enrollment group treatment trials.

    PubMed

    Morgan-Lopez, Antonio A; Saavedra, Lissette M; Hien, Denise A; Fals-Stewart, William

    2011-01-01

    Modeling turnover in group membership has been identified as a key barrier contributing to a disconnect between the manner in which behavioral treatment is conducted (open-enrollment groups) and the designs of substance abuse treatment trials (closed-enrollment groups, individual therapy). Latent class pattern mixture models (LCPMMs) are emerging tools for modeling data from open-enrollment groups with membership turnover in recently proposed treatment trials. The current article illustrates an approach to conducting power analyses for open-enrollment designs based on Monte Carlo simulation of LCPMMs, using parameters derived from published data from a randomized controlled trial comparing Seeking Safety to a Community Care condition for women presenting with comorbid posttraumatic stress disorder and substance use disorders. The example addresses discrepancies between the analysis framework assumed in the power analyses of many recently proposed open-enrollment trials and the proposed use of LCPMMs for data analysis. Copyright © 2011 Elsevier Inc. All rights reserved.

  16. Bayesian Local Contamination Models for Multivariate Outliers

    PubMed Central

    Page, Garritt L.; Dunson, David B.

    2013-01-01

    In studies where data are generated from multiple locations or sources it is common for there to exist observations that are quite unlike the majority. Motivated by the application of establishing a reference value in an inter-laboratory setting when outlying labs are present, we propose a local contamination model that is able to accommodate unusual multivariate realizations in a flexible way. The proposed method models the process level of a hierarchical model using a mixture with a parametric component and a possibly nonparametric contamination. Much of the flexibility in the methodology is achieved by allowing varying random subsets of the elements in the lab-specific mean vectors to be allocated to the contamination component. Computational methods are developed and the methodology is compared to three other possible approaches using a simulation study. We apply the proposed method to a NIST/NOAA sponsored inter-laboratory study which motivated the methodological development. PMID:24363465

  17. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982, Development of a Conservative Model Validation Approach for Reliable Analysis. ... obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the ... In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account ...

  18. Proposals for enhanced health risk assessment and stratification in an integrated care scenario.

    PubMed

    Dueñas-Espín, Ivan; Vela, Emili; Pauws, Steffen; Bescos, Cristina; Cano, Isaac; Cleries, Montserrat; Contel, Joan Carles; de Manuel Keenoy, Esteban; Garcia-Aymerich, Judith; Gomez-Cabrero, David; Kaye, Rachelle; Lahr, Maarten M H; Lluch-Ariet, Magí; Moharra, Montserrat; Monterde, David; Mora, Joana; Nalin, Marco; Pavlickova, Andrea; Piera, Jordi; Ponce, Sara; Santaeugenia, Sebastià; Schonenberg, Helen; Störk, Stefan; Tegner, Jesper; Velickovski, Filip; Westerteicher, Christoph; Roca, Josep

    2016-04-15

    Population-based health risk assessment and stratification are considered highly relevant for large-scale implementation of integrated care, by facilitating service design and case identification. The principal objective of the study was to analyse the five health-risk assessment strategies and health indicators used in the five regions participating in the Advancing Care Coordination and Telehealth Deployment (ACT) programme (http://www.act-programme.eu). The second purpose was to elaborate on strategies toward enhanced health risk predictive modelling in the clinical scenario. The setting comprised the five ACT regions: Scotland (UK), the Basque Country (ES), Catalonia (ES), Lombardy (I) and Groningen (NL), with the teams responsible for regional data management in the five regions as participants. We characterised and compared risk assessment strategies among the ACT regions by analysing the operational health risk predictive modelling tools used for population-based stratification, as well as the health indicators available at regional level. The analysis of the risk assessment tool deployed in Catalonia in 2015 (GMAs, Adjusted Morbidity Groups) was used as a basis to propose how population-based analytics could contribute to clinical risk prediction. There was consensus on the need for a population health approach to generate health risk predictive modelling. However, this strategy was fully in place in only two ACT regions: the Basque Country and Catalonia. We found marked differences among regions in health risk predictive modelling tools and health indicators, and identified key factors constraining their comparability. The research proposes means to overcome the current limitations and to use population-based health risk prediction for enhanced clinical risk assessment. The results indicate the need for further efforts to improve both the comparability and the flexibility of current population-based health risk predictive modelling approaches. The applicability and impact of the proposals for enhanced clinical risk assessment require prospective evaluation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  19. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG; it captures the effects of a particular active brain region on others. In this paper, a new method based on the dual Kalman filter is proposed. In this method, a brain source localization method (standardized low-resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and time dependence between sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between active regions. The advantage of this method is that it estimates the activity of different brain parts simultaneously with the calculation of effective connectivity between active regions: by combining the dual Kalman filter with brain source localization, the source activity is updated over time in addition to the connectivity estimates. The performance of the proposed method was evaluated first on simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. The method was then applied to real signals, and the estimation error over a sweeping window was calculated. In both simulated and real conditions, the proposed method gives acceptable results with the least mean square error.
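    The dual-filter idea can be reduced to a scalar example: one Kalman filter tracks the hidden state of an AR(1) source while a second, interleaved filter tracks the AR coefficient under a random-walk model. All noise variances below are assumptions, and the single coefficient is a crude stand-in for the MVAR connectivity parameters estimated in the paper.

```python
# Dual Kalman filter sketch for a scalar AR(1) source observed in noise:
# a state filter and a parameter filter are run in alternation.
import numpy as np

rng = np.random.default_rng(9)
T, true_a = 400, 0.85
x = np.zeros(T)
for k in range(1, T):
    x[k] = true_a * x[k - 1] + rng.normal(0, 0.5)   # hidden AR(1) process
y = x + rng.normal(0, 0.3, T)                       # noisy observations

Q, R, Qa = 0.25, 0.09, 1e-4        # state, observation, parameter noise (assumed)
xh, P = 0.0, 1.0                   # state estimate and variance
ah, Pa = 0.0, 1.0                  # parameter estimate and variance

for k in range(1, T):
    x_prev = xh
    # --- state filter (uses the current parameter estimate) ---
    xp, Pp = ah * xh, ah**2 * P + Q
    Kx = Pp / (Pp + R)
    xh, P = xp + Kx * (y[k] - xp), (1 - Kx) * Pp
    # --- parameter filter (random walk; observation y[k] ~ a * x_prev) ---
    Pa += Qa
    H = x_prev
    Ka = Pa * H / (H * Pa * H + Q + R)
    ah, Pa = ah + Ka * (y[k] - ah * x_prev), (1 - Ka * H) * Pa

print(f"estimated AR coefficient: {ah:.3f} (true {true_a})")
```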

  20. A novel contact model of piezoelectric traveling wave rotary ultrasonic motors with the finite volume method.

    PubMed

    Renteria-Marquez, I A; Renteria-Marquez, A; Tseng, B T L

    2018-06-06

    The operating principle of the piezoelectric traveling wave rotary ultrasonic motor is based on two energy conversion processes: the generation of the stator traveling wave and the rectification of the stator movement through the stator-rotor contact mechanism. This paper presents a methodology to model the stator-rotor contact interface of these motors in detail. A contact algorithm is presented that couples a model of the stator, discretized with the finite volume method, with an analytical model of the rotor. The outputs of the proposed model are the normal and tangential force distributions produced at the stator-rotor contact interface, the contact length, the height and shape of the stator traveling wave, and the rotor speed. The torque-speed characteristic of the USR60 is calculated with the proposed model, and the results are compared against the real torque-speed characteristic of the motor. Good agreement between the results of the proposed model and the torque-speed characteristic of the USR60 was observed. Copyright © 2018 Elsevier B.V. All rights reserved.

  1. A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data

    PubMed Central

    Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence

    2013-01-01

    Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011

  2. Modeling the magnetoelectric effect in laminated composites using Hamilton’s principle

    NASA Astrophysics Data System (ADS)

    Zhang, Shengyao; Zhang, Ru; Jiang, Jiqing

    2018-01-01

    Mathematical modeling of the magnetoelectric (ME) effect has been established for some rectangular and disk laminate structures. However, these methods are difficult to apply in other cases, particularly for complex structures. In this work, a new method for the analysis of the ME effect is proposed using a generalized Hamilton’s principle, which is conveniently applicable to various laminate structures. As an example, the performance of the rectangular ME laminated composite is analyzed, and the equivalent circuit model for the laminate is obtained directly from the analysis. Experimental data are also obtained to compare with the theoretical calculations and to validate the new method. Compared with Dong’s method, the new method is more accurate and convenient. In particular, the equivalent circuit for the rectangular laminated composite can be obtained more easily by the proposed method, as it does not require the complex treatment used in Dong’s method.

  3. Transverse limited phase space model with Glauber geometry for high-energy nucleus-nucleus collisions

    NASA Astrophysics Data System (ADS)

    Huang, Ding Wei; Yen, Edward

    1989-08-01

    We propose a detailed model, combining the concepts from a partition temperature model and wounded nucleon model, to describe high-energy nucleus-nucleus collisions. One partition temperature is associated with collisions at a fixed wounded nucleon number. The (pseudo-) rapidity distributions are calculated and compared with experimental data. Predictions at higher energy are also presented.

  4. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile regression (LLQ). We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
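    A rough sketch of the two-stage idea follows, using the PyEMD package (assumed available as `EMD-signal`) for the decomposition and an ordinary local least-squares line per component in place of the local linear quantile fit, which is more involved; the horizon, window, and stand-in regression are all illustrative.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

def emd_forecast(prices, horizon=5, fit_window=30):
    """Decompose a price series into IMFs, extrapolate each component
    with a local linear fit, and sum the component forecasts."""
    imfs = EMD().emd(np.asarray(prices, dtype=float))
    t_fit = np.arange(fit_window)
    t_new = np.arange(fit_window, fit_window + horizon)
    forecast = np.zeros(horizon)
    for imf in imfs:
        slope, intercept = np.polyfit(t_fit, imf[-fit_window:], 1)
        forecast += slope * t_new + intercept
    return forecast
```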

  5. Development and Application of a Cohesive Sediment Transport Model in Coastal Louisiana

    NASA Astrophysics Data System (ADS)

    Sorourian, S.; Nistor, I.

    2017-12-01

    The Louisiana coast has suffered from rapid land loss due to the combined effects of an increasing rate of eustatic sea-level rise, insufficient riverine sediment input, and subsidence. The sediment in this region is dominated by cohesive sediments (up to 80% clay). This study presents a new model for calculating the suspended sediment concentration (SSC) of cohesive sediments. Several new concepts are incorporated into the proposed model, which is capable of estimating the spatial and temporal variation in the concentration of cohesive sediment. First, the model incorporates the effect of electrochemical forces between cohesive sediment particles. Second, the wave friction factor is expressed in terms of the median particle size diameter in order to enhance the accuracy of the estimation of bed shear stress. Third, the erosion rate of cohesive sediments is expressed in time-dependent form. Simulated SSC profiles are compared with field data collected from Vermilion Bay, Louisiana. The results of the proposed model agree well with the experimental data once a steady-state condition is reached. The new numerical model provides a better estimate of the suspended sediment concentration profile than the initial model developed by Mehta and Li (2003). Among the proposed developments, the formulation of a time-dependent erosion rate gives the most accurate results. Coupling the present model with the Finite-Volume, primitive-equation Community Ocean Model (FVCOM) would shed light on the fate of fine-grained sediments, helping to increase overall retention and restoration of the Louisiana coastal plain.

  6. Investigating the Mobility of Trilayer Graphene Nanoribbon in Nanoscale FETs

    NASA Astrophysics Data System (ADS)

    Rahmani, Meisam; Ghafoori Fard, Hassan; Ahmadi, Mohammad Taghi; Rahbarpour, Saeideh; Habibiyan, Hamidreza; Varmazyari, Vali; Rahmani, Komeil

    2017-10-01

    The aim of the present paper is to investigate the scaling behavior of charge carrier mobility, one of the most important characteristics for modeling nanoscale field-effect transistors (FETs). Many research groups in academia and industry are contributing to the model development and experimental identification of multi-layer graphene FET-based devices. The approach in the present work is to provide an analytical model for the carrier mobility of trilayer graphene nanoribbon (TGN) FETs. To do so, analytical models of TGN carrier velocity and ballistic conductance are first derived. A charge carrier mobility model with numerical solution is then derived analytically for the TGN FET, in which the dependence on carrier concentration, temperature, and channel length is highlighted. Moreover, the variation of band gap and gate voltage during device operation, and its effect on carrier mobility, is investigated. To evaluate nanoscale FET performance, the carrier mobility model is also used to obtain the I-V characteristics of the device. To verify its accuracy, the proposed analytical model for TGN mobility is compared with existing experimental data, and satisfactory agreement is reported for analogous ambient conditions. The proposed model is also compared with published data for single-layer and bi-layer graphene, and the results demonstrate significant insights into the importance of charge carrier mobility in high-performance TGN FETs. The work presented here is one step towards an applicable model for real-world nanoscale FETs.

  7. Analyzing gene expression time-courses based on multi-resolution shape mixture model.

    PubMed

    Li, Ying; He, Ye; Zhang, Yu

    2016-11-01

    Biological processes are dynamic molecular processes over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores the patterns of change of gene expression over time at different resolutions. The proposed multi-resolution shape mixture model algorithm is a probabilistic framework that offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm on yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods are evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm have stronger biological significance. In summary, a novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed; it provides a new perspective and an alternative tool for the visualization and analysis of time-course gene expression profiles. The R and MATLAB programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
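    As a loose sketch of the multi-resolution step, the code below extracts per-level wavelet energies from each time-course profile (a crude stand-in for the paper's multi-resolution fractal features) and clusters them with a Gaussian mixture in place of the shape mixture model; the data, wavelet, and cluster count are assumptions.

```python
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def multires_features(profiles, wavelet="haar", level=3):
    """Energy of the wavelet coefficients at each resolution level,
    used as a simple multi-resolution descriptor of profile shape."""
    feats = []
    for x in profiles:
        coeffs = pywt.wavedec(x, wavelet, level=level)
        feats.append([np.sum(c ** 2) for c in coeffs])
    return np.array(feats)

# profiles: (genes, timepoints) array of expression values (synthetic here)
profiles = np.random.default_rng(1).normal(size=(500, 16))
F = multires_features(profiles)
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(F)
```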

  8. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson's arms race model.

    PubMed

    Riaz, Faisal; Niazi, Muaz A

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and of copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical formulation of Richardson's arms race model has also been presented (a minimal sketch of the classic model follows below). The performance of the proposed social agent has been validated at two levels: first it has been simulated using NetLogo, a standard agent-based modeling tool, and then at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies, while practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirroring-neuron-based collision avoidance scheme.
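    For reference, Richardson's arms race model in its classic form is the coupled pair dx/dt = ky - ax + g, dy/dt = lx - by + h. The sketch below integrates it with forward Euler; the coefficients are illustrative (chosen so that ab > kl, which makes the mutual-reaction dynamics settle to an equilibrium), and the paper's tailored AV version is not reproduced.

```python
import numpy as np

def richardson(x0=0.0, y0=0.0, k=0.3, l=0.3, a=0.8, b=0.8, g=0.2, h=0.2,
               dt=0.01, steps=2000):
    """Classic Richardson arms-race ODEs, forward-Euler integration.

    x, y: mutual reaction levels of two agents (here, two AVs adapting to
    each other); k, l: reaction coefficients; a, b: restraint terms;
    g, h: constant 'grievance' forcing.
    """
    x, y = x0, y0
    traj = np.empty((steps, 2))
    for i in range(steps):
        dx = k * y - a * x + g
        dy = l * x - b * y + h
        x, y = x + dt * dx, y + dt * dy
        traj[i] = x, y
    return traj

print("long-run state:", richardson()[-1])   # approaches (0.4, 0.4)
```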

  9. Towards social autonomous vehicles: Efficient collision avoidance scheme using Richardson’s arms race model

    PubMed Central

    Niazi, Muaz A.

    2017-01-01

    This paper presents the concept of a social autonomous agent to conceptualize Autonomous Vehicles (AVs) that interact with other AVs using social manners similar to human behavior. The presented AVs also have the capability of predicting intentions, i.e. mentalizing, and of copying the actions of each other, i.e. mirroring. The Exploratory Agent Based Modeling (EABM) level of the Cognitive Agent Based Computing (CABC) framework has been utilized to design the proposed social agent. Furthermore, to emulate the functionality of the mentalizing and mirroring modules of the proposed social agent, a tailored mathematical formulation of Richardson’s arms race model has also been presented. The performance of the proposed social agent has been validated at two levels: first it has been simulated using NetLogo, a standard agent-based modeling tool, and then at a practical level using a prototype AV. The simulation results have confirmed that the proposed social agent-based collision avoidance strategy is 78.52% more efficient than a random-walk-based collision avoidance strategy in congested flock-like topologies, while practical results have confirmed that the proposed scheme can avoid rear-end and lateral collisions with an efficiency of 99.876%, compared with the existing state-of-the-art IEEE 802.11n-based mirroring-neuron-based collision avoidance scheme. PMID:29040294

  10. An Evaluation of Output Quality of Machine Translation (Padideh Software vs. Google Translate)

    ERIC Educational Resources Information Center

    Azer, Haniyeh Sadeghi; Aghayi, Mohammad Bagher

    2015-01-01

    This study aims to evaluate the translation quality of two machine translation systems in translating six different text-types, from English to Persian. The evaluation was based on criteria proposed by Van Slype (1979). The proposed model for evaluation is a black-box type, comparative and adequacy-oriented evaluation. To conduct the evaluation, a…

  11. Recommendations on presenting LHC searches for missing transverse energy signals using simplified s-channel models of dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boveia, Antonio; Buchmueller, Oliver; Busoni, Giorgio

    2016-03-14

    This document summarises the proposal of the LHC Dark Matter Working Group on how to present LHC results on s-channel simplified dark matter models and to compare them to direct (indirect) detection experiments.

  12. Compartmental and Data-Based Modeling of Cerebral Hemodynamics: Linear Analysis.

    PubMed

    Henley, B C; Shin, D C; Zhang, R; Marmarelis, V Z

    Compartmental and data-based modeling of cerebral hemodynamics are alternative approaches that utilize distinct model forms and have been employed in the quantitative study of cerebral hemodynamics. This paper examines the relation between a compartmental equivalent-circuit and a data-based input-output model of dynamic cerebral autoregulation (DCA) and CO2-vasomotor reactivity (DVR). The compartmental model is constructed as an equivalent-circuit utilizing putative first principles and previously proposed hypothesis-based models. The linear input-output dynamics of this compartmental model are compared with data-based estimates of the DCA-DVR process. This comparative study indicates that there are some qualitative similarities between the two-input compartmental model and experimental results.

  13. Flow and Transport of Fines in Dams and Embankments

    NASA Astrophysics Data System (ADS)

    Glascoe, L. G.; Ezzedine, S. M.; Kanarska, Y.; Lomov, I.; Antoun, T.; Woodson, S. C.; Hall, R. L.; Smith, J.

    2013-12-01

    Understanding the flow of fines in porous and fractured media is significant for industrial, environmental, geotechnical, and petroleum technologies, to name a few. Several models have been proposed to simulate the flow and transport of fines using single- or two-phase flow approaches, while other models rely on mobile and immobile transport approaches. However, to the best of the authors' knowledge, these modeling approaches have not been compared with one another to define their limitations and domains of validity. In the present study, several models describing the transport of fines in heterogeneous porous and fractured media are presented and compared with each other. Furthermore, we evaluate their performance on the same published experimental data sets. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was sponsored by the Department of Homeland Security (DHS), Science and Technology Directorate, Homeland Security Advanced Research Projects Agency (HSARPA).

  14. A prediction model of signal degradation in LMSS for urban areas

    NASA Technical Reports Server (NTRS)

    Matsudo, Takashi; Minamisono, Kenichi; Karasawa, Yoshio; Shiokawa, Takayasu

    1993-01-01

    A prediction model of signal degradation in a Land Mobile Satellite Service (LMSS) for urban areas is proposed. This model treats shadowing effects caused by buildings statistically and can predict a Cumulative Distribution Function (CDF) of signal diffraction losses in urban areas as a function of system parameters such as frequency and elevation angle and environmental parameters such as number of building stories and so on. In order to examine the validity of the model, we compared the percentage of locations where diffraction losses were smaller than 6 dB obtained by the CDF with satellite visibility measured by a radiometer. As a result, it was found that this proposed model is useful for estimating the feasibility of providing LMSS in urban areas.

  15. A Damage Model for the Simulation of Delamination in Advanced Composites under Variable-Mode Loading

    NASA Technical Reports Server (NTRS)

    Turon, A.; Camanho, P. P.; Costa, J.; Davila, C. G.

    2006-01-01

    A thermodynamically consistent damage model is proposed for the simulation of progressive delamination in composite materials under variable-mode ratio. The model is formulated in the context of Damage Mechanics. A novel constitutive equation is developed to model the initiation and propagation of delamination. A delamination initiation criterion is proposed to assure that the formulation can account for changes in the loading mode in a thermodynamically consistent way. The formulation accounts for crack closure effects to avoid interfacial penetration of two adjacent layers after complete decohesion. The model is implemented in a finite element formulation, and the numerical predictions are compared with experimental results obtained in both composite test specimens and structural components.

  16. A Self-Organizing Incremental Spatiotemporal Associative Memory Networks Model for Problems with Hidden State

    PubMed Central

    2016-01-01

    Identifying the hidden state is important for solving problems with hidden state. We prove that any deterministic partially observable Markov decision process (POMDP) can be represented by a minimal, looping hidden state transition model, and we propose a heuristic algorithm for constructing such a state transition model. A new spatiotemporal associative memory network (STAMN) is proposed to realize the minimal, looping hidden state transition model. The STAMN uses neuroactivity decay to realize short-term memory, connection weights between nodes to represent long-term memory, and presynaptic potentials with a synchronized activation mechanism to perform identification and recall simultaneously. Finally, we give empirical illustrations of the STAMN and compare the performance of the STAMN model with that of other methods. PMID:27891146

  17. Three-dimensional whole-brain perfusion quantification using pseudo-continuous arterial spin labeling MRI at multiple post-labeling delays: accounting for both arterial transit time and impulse response function.

    PubMed

    Qin, Qin; Huang, Alan J; Hua, Jun; Desmond, John E; Stevens, Robert D; van Zijl, Peter C M

    2014-02-01

    Measurement of the cerebral blood flow (CBF) with whole-brain coverage is challenging in terms of both acquisition and quantitative analysis. In order to fit arterial spin labeling-based perfusion kinetic curves, an empirical three-parameter model which characterizes the effective impulse response function (IRF) is introduced, which allows the determination of CBF, the arterial transit time (ATT) and T(1,eff). The accuracy and precision of the proposed model were compared with those of more complicated models with four or five parameters through Monte Carlo simulations. Pseudo-continuous arterial spin labeling images were acquired on a clinical 3-T scanner in 10 normal volunteers using a three-dimensional multi-shot gradient and spin echo scheme at multiple post-labeling delays to sample the kinetic curves. Voxel-wise fitting was performed using the three-parameter model and other models that contain two, four or five unknown parameters. For the two-parameter model, T(1,eff) values close to tissue and blood were assumed separately. Standard statistical analysis was conducted to compare these fitting models in various brain regions. The fitted results indicated that: (i) the estimated CBF values using the two-parameter model show appreciable dependence on the assumed T(1,eff) values; (ii) the proposed three-parameter model achieves the optimal balance between the goodness of fit and model complexity when compared among the models with explicit IRF fitting; (iii) both the two-parameter model using fixed blood T1 values for T(1,eff) and the three-parameter model provide reasonable fitting results. Using the proposed three-parameter model, the estimated CBF (46 ± 14 mL/100 g/min) and ATT (1.4 ± 0.3 s) values averaged from different brain regions are close to the literature reports; the estimated T(1,eff) values (1.9 ± 0.4 s) are higher than the tissue T1 values, possibly reflecting a contribution from the microvascular arterial blood compartment. Copyright © 2013 John Wiley & Sons, Ltd.
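    To make the voxel-wise fitting concrete, the sketch below fits a generic three-parameter kinetic curve (CBF, ATT, and an effective T1) to noisy samples at several post-labeling delays with scipy; the functional form is an illustrative stand-in, not the exact effective-IRF model of the paper, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def asl_signal(t, cbf, att, t1_eff):
    """Generic 3-parameter ASL kinetic curve: zero before the arterial
    transit time, then inflow weighted by an effective-T1 decay."""
    s = cbf * (1.0 - np.exp(-(t - att) / t1_eff)) * np.exp(-t / t1_eff)
    return np.where(t > att, s, 0.0)

plds = np.linspace(0.5, 3.5, 8)          # post-labeling delays (s)
rng = np.random.default_rng(2)
data = asl_signal(plds, 50.0, 1.4, 1.9) + rng.normal(0, 0.5, plds.size)

popt, _ = curve_fit(asl_signal, plds, data, p0=[40.0, 1.0, 1.5])
print("CBF, ATT, T1eff estimates:", popt)
```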

  18. A comparison of portfolio selection models via application on ISE 100 index data

    NASA Astrophysics Data System (ADS)

    Altun, Emrah; Tatlidil, Hüseyin

    2013-10-01

    The Markowitz model, a classical approach to the portfolio optimization problem, relies on two important assumptions: that expected returns are multivariate normally distributed and that the investor is risk-averse. However, this model has not been used extensively in finance. Empirical results show that it is very hard to solve large-scale portfolio optimization problems with the Mean-Variance (M-V) model. An alternative model, the Mean Absolute Deviation (MAD) model proposed by Konno and Yamazaki [7], has been used to remove most of the difficulties of the Markowitz Mean-Variance model. The MAD model does not need to assume that the rates of return are normally distributed and is based on linear programming (a minimal sketch of the MAD formulation is given below). Another alternative portfolio model is the Mean-Lower Semi-Absolute Deviation (M-LSAD) model, proposed by Speranza [3]. We compare these models to determine which gives the most appropriate solution for investors.
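    As promised above, a minimal sketch of the Konno-Yamazaki MAD model as a linear program, with auxiliary variables bounding the absolute deviations; the data and target-return setting are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def mad_portfolio(returns, target=0.0):
    """Konno-Yamazaki MAD portfolio as a linear program.

    returns: (T, n) historical return matrix. Variables: n weights plus T
    auxiliary deviations d_t >= |(r_t - mu) @ x|; minimizing the mean d_t
    minimizes the portfolio's mean absolute deviation.
    """
    T, n = returns.shape
    mu = returns.mean(axis=0)
    dev = returns - mu
    c = np.concatenate([np.zeros(n), np.ones(T) / T])
    # d_t >= (r_t - mu) @ x  and  d_t >= -(r_t - mu) @ x
    A_ub = np.block([[dev, -np.eye(T)],
                     [-dev, -np.eye(T)]])
    b_ub = np.zeros(2 * T)
    # expected portfolio return must reach the target (raise as needed)
    A_ub = np.vstack([A_ub, np.concatenate([-mu, np.zeros(T)])])
    b_ub = np.append(b_ub, -target)
    A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]  # fully invested
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + T))
    return res.x[:n]

weights = mad_portfolio(np.random.default_rng(3).normal(0.001, 0.02, (250, 8)))
```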

  19. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    PubMed

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. Because of its nonlinear framework, this algorithm does not share the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations. In addition, it has less computational complexity than the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises, as well as nonstationary real muscle artifact (MA) noise, over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs, where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" (MSEWPRD). The results revealed that our proposed algorithm had the lowest MSEWPRD for all noise types at low input SNRs. Therefore, the morphology and diagnostic information of ECG signals were much better conserved than with the EKF/EKS frameworks, especially in non-Gaussian nonstationary situations.

  20. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features. For this reason, they perform less effectively on models with similar global shapes but different local shapes. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structural feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between models. A system realizing this approach was built and tested on a database of 200 objects and achieves the expected results. The results show that the proposed algorithm effectively improves precision and recall.

  1. Multilayer perceptron neural network-based approach for modeling phycocyanin pigment concentrations: case study from lower Charles River buoy, USA.

    PubMed

    Heddam, Salim

    2016-09-01

    This paper proposes a multilayer perceptron neural network (MLPNN) to predict the phycocyanin (PC) pigment concentration using water quality variables as predictors. In the proposed model, four water quality variables (water temperature, dissolved oxygen, pH, and specific conductance) were selected as the inputs for the MLPNN model, and PC as the output. To demonstrate the capability and usefulness of the MLPNN model, a total of 15,849 measurements taken at 15-min intervals are used for the development of the model. The data were collected at the lower Charles River buoy and are available from the US Environmental Protection Agency (USEPA). For comparison purposes, a multiple linear regression (MLR) model, frequently used for predicting water quality variables in previous studies, is also built. The performances of the models are evaluated using a set of widely used statistical indices, and the predictions of the MLPNN and MLR models are compared with the measured data. The obtained results show that (i) all the proposed MLPNN models are more accurate than the MLR models and (ii) the results obtained are very promising and encouraging for the development of phycocyanin-predictive models.
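    A minimal sketch of this kind of four-input MLP regressor, using scikit-learn and synthetic stand-in data (the buoy measurements themselves are not reproduced here); the network size and the toy input-output relation are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X columns: water temperature, dissolved oxygen, pH, specific conductance
# y: phycocyanin concentration (synthetic stand-in relation)
rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] ** 2 + rng.normal(0, 0.1, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                 random_state=0))
mlp.fit(X_tr, y_tr)
print("R^2 on held-out data:", mlp.score(X_te, y_te))
```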

  2. Crash data modeling with a generalized estimator.

    PubMed

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-08-01

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display the characteristics of over-dispersion. On occasion, however, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking

    PubMed Central

    Liu, Hua; Wu, Wen

    2017-01-01

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm uses a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm that utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching when handling maneuvering models, compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF). PMID:28608843

  4. Interacting Multiple Model (IMM) Fifth-Degree Spherical Simplex-Radial Cubature Kalman Filter for Maneuvering Target Tracking.

    PubMed

    Liu, Hua; Wu, Wen

    2017-06-13

    For improving the tracking accuracy and model switching speed of maneuvering target tracking in nonlinear systems, a new algorithm named the interacting multiple model fifth-degree spherical simplex-radial cubature Kalman filter (IMM5thSSRCKF) is proposed in this paper. The new algorithm is a combination of the interacting multiple model (IMM) filter and the fifth-degree spherical simplex-radial cubature Kalman filter (5thSSRCKF). The proposed algorithm uses a Markov process to describe the switching probability among the models, and uses the 5thSSRCKF to deal with the state estimation of each model. The 5thSSRCKF is an improved filter algorithm that utilizes the fifth-degree spherical simplex-radial rule to improve the filtering accuracy. Finally, the tracking performance of the IMM5thSSRCKF is evaluated by simulation in a typical maneuvering target tracking scenario. Simulation results show that the proposed algorithm has better tracking performance and quicker model switching when handling maneuvering models, compared with the interacting multiple model unscented Kalman filter (IMMUKF), the interacting multiple model cubature Kalman filter (IMMCKF) and the interacting multiple model fifth-degree cubature Kalman filter (IMM5thCKF).

  5. Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.

    PubMed

    Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui

    2017-01-01

    To develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as the KalBHT hybrid method, introduces the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a regularization that is self-adaptive both spatially and temporally with changes in temperature. Further, to decrease the sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment, and dynamic in-vivo model validations on the liver and a thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results than the hybrid algorithm, without requiring λ to be adjusted to a proper value. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the estimated temperature also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to susceptibility problems surrounding the hot spot, and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE at the hot spot of the proposed model is reduced to about 50% of that of the original hybrid model, and the convergence time becomes only about one fifth of the hybrid model's. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.

  6. Modeling of Dual Gate Material Hetero-dielectric Strained PNPN TFET for Improved ON Current

    NASA Astrophysics Data System (ADS)

    Kumari, Tripty; Saha, Priyanka; Dash, Dinesh Kumar; Sarkar, Subir Kumar

    2018-01-01

    The tunnel field-effect transistor (TFET) is considered a promising alternative device for future low-power VLSI circuits due to its steep subthreshold slope, low leakage current, and efficient performance at low supply voltage. However, the main challenge in realizing TFETs for wide-scale applications is their low ON current. To overcome this, a dual gate material with the concept of dielectric engineering has been incorporated into the conventional TFET structure to tune the tunneling width at the source-channel interface, allowing a significant flow of carriers. In addition, an N+ pocket is implanted at the source-channel junction of the proposed structure, and the effect of strain is added to explore the performance of the model in the nanoscale regime. All these added features upgrade the device characteristics, leading to higher ON current, low leakage, and low threshold voltage. The present work derives the surface potential, the electric field expression, and the drain current by solving the 2D Poisson equation under different boundary conditions. A comparative analysis of the proposed model with the conventional TFET has been carried out to establish the superiority of the proposed structure. All analytical results have been compared with the results obtained in the SILVACO ATLAS device simulator to establish the accuracy of the derived analytical model.

  7. Statistical Methods for Generalized Linear Models with Covariates Subject to Detection Limits.

    PubMed

    Bernhardt, Paul W; Wang, Huixia J; Zhang, Daowen

    2015-05-01

    Censored observations are a common occurrence in biomedical data sets. Although a large amount of research has been devoted to estimation and inference for data with censored responses, very little research has focused on proper statistical procedures when predictors are censored. In this paper, we consider statistical methods for dealing with multiple predictors subject to detection limits within the context of generalized linear models. We investigate and adapt several conventional methods and develop a new multiple imputation approach for analyzing data sets with predictors censored due to detection limits. We establish the consistency and asymptotic normality of the proposed multiple imputation estimator and suggest a computationally simple and consistent variance estimator. We also demonstrate that the conditional mean imputation method often leads to inconsistent estimates in generalized linear models, while several other methods are either computationally intensive or lead to parameter estimates that are biased or more variable compared to the proposed multiple imputation estimator. In an extensive simulation study, we assess the bias and variability of different approaches within the context of a logistic regression model and compare variance estimation methods for the proposed multiple imputation estimator. Lastly, we apply several methods to analyze the data set from a recently-conducted GenIMS study.

  8. Fuzzy Logic-Based Guaranteed Lifetime Protocol for Real-Time Wireless Sensor Networks.

    PubMed

    Shah, Babar; Iqbal, Farkhund; Abbas, Ali; Kim, Ki-Il

    2015-08-18

    Few techniques for guaranteeing a network lifetime have been proposed despite the great impact of such guarantees on network management. Moreover, since the existing schemes are mostly dependent on the combination of disparate parameters, they do not provide additional services, such as real-time communications and balanced energy consumption among sensor nodes; thus, the adaptability problems remain unresolved among nodes in wireless sensor networks (WSNs). To solve these problems, we propose a novel fuzzy logic model to provide real-time communication within a guaranteed WSN lifetime. The proposed fuzzy logic controller accepts the input descriptors energy, time and velocity to determine each node's role for the next duration and the next-hop relay node for real-time packets. Through the simulation results, we verified that both the guaranteed network lifetime and real-time delivery are efficiently ensured by the new fuzzy logic model. In more detail, the two performance metrics above are improved by up to 8% compared with our previous work and by 14% compared with existing schemes, respectively.

  9. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    PubMed

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
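    The "lossy plus residual" principle is easy to demonstrate in miniature: compress with a decomposition-based lossy layer, then quantize the residual coarsely enough that the reconstruction error never exceeds a chosen bound. The sketch below uses a plain truncated SVD as the lossy layer, standing in for the paper's matrix/tensor coders, with synthetic data; the quantized residual would then go to an arithmetic coder.

```python
import numpy as np

def near_lossless_compress(X, rank=8, max_abs_err=1.0):
    """Truncated SVD as the lossy layer, plus a uniformly quantized
    residual that bounds the per-sample reconstruction error."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    lossy = U[:, :rank] * s[:rank] @ Vt[:rank]
    residual = X - lossy
    step = 2.0 * max_abs_err
    q = np.round(residual / step).astype(np.int32)   # entropy-code these
    recon = lossy + q * step
    assert np.max(np.abs(X - recon)) <= max_abs_err + 1e-9
    return lossy, q

eeg = np.cumsum(np.random.default_rng(4).normal(size=(32, 1024)), axis=1)
near_lossless_compress(eeg)   # 32-channel synthetic stand-in for MC-EEG
```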

  10. Dissolution curve comparisons through the F(2) parameter, a Bayesian extension of the f(2) statistic.

    PubMed

    Novick, Steven; Shen, Yan; Yang, Harry; Peterson, John; LeBlond, Dave; Altan, Stan

    2015-01-01

    Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is to justify a biowaiver for post-approval changes, which requires establishing equivalence between the new and old product. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed in relation to a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that our Bayesian approach is comparable to, or in many cases superior to, the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
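    For orientation, the f2 statistic that the F2 parameter extends is computed as below (a standard formula; the Bayesian F2 procedure itself is not reproduced here, and the profiles are made up).

```python
import numpy as np

def f2(reference, test):
    """Similarity factor f2 for two dissolution profiles sampled at the
    same time points (percent dissolved); f2 >= 50 is the usual
    similarity criterion."""
    r, t = np.asarray(reference, float), np.asarray(test, float)
    msd = np.mean((r - t) ** 2)           # mean squared difference
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

ref = [18, 39, 57, 72, 84, 92]
new = [15, 35, 55, 70, 83, 91]
print(round(f2(ref, new), 1))             # ~79: profiles judged similar
```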

  11. Fuzzy Logic-Based Guaranteed Lifetime Protocol for Real-Time Wireless Sensor Networks

    PubMed Central

    Shah, Babar; Iqbal, Farkhund; Abbas, Ali; Kim, Ki-Il

    2015-01-01

    Few techniques for guaranteeing a network lifetime have been proposed despite the great impact of such guarantees on network management. Moreover, since the existing schemes are mostly dependent on the combination of disparate parameters, they do not provide additional services, such as real-time communications and balanced energy consumption among sensor nodes; thus, the adaptability problems remain unresolved among nodes in wireless sensor networks (WSNs). To solve these problems, we propose a novel fuzzy logic model to provide real-time communication within a guaranteed WSN lifetime. The proposed fuzzy logic controller accepts the input descriptors energy, time and velocity to determine each node’s role for the next duration and the next-hop relay node for real-time packets. Through the simulation results, we verified that both the guaranteed network lifetime and real-time delivery are efficiently ensured by the new fuzzy logic model. In more detail, the two performance metrics above are improved by up to 8% compared with our previous work and by 14% compared with existing schemes, respectively. PMID:26295238

  12. Feature Screening in Ultrahigh Dimensional Cox's Model.

    PubMed

    Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne

    Survival data with ultrahigh dimensional covariates, such as genetic markers, have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010; Zhao and Li, 2012) in that it is based on the joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response, without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish its ascent property. We further prove that the proposed procedure possesses the sure screening property: with probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulations to evaluate the finite-sample performance of the proposed procedure and to compare it with existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.

  13. Dynamics of a multimode semiconductor laser with optical feedback

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koryukin, I. V.

    A new model of a multi-longitudinal-mode semiconductor laser with weak optical feedback is proposed. This model generalizes the well-known Tang-Statz-deMars equations, which are derived from first principles and adequately describe solid-state lasers, to a semiconductor active medium. Steady states of the model and the spectrum of relaxation oscillations are found, and the laser dynamics in the chaotic regime of low-frequency intensity fluctuations is investigated. It is established that the dynamic properties of the proposed model depend mainly on carrier diffusion, which controls mode-mode coupling in the active medium via the spread of spatial inversion gratings. The results obtained are compared with the predictions of previous semiphenomenological models, and the scope of applicability of these models is determined.

  14. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions.

    PubMed

    Shareef, Hussain; Mutlag, Ammar Hussein; Mohamed, Azah

    2017-01-01

    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady state, and the dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the accurate maximum power point. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate its accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neuro-fuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland-Altman test with more than 95 percent acceptability.
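    A minimal sketch of the regression at the heart of such a tracker: a random forest mapping the two sensor readings (irradiance, module temperature) to the maximum-power-point voltage, trained here on a toy PV relation rather than the paper's 300,000-sample SIMULINK dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
G = rng.uniform(100, 1000, 5000)          # irradiance, W/m^2
T = rng.uniform(10, 60, 5000)             # module temperature, deg C
v_mpp = 30.0 + 2.0 * np.log(G / 1000.0) - 0.12 * (T - 25.0)  # toy PV law

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(np.column_stack([G, T]), v_mpp)

# At runtime the two high-speed sensors feed the model, and its output
# becomes the converter's operating-voltage reference.
print(rf.predict([[800.0, 35.0]]))
```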

  15. Random Forest-Based Approach for Maximum Power Point Tracking of Photovoltaic Systems Operating under Actual Environmental Conditions

    PubMed Central

    Shareef, Hussain; Mohamed, Azah

    2017-01-01

    Many maximum power point tracking (MPPT) algorithms have been developed in recent years to maximize the produced PV energy. These algorithms are not sufficiently robust because of fast-changing environmental conditions, efficiency, accuracy at steady state, and the dynamics of the tracking algorithm. Thus, this paper proposes a new random forest (RF) model to improve MPPT performance. The RF model has the ability to capture the nonlinear association of patterns between predictors, such as irradiance and temperature, to determine the accurate maximum power point. An RF-based tracker is designed for 25 SolarTIFSTF-120P6 PV modules, with a peak capacity of 3 kW, using two high-speed sensors. For this purpose, a complete PV system is modeled using 300,000 data samples and simulated using the MATLAB/SIMULINK package. The proposed RF-based MPPT is then tested under actual environmental conditions for 24 days to validate its accuracy and dynamic response. The response of the RF-based MPPT model is also compared with that of the artificial neural network and adaptive neuro-fuzzy inference system algorithms for further validation. The results show that the proposed MPPT technique gives a significant improvement compared with the other techniques. In addition, the RF model passes the Bland–Altman test with more than 95 percent acceptability. PMID:28702051

  16. Nonlinear Model Reduction in Power Systems by Balancing of Empirical Controllability and Observability Covariances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Wang, Jianhui; Liu, Hui

    In this paper, nonlinear model reduction for power systems is performed by the balancing of empirical controllability and observability covariances that are calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is directly dealt with as a nonlinear system. A transformation is found to balance the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that by using the proposed model reduction the calculation efficiency can be greatly improved; at the same time, the obtained state trajectories are close to those for directly simulating the whole system or partitioning the system while not performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method can guarantee higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.

  17. Prediction of frozen food properties during freezing using product composition.

    PubMed

    Boonsupthip, W; Heldman, D R

    2007-06-01

    Frozen water fraction (FWF), as a function of temperature, is an important parameter for use in the design of food freezing processes. An FWF-prediction model, based on the concentrations and molecular weights of specific product components, has been developed. Published food composition data were used to determine the identity and composition of key components. The model proposed in this investigation has been verified using published experimental FWF data and initial freezing temperature data, and by comparison with the outputs of previously published models. It was found that the food components with significant influence on the freezing temperature depression of food products are low-molecular-weight water-soluble compounds with a molality of 50 micromol per 100 g of food or higher. Based on an analysis of 200 high-moisture food products, nearly 45% of the experimental initial freezing temperature data were within an absolute difference (AD) of +/- 0.15 degrees C and a standard error (SE) of +/- 0.65 degrees C when compared to values predicted by the proposed model. The predicted relationship between temperature and FWF for all analyzed food products provided close agreement with experimental data (+/- 0.06 SE). The proposed model provided similar prediction capability for high- and intermediate-moisture food products. In addition, the proposed model provided statistically better prediction of initial freezing temperature and FWF than previously published models.
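    A deliberately simplified illustration of the underlying idea (not the paper's full model): freezing-point depression ties the unfrozen water's molality to temperature, so as ice forms and solutes concentrate, a mass balance yields the frozen fraction at each temperature.

```python
import numpy as np

KF = 1.86  # cryoscopic constant of water, K*kg/mol

def frozen_water_fraction(water_kg, solute_mol, temps_c):
    """Idealized FWF curve: solutes concentrate in the unfrozen water
    until its depressed freezing point equals the product temperature."""
    t_initial = -KF * solute_mol / water_kg   # initial freezing temp, deg C
    fwf = np.zeros_like(temps_c, dtype=float)
    below = temps_c < t_initial
    # unfrozen water mass m_u satisfying -KF * (solute_mol / m_u) = T
    m_unfrozen = -KF * solute_mol / temps_c[below]
    fwf[below] = 1.0 - m_unfrozen / water_kg
    return t_initial, fwf

temps = np.array([-0.5, -1.0, -2.0, -5.0, -10.0, -20.0])
t0, fwf = frozen_water_fraction(0.85, 0.12, temps)   # per 1 kg of product
```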

  18. On nonlocally interacting metrics, and a simple proposal for cosmic acceleration

    NASA Astrophysics Data System (ADS)

    Vardanyan, Valeri; Akrami, Yashar; Amendola, Luca; Silvestri, Alessandra

    2018-03-01

    We propose a simple, nonlocal modification to general relativity (GR) on large scales, which provides a model of late-time cosmic acceleration in the absence of the cosmological constant and with the same number of free parameters as in standard cosmology. The model is motivated by adding to the gravity sector an extra spin-2 field interacting nonlocally with the physical metric coupled to matter. The form of the nonlocal interaction is inspired by the simplest form of the Deser-Woodard (DW) model, αR(1/□)R, with one of the Ricci scalars replaced by a constant m², so that gravity is modified in the infrared by adding a simple term of the form m²(1/□)R to the Einstein-Hilbert term. We study cosmic expansion histories, and demonstrate that the new model can provide background expansions consistent with observations if m is of the order of the Hubble expansion rate today, in contrast to the simple DW model, which has no viable cosmology. The model is best fit by w₀ ≈ -1.075 and wₐ ≈ 0.045. We also compare the cosmology of the model to that of the Maggiore and Mancarella (MM) model, m²R(1/□²)R, and demonstrate that the viable cosmic histories follow the standard-model evolution more closely than in the MM model. We further demonstrate that the proposed model possesses the same number of physical degrees of freedom as GR. Finally, we discuss the appearance of ghosts in the local formulation of the model, and argue that they are unphysical and harmless to the theory, keeping the physical degrees of freedom healthy.

  19. Stochastic Multi-Timescale Power System Operations With Variable Wind Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hongyu; Krad, Ibrahim; Florita, Anthony

    This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.

  20. A variable capacitance based modeling and power capability predicting method for ultracapacitor

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Wang, Yujie; Chen, Zonghai; Ling, Qiang

    2018-01-01

    Accurate modeling and power capability prediction methods for ultracapacitors are of great significance in the management and application of lithium-ion battery/ultracapacitor hybrid energy storage systems. To overcome the simulation error arising from a constant-capacitance model, an improved ultracapacitor model based on variable capacitance is proposed, in which the main capacitance varies with voltage according to a piecewise linear function. A novel state-of-charge calculation approach is developed accordingly. After that, a multi-constraint power capability prediction is developed for the ultracapacitor, in which a Kalman-filter-based state observer is designed to track the ultracapacitor's real-time behavior. Finally, experimental results verify the proposed methods. The accuracy of the proposed model is verified by terminal voltage simulation results at different temperatures, and the effectiveness of the designed observer is proved under various test conditions. Additionally, the power capability prediction results for different time scales and temperatures are compared to study their effects on the ultracapacitor's power capability.
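    To make the variable-capacitance idea concrete, here is a hedged sketch with a single linear capacitance-voltage piece (one segment of the piecewise model) and illustrative parameter values: the stored charge is the integral of C(v), and state of charge follows by normalizing against the full-voltage charge.

```python
def charge_stored(v, c0=270.0, k=190.0):
    """Charge on an ultracapacitor whose main capacitance rises linearly
    with voltage, C(v) = c0 + k*v, so Q(v) = c0*v + 0.5*k*v**2."""
    return c0 * v + 0.5 * k * v ** 2

def soc(v, v_max=2.7):
    """State of charge: stored charge relative to the full-voltage charge."""
    return charge_stored(v) / charge_stored(v_max)

for v in (1.0, 2.0, 2.7):
    print(f"V = {v:.1f} V  ->  SOC = {soc(v):.2f}")
```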

  1. The comparative evaluation of expanded national immunization policies in Korea using an analytic hierarchy process.

    PubMed

    Shin, Taeksoo; Kim, Chun-Bae; Ahn, Yang-Heui; Kim, Hyo-Youl; Cha, Byung Ho; Uh, Young; Lee, Joo-Heon; Hyun, Sook-Jung; Lee, Dong-Han; Go, Un-Yeong

    2009-01-29

    The purpose of this paper is to propose new evaluation criteria and an analytic hierarchy process (AHP) model to assess the expanded national immunization programs (ENIPs) and to evaluate two alternative health care policies: under one, private clinics and hospitals would offer free vaccination services to children; under the other, public health centers would offer these free vaccination services. Our model to evaluate the ENIPs was developed using brainstorming, Delphi techniques, and the AHP. We first used brainstorming and Delphi techniques, as well as literature reviews, to determine 25 criteria with which to evaluate the national immunization policy; we then proposed a hierarchical structure for the AHP model to assess ENIPs. By applying the proposed AHP model to the assessment of ENIPs for Korean immunization policies, we show that free vaccination services should be provided by private clinics and hospitals rather than public health centers.
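    The AHP machinery behind such an evaluation reduces, for each pairwise-comparison matrix, to a principal-eigenvector computation plus a consistency check; the sketch below shows this standard procedure on a hypothetical 3-criterion matrix (the paper's actual 25-criterion hierarchy is not reproduced).

```python
import numpy as np

RI = {3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random consistency indices

def ahp_priorities(pairwise):
    """Priority weights and consistency ratio from a pairwise-comparison
    matrix via the principal eigenvector (standard AHP procedure)."""
    A = np.asarray(pairwise, float)
    vals, vecs = np.linalg.eig(A)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[i].real - n) / (n - 1)         # consistency index
    return w, ci / RI[n]                      # CR < 0.1 is acceptable

A = [[1, 3, 5],          # hypothetical comparisons of three criteria
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_priorities(A)
print("weights:", np.round(w, 3), " CR:", round(cr, 3))
```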

  2. Improved Neural Networks with Random Weights for Short-Term Load Forecasting

    PubMed Central

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on improved neural networks with random weights (INNRW). The key is to introduce a weighting technique on the inputs of the model and to use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) are applied to approximate the nonlinear function between the selected inputs and the daily maximum load, owing to their fast learning speed and good generalization performance. In an application to daily load data from Dalian, the results of the proposed INNRW are compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting. PMID:26629825

  3. Improved Neural Networks with Random Weights for Short-Term Load Forecasting.

    PubMed

    Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo

    2015-01-01

    An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on improved neural networks with random weights (INNRW). The key is to introduce a weighting technique for the inputs of the model and to use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural network with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load, owing to its fast learning speed and good generalization performance. In an application to the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting.

  4. A novel application of artificial neural network for wind speed estimation

    NASA Astrophysics Data System (ADS)

    Fang, Da; Wang, Jianzhou

    2017-05-01

    Providing accurate multi-step wind speed estimation models is of increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANNs). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach uses one of the ANNs to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the predictive improvements of the hybrid models compared with a single ANN and the typical forecasting method. To give sufficient cases for the study, four observation sites in Western China with monthly average wind speed data for four given years were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.

  5. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    PubMed

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one, in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean and then move forward to a generalized partial linear model, allowing for nonparametric covariate effects by using a regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE), and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  6. CuBe: parametric modeling of 3D foveal shape using cubic Bézier

    PubMed Central

    Yadav, Sunil Kumar; Motamedi, Seyedamirhosein; Oberwahrenbrock, Timm; Oertel, Frederike Cosima; Polthier, Konrad; Paul, Friedemann; Kadas, Ella Maria; Brandt, Alexander U.

    2017-01-01

    Optical coherence tomography (OCT) allows three-dimensional (3D) imaging of the retina, and is commonly used for assessing pathological changes of fovea and macula in many diseases. Many neuroinflammatory conditions are known to cause modifications to the fovea shape. In this paper, we propose a method for parametric modeling of the foveal shape. Our method exploits invariant features of the macula from OCT data and applies a cubic Bézier polynomial along with a least square optimization to produce a best fit parametric model of the fovea. Additionally, we provide several parameters of the foveal shape based on the proposed 3D parametric modeling. Our quantitative and visual results show that the proposed model is not only able to reconstruct important features from the foveal shape, but also produces less error compared to the state-of-the-art methods. Finally, we apply the model in a comparison of healthy control eyes and eyes from patients with neuroinflammatory central nervous system disorders and optic neuritis, and show that several derived model parameters show significant differences between the two groups. PMID:28966857

  7. Hysteresis modeling of magnetic shape memory alloy actuator based on Krasnosel'skii-Pokrovskii model.

    PubMed

    Zhou, Miaolei; Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. Simulation results for both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision.

  8. Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model

    PubMed Central

    Wang, Shoubin; Gao, Wei

    2013-01-01

    As a new type of intelligent material, magnetic shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator restricts further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm, respectively. Simulation results for both identification algorithms demonstrate that the proposed modeling approach can establish an effective and accurate hysteresis model for the MSMA actuator, providing a foundation for improving its control precision. PMID:23737730

  9. An Odds Ratio Approach for Detecting DDF under the Nested Logit Modeling Framework

    ERIC Educational Resources Information Center

    Terzi, Ragip; Suh, Youngsuk

    2015-01-01

    An odds ratio approach (ORA) under the framework of a nested logit model was proposed for evaluating differential distractor functioning (DDF) in multiple-choice items and was compared with an existing ORA developed under the nominal response model. The performances of the two ORAs for detecting DDF were investigated through an extensive…

  10. Distributed plug-and-play optimal generator and load control for power system frequency regulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Changhong; Mallada, Enrique; Low, Steven H.

    A distributed control scheme, which can be implemented on generators and controllable loads in a plug-and-play manner, is proposed for power system frequency regulation. The proposed scheme is based on local measurements, local computation, and neighborhood information exchanges over a communication network with an arbitrary (but connected) topology. In the event of a sudden change in generation or load, the proposed scheme can restore the nominal frequency and the reference inter-area power flows, while minimizing the total cost of control for participating generators and loads. Power network stability under the proposed control is proved with a relatively realistic model which includes nonlinear power flow and a generic (potentially nonlinear or high-order) turbine-governor model, and further with first- and second-order turbine-governor models as special cases. Finally, in simulations, the proposed control scheme shows a comparable performance to the existing automatic generation control (AGC) when implemented only on the generator side, and demonstrates better dynamic characteristics than AGC when each scheme is implemented on both generators and controllable loads. Simulation results also show robustness of the proposed scheme to communication link failure.

  11. Distributed plug-and-play optimal generator and load control for power system frequency regulation

    DOE PAGES

    Zhao, Changhong; Mallada, Enrique; Low, Steven H.; ...

    2018-03-14

    A distributed control scheme, which can be implemented on generators and controllable loads in a plug-and-play manner, is proposed for power system frequency regulation. The proposed scheme is based on local measurements, local computation, and neighborhood information exchanges over a communication network with an arbitrary (but connected) topology. In the event of a sudden change in generation or load, the proposed scheme can restore the nominal frequency and the reference inter-area power flows, while minimizing the total cost of control for participating generators and loads. Power network stability under the proposed control is proved with a relatively realistic model which includes nonlinear power flow and a generic (potentially nonlinear or high-order) turbine-governor model, and further with first- and second-order turbine-governor models as special cases. Finally, in simulations, the proposed control scheme shows a comparable performance to the existing automatic generation control (AGC) when implemented only on the generator side, and demonstrates better dynamic characteristics than AGC when each scheme is implemented on both generators and controllable loads. Simulation results also show robustness of the proposed scheme to communication link failure.

  12. Anion exchange membrane fuel cell modelling

    NASA Astrophysics Data System (ADS)

    Fragiacomo, P.; Astorino, E.; Chippari, G.; De Lorenzo, G.; Czarnetzki, W. T.; Schneider, W.

    2018-04-01

    A parametric model predicting the performance of a solid polymer electrolyte anion exchange membrane fuel cell (AEMFC) has been developed in the Matlab environment, based on interrelated electrical and thermal models. The proposed electrical model describes the AEMFC open-circuit output voltage and the irreversible voltage losses, along with a mass balance, while the thermal model is based on an energy balance. The proposed model of the AEMFC stack estimates its dynamic behaviour, in particular the operating temperature variation for different discharge current values. The results for the theoretical fuel cell (FC) stack are reported and analysed in order to highlight the FC performance and how it varies with changes in parameters such as temperature and pressure. Both the electrical and thermal FC models were validated by comparing the model results with experimental data and with the results of other models found in the literature.

  13. Comparative Risk Predictions of Second Cancers After Carbon-Ion Therapy Versus Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eley, John G., E-mail: jeley@som.umaryland.edu; University of Texas Graduate School of Biomedical Sciences, Houston, Texas; Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland

    Purpose: This work proposes a theoretical framework that enables comparative risk predictions for second cancer incidence after particle beam therapy for different ion species for individual patients, accounting for differences in relative biological effectiveness (RBE) for the competing processes of tumor initiation and cell inactivation. Our working hypothesis was that use of carbon-ion therapy instead of proton therapy would show a difference in the predicted risk of second cancer incidence in the breast for a sample of Hodgkin lymphoma (HL) patients. Methods and Materials: We generated biologic treatment plans and calculated relative predicted risks of second cancer in the breast by using two proposed methods: a full model derived from the linear quadratic model and a simpler linear-no-threshold model. Results: For our reference calculation, we found the ratio of the predicted risk of breast cancer incidence for the carbon-ion plans to that for the proton plan to be 0.75 ± 0.07, but not significantly smaller than 1 (P=.180). Conclusions: Our findings suggest that second cancer risks are, on average, comparable between proton therapy and carbon-ion therapy.

  14. Comparative Risk Predictions of Second Cancers After Carbon-Ion Therapy Versus Proton Therapy.

    PubMed

    Eley, John G; Friedrich, Thomas; Homann, Kenneth L; Howell, Rebecca M; Scholz, Michael; Durante, Marco; Newhauser, Wayne D

    2016-05-01

    This work proposes a theoretical framework that enables comparative risk predictions for second cancer incidence after particle beam therapy for different ion species for individual patients, accounting for differences in relative biological effectiveness (RBE) for the competing processes of tumor initiation and cell inactivation. Our working hypothesis was that use of carbon-ion therapy instead of proton therapy would show a difference in the predicted risk of second cancer incidence in the breast for a sample of Hodgkin lymphoma (HL) patients. We generated biologic treatment plans and calculated relative predicted risks of second cancer in the breast by using two proposed methods: a full model derived from the linear quadratic model and a simpler linear-no-threshold model. For our reference calculation, we found the ratio of the predicted risk of breast cancer incidence for the carbon-ion plans to that for the proton plan to be 0.75 ± 0.07, but not significantly smaller than 1 (P=.180). Our findings suggest that second cancer risks are, on average, comparable between proton therapy and carbon-ion therapy. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Spread of large LNG pools on the sea.

    PubMed

    Fay, J A

    2007-02-20

    A review of the standard model of LNG pool spreading on water, comparing it with the model and experiments on oil pool spread from which the LNG model is extrapolated, raises questions about the validity of the former as applied to spills from marine tankers. These questions arise from the differences in fluid density ratios, in the multi-dimensional flow at the pool edge, in the effects of LNG pool boiling at the LNG-water interface, and in the model and experimental initial conditions compared with the inflow conditions of a marine tanker spill. An alternate supercritical flow model is proposed that avoids these difficulties; it predicts a significant increase in the maximum pool radius compared with the standard model and is partially corroborated by tests of LNG pool fires on water. Wind-driven ocean wave interaction has little effect on either spread model.

  16. Performance Enhancement for a GPS Vector-Tracking Loop Utilizing an Adaptive Iterated Extended Kalman Filter

    PubMed Central

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-01-01

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error and, based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude, and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9%, and 54.6% in the east, north, and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3%, and 35.7% in the east, north, and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5%, and 30.7% in the east, north, and up directions, respectively. PMID:25502124
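
    The iterated part of an IEKF is a Gauss-Newton style relinearization of the measurement update. Below is a generic sketch of that update with a toy range measurement; the paper's adaptive noise-statistics estimator is omitted, and all names and numbers are illustrative assumptions.

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, n_iter=5):
    """Iterated EKF measurement update: relinearize the measurement model
    h() around the refined estimate on each pass (Gauss-Newton style)."""
    x_i = x.copy()
    for _ in range(n_iter):
        H = H_jac(x_i)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_up = (np.eye(len(x)) - K @ H) @ P   # covariance with final gain
    return x_i, P_up

# Toy usage: refine a 2-D position estimate from one range measurement.
h = lambda x: np.array([np.hypot(x[0], x[1])])                  # nonlinear h
H_jac = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])
x0 = np.array([3.0, 4.0])
P0 = np.eye(2) * 0.5
z = np.array([5.2])
R = np.array([[0.01]])
print(iekf_update(x0, P0, z, h, H_jac, R))
```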

  17. Performance enhancement for a GPS vector-tracking loop utilizing an adaptive iterated extended Kalman filter.

    PubMed

    Chen, Xiyuan; Wang, Xiying; Xu, Yuan

    2014-12-09

    This paper deals with the problem of state estimation for the vector-tracking loop of a software-defined Global Positioning System (GPS) receiver. For a nonlinear system with model error and white Gaussian noise, a noise statistics estimator is used to estimate the model error and, based on this, a modified iterated extended Kalman filter (IEKF) named the adaptive iterated Kalman filter (AIEKF) is proposed. A vector-tracking GPS receiver utilizing the AIEKF is implemented to evaluate the performance of the proposed method. Road tests show that the proposed method has an obvious accuracy advantage over the IEKF and the adaptive extended Kalman filter (AEKF) in position determination. The results show that the proposed method effectively reduces the root-mean-square error (RMSE) of position (including longitude, latitude, and altitude). Compared with the EKF, the position RMSE values of the AIEKF are reduced by about 45.1%, 40.9%, and 54.6% in the east, north, and up directions, respectively. Compared with the IEKF, the position RMSE values of the AIEKF are reduced by about 25.7%, 19.3%, and 35.7% in the east, north, and up directions, respectively. Compared with the AEKF, the position RMSE values of the AIEKF are reduced by about 21.6%, 15.5%, and 30.7% in the east, north, and up directions, respectively.

  18. Development of sustainable precision farming systems for swine: estimating real-time individual amino acid requirements in growing-finishing pigs.

    PubMed

    Hauschild, L; Lovatto, P A; Pomar, J; Pomar, C

    2012-07-01

    The objective of this study was to develop and evaluate a mathematical model used to estimate the daily amino acid requirements of individual growing-finishing pigs. The model includes empirical and mechanistic components. The empirical component estimates daily feed intake (DFI), BW, and daily gain (DG) based on individual pig information collected in real time. Based on the DFI, BW, and DG estimates, the mechanistic component uses classic factorial equations to estimate the optimal concentration of amino acids that must be offered to each pig to meet its requirements. The model was evaluated with data from a study that investigated the effect of feeding pigs with a 3-phase or daily multiphase system. The DFI and BW values measured in this study were compared with those estimated by the empirical component of the model. The coherence of the values estimated by the mechanistic component was evaluated by analyzing whether they followed a normal pattern of requirements. Lastly, the proposed model was evaluated by comparing its estimates with those generated by an existing growth model (InraPorc). The precision of the proposed model and InraPorc in estimating DFI and BW was evaluated through the mean absolute error. The empirical component results indicated that the DFI and BW trajectories of individual pigs fed ad libitum could be predicted 1 d (DFI) or 7 d (BW) ahead with average mean absolute errors of 12.45% and 1.85%, respectively. The average mean absolute error obtained with InraPorc for the average individual of the population was 14.72% for DFI and 5.38% for BW. Major differences were observed when estimates from InraPorc were compared with individual observations. The proposed model, however, was effective in tracking the change in DFI and BW for each individual pig. The mechanistic component estimated the optimal standardized ileal digestible Lys to NE ratio with reasonable between-animal (average CV = 7%) and over-time (average CV = 14%) variation. Thus, the amino acid requirements estimated by the model are animal- and time-dependent and follow, in real time, the individual DFI and BW growth patterns. The proposed model can follow the feed intake and weight trajectories of each individual pig in real time with good accuracy. Based on these trajectories and using classical factorial equations, the model makes it possible to estimate dynamically the AA requirements of each animal, taking into account the intake and growth changes of the animal.

  19. Critical Factors Analysis for Offshore Software Development Success by Structural Equation Modeling

    NASA Astrophysics Data System (ADS)

    Wada, Yoshihisa; Tsuji, Hiroshi

    In order to analyze the success/failure factors in offshore software development services by structural equation modeling, this paper proposes to follow two approaches together: domain-knowledge-based heuristic analysis and factor-analysis-based rational analysis. The former serves to generate and verify hypotheses for finding factors and causalities. The latter serves to verify factors introduced by theory and to build the model without heuristics. Applying the proposed combined approach to questionnaire responses from skilled project managers, this paper finds that the vendor property has a stronger causal influence on success than the software property and the project property.

  20. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis, which is an essential component of inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  1. Evaluation of three inverse problem models to quantify skin microcirculation using diffusion-weighted MRI

    NASA Astrophysics Data System (ADS)

    Cordier, G.; Choi, J.; Raguin, L. G.

    2008-11-01

    Skin microcirculation plays an important role in diseases such as chronic venous insufficiency and diabetes. Magnetic resonance imaging (MRI) can provide quantitative information with a better penetration depth than other noninvasive methods, such as laser Doppler flowmetry or optical coherence tomography. Moreover, successful MRI skin studies have recently been reported. In this article, we investigate three potential inverse models to quantify skin microcirculation using diffusion-weighted MRI (DWI), also known as q-space MRI. The model parameters are estimated based on nonlinear least-squares (NLS). For each of the three models, an optimal DWI sampling scheme is proposed based on D-optimality in order to minimize the size of the confidence region of the NLS estimates and thus the effect of the experimental noise inherent to DWI. The resulting covariance matrices of the NLS estimates are predicted by asymptotic normality and compared to the ones computed by Monte-Carlo simulations. Our numerical results demonstrate the effectiveness of the proposed models and corresponding DWI sampling schemes as compared to conventional approaches.
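
    D-optimality chooses the sampling scheme that maximizes the determinant of the (approximate) Fisher information J^T J, which shrinks the confidence region of the NLS estimates. The sketch below applies this idea to a mono-exponential decay model as a stand-in, since the paper's three inverse models are not spelled out here; the candidate b-value grid and nominal parameter values are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def sensitivities(b, s0, d):
    """Jacobian of the mono-exponential DWI model S(b) = s0*exp(-b*d)
    with respect to (s0, d); a stand-in for the paper's three models."""
    e = np.exp(-b * d)
    return np.stack([e, -s0 * b * e], axis=1)

def d_optimal(candidate_b, n_pick, s0=1.0, d=1e-3):
    """Exhaustively pick the n_pick b-values maximising det(J.T @ J)."""
    best, best_det = None, -np.inf
    for subset in combinations(candidate_b, n_pick):
        J = sensitivities(np.array(subset), s0, d)
        det = np.linalg.det(J.T @ J)
        if det > best_det:
            best, best_det = subset, det
    return best

# Illustrative candidate grid of b-values (s/mm^2) and nominal parameters.
cand = np.linspace(0, 3000, 16)
print(d_optimal(cand, n_pick=4))
```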

  2. A mathematical modeling approach to resource allocation for railroad-highway crossing safety upgrades.

    PubMed

    Konur, Dinçer; Golias, Mihalis M; Darks, Brandon

    2013-03-01

    State Departments of Transportation (S-DOTs) periodically allocate budget for safety upgrades at railroad-highway crossings. Efficient resource allocation is crucial for reducing accidents at railroad-highway crossings and increasing both railroad and highway transportation safety. While S-DOTs are not restricted to a specific method, sorting-type procedures are recommended for the resource allocation problem by the Federal Railroad Administration (FRA) of the United States Department of Transportation. In this study, a generic mathematical model is proposed for the resource allocation problem for railroad-highway crossing safety upgrades. The proposed approach is compared to sorting-based methods for safety upgrades of public at-grade railroad-highway crossings in Tennessee. The comparison shows that the proposed mathematical modeling approach is more efficient than sorting methods in reducing accidents and their severity. Copyright © 2012 Elsevier Ltd. All rights reserved.
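
    One common way to formalize budget-constrained safety upgrades is a 0/1 knapsack: maximize the total predicted accident reduction subject to the budget. The dynamic-programming sketch below illustrates that formulation only; it is a simplified stand-in for the paper's generic mathematical model, and the crossing names, costs, and benefits are invented.

```python
def select_upgrades(crossings, budget):
    """0/1 knapsack DP: maximise total predicted accident reduction under
    a budget. crossings: list of (name, cost, benefit); integer costs."""
    n = len(crossings)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i, (_, cost, benefit) in enumerate(crossings, 1):
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]                 # skip crossing i
            if cost <= b:                               # or upgrade it
                best[i][b] = max(best[i][b], best[i - 1][b - cost] + benefit)
    # Backtrack to recover the chosen set of crossings.
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            name, cost, _ = crossings[i - 1]
            chosen.append(name)
            b -= cost
    return best[n][budget], chosen

# Invented crossings: (id, upgrade cost in $1k, predicted accident reduction).
data = [("X1", 120, 0.8), ("X2", 60, 0.5), ("X3", 90, 0.7), ("X4", 150, 0.9)]
print(select_upgrades(data, budget=250))
```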

  3. Smith predictor based-sliding mode controller for integrating processes with elevated deadtime.

    PubMed

    Camacho, Oscar; De la Cruz, Francisco

    2004-04-01

    An approach to controlling integrating processes with elevated deadtime using a Smith predictor sliding mode controller is presented. A PID sliding surface and an integrating first-order plus deadtime model have been used to synthesize the controller. Since the performance of existing controllers with a Smith predictor decreases in the presence of modeling errors, this paper presents a simple approach to combining the Smith predictor with the sliding mode concept, which is a proven, simple, and robust procedure. The proposed scheme has a set of tuning equations as a function of the characteristic parameters of the model. For implementation of the proposed approach, computer-based industrial controllers that execute PID algorithms can be used. The performance and robustness of the proposed controller are compared with those of the Matausek-Micić scheme for linear systems using simulations.

  4. Hybrid artificial intelligence approach based on neural fuzzy inference model and metaheuristic optimization for flood susceptibility modeling in a high-frequency tropical cyclone area using GIS

    NASA Astrophysics Data System (ADS)

    Tien Bui, Dieu; Pradhan, Biswajeet; Nampak, Haleh; Bui, Quang-Thanh; Tran, Quynh-An; Nguyen, Quoc-Phi

    2016-09-01

    This paper proposes a new artificial intelligence approach based on a neural fuzzy inference system and metaheuristic optimization for flood susceptibility modeling, namely MONF. In the new approach, the neural fuzzy inference system was used to create an initial flood susceptibility model, and the model was then optimized using two metaheuristic algorithms, Evolutionary Genetic and Particle Swarm Optimization. A high-frequency tropical cyclone area of the Tuong Duong district in Central Vietnam was used as a case study. First, a GIS database for the study area was constructed. The database, which includes 76 historical flood-inundated areas and ten flood-influencing factors, was used to develop and validate the proposed model. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the Receiver Operating Characteristic (ROC) curve, and the area under the ROC curve (AUC) were used to assess the model performance and its prediction capability. Experimental results showed that the proposed model performs well on both the training dataset (RMSE = 0.306, MAE = 0.094, AUC = 0.962) and the validation dataset (RMSE = 0.362, MAE = 0.130, AUC = 0.911). The usability of the proposed model was evaluated by comparison with state-of-the-art benchmark soft computing techniques such as the J48 Decision Tree, Random Forest, Multi-layer Perceptron Neural Network, Support Vector Machine, and Adaptive Neuro-Fuzzy Inference System. The results show that the proposed MONF model outperforms these benchmark models; we conclude that the MONF model is a new alternative tool that should be used in flood susceptibility mapping. The results of this study are useful for planners and decision makers for sustainable management of flood-prone areas.

  5. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    NASA Astrophysics Data System (ADS)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models and are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue in the case of terrain-dependent RFMs that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant to the overfitting issue. The performance of the proposed method was evaluated on two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. Indeed, this technique shows an improvement of 50-80% over TR.
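
    One plausible reading of the significance-test idea is to fit the full least-squares model, compute a t-statistic for each coefficient, and retain only the statistically significant terms. The sketch below does this for a generic design matrix under that assumption; the RFM-specific term construction, threshold, and data are all illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

def significant_terms(A, y, alpha=0.05):
    """Indices of design-matrix columns whose least-squares coefficients
    pass a two-sided t-test at level alpha."""
    n, p = A.shape
    coef, res, rank, _ = np.linalg.lstsq(A, y, rcond=None)
    rss = res[0] if res.size else np.sum((y - A @ coef) ** 2)
    dof = n - p
    cov = (rss / dof) * np.linalg.inv(A.T @ A)   # coefficient covariance
    t = coef / np.sqrt(np.diag(cov))
    p_vals = 2 * stats.t.sf(np.abs(t), dof)
    return np.where(p_vals < alpha)[0]

# Toy usage: polynomial terms mimic higher-order RFM terms; only two are real.
rng = np.random.default_rng(0)
x = rng.random(100)
A = np.column_stack([np.ones_like(x), x, x**2, x**3])
y = 2.0 + 3.0 * x + rng.normal(scale=0.05, size=100)
print(significant_terms(A, y))   # the constant and linear terms should dominate
```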

  6. Adaptive classifier for steel strip surface defects

    NASA Astrophysics Data System (ADS)

    Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li

    2017-01-01

    Surface defect detection systems have been receiving increased attention for their precision, speed, and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by aging equipment and changed processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) which updates the model with small samples so that it adapts to accuracy deterioration. First, abundant features are introduced to cover extensive information about the defects. Second, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose the method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms; the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, a low requirement for samples, and adaptivity are demonstrated in the experiments.
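
    Structurally, the abstract describes a random-subspace SVM ensemble whose outputs are fused by a Bayes classifier that can be cheaply refit on small samples. The sketch below mirrors only that structure using scikit-learn; the class and method names are hypothetical, a Gaussian naive Bayes stands in for the paper's Bayes kernel, and the drift update simply refits the fuser.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(1)

class RandomSubspaceSVM:
    """Random-subspace SVM ensemble fused by a naive Bayes classifier --
    a rough sketch of the BYEC idea, not the paper's implementation."""

    def __init__(self, n_estimators=10, subspace=0.5):
        self.n_estimators, self.subspace = n_estimators, subspace

    def fit(self, X, y):
        d = X.shape[1]
        k = max(1, int(self.subspace * d))
        self.subsets_ = [rng.choice(d, size=k, replace=False)
                         for _ in range(self.n_estimators)]
        self.svms_ = [SVC().fit(X[:, s], y) for s in self.subsets_]
        self.fuser_ = GaussianNB().fit(self._base_outputs(X), y)
        return self

    def _base_outputs(self, X):
        return np.column_stack([svm.decision_function(X[:, s])
                                for svm, s in zip(self.svms_, self.subsets_)])

    def predict(self, X):
        return self.fuser_.predict(self._base_outputs(X))

    def update(self, X_new, y_new):
        """Adapt to drift by refitting only the cheap Bayes fuser on a
        small new sample (a simplistic stand-in for the paper's update)."""
        self.fuser_ = GaussianNB().fit(self._base_outputs(X_new), y_new)
        return self

# Toy usage on synthetic binary data.
X = rng.normal(size=(300, 12))
y = (X[:, 0] + X[:, 3] > 0).astype(int)
clf = RandomSubspaceSVM().fit(X, y)
print((clf.predict(X) == y).mean())
```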

  7. Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks

    PubMed Central

    Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo

    2012-01-01

    Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190

  8. Performance evaluation of Olympic weightlifters.

    PubMed

    Garhammer, J

    1979-01-01

    The comparison of weights lifted by athletes in different bodyweight categories is a continuing problem for the sport of Olympic weightlifting. An objective mechanical evaluation procedure was developed using basic ideas from a model proposed by Ranta in 1975. This procedure was based on more realistic assumptions than the original model and considered both vertical and horizontal bar movements. Utilization of data obtained from film of national-caliber lifters indicated that the proposed method was workable, and that the evaluative indices ranked lifters in reasonable order relative to other comparative techniques.

  9. Quantum chemical study of the mechanism of action of vitamin K epoxide reductase (VKOR)

    NASA Astrophysics Data System (ADS)

    Deerfield, David, II; Davis, Charles H.; Wymore, Troy; Stafford, Darrel W.; Pedersen, Lee G.

    Possible, though simplistic, model mechanisms for the action of vitamin K epoxide reductase (VKOR) are investigated with quantum mechanical methods (B3LYP/6-311G**). The geometries of proposed model intermediates in the mechanisms are energy-optimized. Finally, the energetics of the proposed (pseudo-enzymatic) pathways are compared; we find that all of the pathways are energetically feasible. These results will be useful for designing quantum mechanical/molecular mechanical (QM/MM) studies of the enzymatic pathway once three-dimensional structural data are determined and available for VKOR.

  10. Recovering hidden diagonal structures via non-negative matrix factorization with multiple constraints.

    PubMed

    Yang, Xi; Han, Guoqiang; Cai, Hongmin; Song, Yan

    2017-03-31

    Revealing data with intrinsically diagonal block structures is particularly useful for analyzing groups of highly correlated variables. Earlier research based on non-negative matrix factorization (NMF) has been shown to be effective in representing such data by decomposing the observed data into two factors, where one factor is considered to be the feature and the other the expansion loading from a linear algebra perspective. If the data are sampled from multiple independent subspaces, the loading factor would possess a diagonal structure under an ideal matrix decomposition. However, the standard NMF method and its variants have not been reported to exploit this type of data via direct estimation. To address this issue, a non-negative matrix factorization model with multiple constraints is proposed in this paper. The constraints include a sparsity norm on the feature matrix and a total variation norm on each column of the loading matrix. The proposed model is shown to be capable of efficiently recovering diagonal block structures hidden in observed samples. An efficient numerical algorithm based on the alternating direction method of multipliers is proposed for optimizing the new model. Compared with several benchmark models, the proposed method performs robustly and effectively for simulated and real biological data.
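
    Multiplicative-update NMF extends naturally to penalized variants. The sketch below adds only an L1 sparsity penalty on the feature matrix and omits the paper's total-variation term on the loading columns and its ADMM solver; all names, the rank, and the toy block-diagonal data are illustrative.

```python
import numpy as np

def sparse_nmf(V, rank, n_iter=200, lam=0.1, eps=1e-9):
    """NMF with an L1 sparsity penalty on the feature matrix H, via
    multiplicative updates; the total-variation term on the loading
    columns and the ADMM solver of the paper are omitted here."""
    rng = np.random.default_rng(0)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)  # sparsity-penalised update
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= W.sum(axis=0, keepdims=True) + eps     # normalise columns of W
    return W, H

# Toy data with two independent groups of correlated variables
# (a hidden diagonal block structure).
rng = np.random.default_rng(1)
V = np.zeros((6, 8))
V[:3, :4] = rng.random((3, 4)) + 1.0
V[3:, 4:] = rng.random((3, 4)) + 1.0
W, H = sparse_nmf(V, rank=2)
print(np.round(H, 2))   # rows should concentrate on one block each
```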

  11. PM2.5 forecasting using SVR with PSOGSA algorithm based on CEEMD, GRNN and GCA considering meteorological factors

    NASA Astrophysics Data System (ADS)

    Zhu, Suling; Lian, Xiuyuan; Wei, Lin; Che, Jinxing; Shen, Xiping; Yang, Ling; Qiu, Xuanlin; Liu, Xiaoning; Gao, Wenlong; Ren, Xiaowei; Li, Juansheng

    2018-06-01

    PM2.5 is a major culprit of air pollution, and it leads to respiratory disease when the fine particles are inhaled. Therefore, it is increasingly significant to develop an effective model for PM2.5 forecasting and warnings that informs people about the air quality ahead of time. People can reduce outdoor activities and take preventive measures if they know in advance that the air quality will be bad, and reliable forecasting results can prompt the relevant departments to control and reduce pollutant discharge. To our knowledge, current hybrid forecasting techniques for PM2.5 do not take meteorological factors into consideration. Meteorological factors affect the concentrations of air pollutants, but it is unclear whether they help improve PM2.5 forecasting results. This paper proposes a hybrid model called CEEMD-PSOGSA-SVR-GRNN, based on complementary ensemble empirical mode decomposition (CEEMD), particle swarm optimization and gravitational search algorithm (PSOGSA), support vector regression (SVR), generalized regression neural network (GRNN), and grey correlation analysis (GCA), for daily PM2.5 concentration forecasting. The main steps of the proposed model are as follows: decomposition of the original PM2.5 data with CEEMD, optimal SVR selection with PSOGSA, meteorological factor selection with GCA, residual revision by GRNN, and analysis of the forecasting results. Three cities (Chongqing, Harbin and Jinan) in China with different characteristics of climate, terrain, and pollution sources are selected to verify the effectiveness of the proposed model, and CEEMD-PSOGSA-SVR*, EEMD-PSOGSA-SVR, PSOGSA-SVR, CEEMD-PSO-SVR, CEEMD-GSA-SVR, and CEEMD-GWO-SVR are used as compared models. The experimental results show that the hybrid CEEMD-PSOGSA-SVR-GRNN model outperforms the other six compared models. Therefore, the proposed CEEMD-PSOGSA-SVR-GRNN model can be used to develop air quality forecasting and warnings.

  12. Feature selection and classifier parameters estimation for EEG signals peak detection using particle swarm optimization.

    PubMed

    Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan

    2014-01-01

    Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate than standard PSO, as it produces a low-variance model.
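
    Binary PSO for feature selection encodes each particle as a 0/1 mask over the features and squashes velocities through a sigmoid to obtain bit probabilities. The sketch below is a generic standard-PSO version (the paper's RA-PSO asynchronous update is not reproduced), with a toy fitness standing in for a real peak-detection classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_pso(fitness, n_feats, n_particles=20, n_iter=50,
               w=0.7, c1=1.5, c2=1.5):
    """Minimal binary PSO for feature selection: particles are 0/1 masks;
    velocities pass through a sigmoid to give bit probabilities."""
    X = rng.integers(0, 2, size=(n_particles, n_feats))
    V = rng.normal(scale=0.1, size=(n_particles, n_feats))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    g = pbest[np.argmax(pbest_f)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (g - X)
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = X[improved], f[improved]
        g = pbest[np.argmax(pbest_f)].copy()
    return g

# Toy fitness: reward masks matching a hypothetical "useful" feature set;
# in practice this would be a cross-validated peak-detection score.
useful = np.array([1, 0, 1, 0, 0, 1, 0, 0])
print(binary_pso(lambda x: -np.sum(np.abs(x - useful)), n_feats=8))
```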

  13. Feature-opinion pair identification of product reviews in Chinese: a domain ontology modeling method

    NASA Astrophysics Data System (ADS)

    Yin, Pei; Wang, Hongwei; Guo, Kaiqiang

    2013-03-01

    With the emergence of the new economy based on social media, a great amount of consumer feedback on particular products is conveyed through widely spread online product reviews, making opinion mining a growing interest for both academia and industry. According to the characteristic mode of expression in Chinese, this research proposes an ontology-based linguistic model to identify the basic appraisal expression in Chinese product reviews: the "feature-opinion pair" (FOP). The product-oriented domain ontology is first constructed automatically, then algorithms to identify FOPs are designed by mapping product features and opinions to the conceptual space of the domain ontology, and finally comparative experiments are conducted to evaluate the model. Experimental results indicate that the proposed approach is efficient in obtaining more accurate results than state-of-the-art algorithms. Furthermore, through identifying and analyzing FOPs, unstructured product reviews are converted into structured and machine-sensible expressions, which provides valuable information for business applications. This paper contributes to the related research in opinion mining by developing a solid foundation for further sentiment analysis at a fine-grained level and proposing a general way to construct ontologies automatically.

  14. Three-Class Mammogram Classification Based on Descriptive CNN Features

    PubMed Central

    Zhang, Qianni; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast-limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques. PMID:28191461

  15. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    PubMed

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for a large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we present two methods, namely convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast-limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed into four subbands by means of the two-dimensional discrete wavelet transform (2D-DWT), while in the second method the discrete curvelet transform (DCT) is used. In both methods, dense scale-invariant features (DSIFT) are extracted for all subbands. An input data matrix containing the subband features of all the mammogram patches is created and processed as input to a convolutional neural network (CNN). A softmax layer and a support vector machine (SVM) layer are used to train the CNN for classification. The proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  16. Defect detection of castings in radiography images using a robust statistical feature.

    PubMed

    Zhao, Xinyue; He, Zaixing; Zhang, Shuyou

    2014-01-01

    One of the most commonly used optical methods for defect detection is radiographic inspection. Compared with methods that extract defects directly from the radiography image, model-based methods deal well with objects of complex structure. However, detection of small low-contrast defects in nonuniformly illuminated images is still a major challenge for them. In this paper, we present a new method based on the grayscale arranging pairs (GAP) feature to detect casting defects in radiography images automatically. First, a model is built using pixel pairs with a stable intensity relationship, based on the GAP feature, from previously acquired images. Second, defects are extracted by statistically comparing the difference of intensity-difference signs between the input image and the model. The robustness of the proposed method to noise and illumination variations has been verified on casting radioscopic images with defects. The experimental results showed that the average computation time of the proposed method in the testing stage is 28 ms per image on a computer with a Pentium Core 2 Duo 3.00 GHz processor. For comparison, we also evaluated the performance of the mixture-of-Gaussian-based and crossing line profile methods. The proposed method achieved 2.7% and 2.0% false negative rates in the noise and illumination variation experiments, respectively.

  17. An Ontology of Power: Perception and Reality in Conflict

    DTIC Science & Technology

    2016-12-01

    A synthetic model was developed as the constant comparative analysis was resumed through the application of selected theory toward the original source… The synthetic model represents a series of maxims for the analysis of a complex social system, developed through a study of contemporary national… and categories. A model of strategic agency is proposed as an alternative framework for developing security strategy. The strategic agency model draws…

  18. A hybrid patient-specific biomechanical model based image registration method for the motion estimation of lungs.

    PubMed

    Han, Lianghao; Dong, Hua; McClelland, Jamie R; Han, Liangxiu; Hawkes, David J; Barratt, Dean C

    2017-07-01

    This paper presents a new hybrid biomechanical model-based non-rigid image registration method for lung motion estimation. In the proposed method, a patient-specific biomechanical modelling process captures major physically realistic deformations with explicit physical modelling of sliding motion, whilst a subsequent non-rigid image registration process compensates for small residuals. The proposed algorithm was evaluated with 10 4D CT datasets of lung cancer patients. The target registration error (TRE), defined as the Euclidean distance of landmark pairs, was significantly lower with the proposed method (TRE = 1.37 mm) than with biomechanical modelling (TRE = 3.81 mm) and intensity-based image registration without specific considerations for sliding motion (TRE = 4.57 mm). The proposed method achieved a comparable accuracy as several recently developed intensity-based registration algorithms with sliding handling on the same datasets. A detailed comparison on the distributions of TREs with three non-rigid intensity-based algorithms showed that the proposed method performed especially well on estimating the displacement field of lung surface regions (mean TRE = 1.33 mm, maximum TRE = 5.3 mm). The effects of biomechanical model parameters (such as Poisson's ratio, friction and tissue heterogeneity) on displacement estimation were investigated. The potential of the algorithm in optimising biomechanical models of lungs through analysing the pattern of displacement compensation from the image registration process has also been demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Ranking of Business Process Simulation Software Tools with DEX/QQ Hierarchical Decision Model

    PubMed Central

    2016-01-01

    The omnipresent need for optimisation requires constant improvements of companies' business processes (BPs). Minimising the risk of an inappropriate BP being implemented is usually done by simulating the newly developed BP under various initial conditions and "what-if" scenarios. An effectual business process simulation software (BPSS) tool is a prerequisite for accurate analysis of a BP. Characterisation of a BPSS tool is a challenging task due to complex selection criteria that include quality of visual aspects, simulation capabilities, statistical facilities, quality of reporting, etc. Under such circumstances, making an optimal decision is challenging, and various decision support models are therefore employed to aid BPSS tool selection. The currently established decision support models are either proprietary or comprise only a limited subset of criteria, which affects their accuracy. Addressing this issue, this paper proposes a new hierarchical decision support model for ranking BPSS tools based on their technical characteristics, employing DEX and qualitative-to-quantitative (QQ) methodology. Consequently, the decision expert feeds in the required information in a systematic and user-friendly manner. There are three significant contributions of the proposed approach. Firstly, the proposed hierarchical model is easily extendible for adding new criteria to the hierarchical structure. Secondly, a fully operational decision support system (DSS) tool that implements the proposed hierarchical model is presented. Finally, the effectiveness of the proposed hierarchical model is assessed by comparing the resulting rankings of BPSS tools with currently available results. PMID:26871694

  20. A new model for fluid velocity slip on a solid surface.

    PubMed

    Shu, Jian-Jun; Teo, Ji Bin Melvin; Chan, Weng Kong

    2016-10-12

    A general adsorption model is developed to describe the interactions between near-wall fluid molecules and solid surfaces. This model serves as a framework for the theoretical modelling of boundary slip phenomena. Based on this adsorption model, a new general model for the slip velocity of fluids on solid surfaces is introduced. The slip boundary condition at a fluid-solid interface has hitherto been considered separately for gases and liquids. In this paper, we show that the slip velocity in both gases and liquids may originate from dynamical adsorption processes at the interface. A unified analytical model that is valid for both gas-solid and liquid-solid slip boundary conditions is proposed based on surface science theory. Corroboration with experimental data extracted from the literature shows that the proposed model provides an improved prediction compared to existing analytical models for gases at higher shear rates, and close agreement for liquid-solid interfaces in general.

  1. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    PubMed

    Yu, Kezi; Quirk, J Gerald; Djurić, Petar M

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
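
    As a rough stand-in for the HDP mixtures, one can fit a Dirichlet-process-style mixture per outcome class and classify a new recording by comparing class-conditional likelihoods. The sketch below uses scikit-learn's BayesianGaussianMixture on invented two-dimensional features; the paper's actual FHR feature extraction and CRFC construction are not reproduced.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Invented 2-D "FHR features" per class; real feature extraction from FHR
# recordings is not reproduced here.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[140.0, 10.0], scale=[8.0, 2.0], size=(300, 2))
adverse = rng.normal(loc=[120.0, 4.0], scale=[12.0, 1.5], size=(300, 2))

dp = dict(n_components=10, weight_concentration_prior_type="dirichlet_process",
          random_state=0)
m_h = BayesianGaussianMixture(**dp).fit(healthy)   # class-conditional mixture
m_a = BayesianGaussianMixture(**dp).fit(adverse)

def p_healthy(x, prior=0.5):
    """Posterior probability of the healthy class from the two mixtures."""
    lh = np.exp(m_h.score_samples(x))
    la = np.exp(m_a.score_samples(x))
    return prior * lh / (prior * lh + (1.0 - prior) * la)

print(p_healthy(np.array([[138.0, 9.0]])))
```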

  2. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models

    PubMed Central

    Yu, Kezi; Quirk, J. Gerald

    2017-01-01

    In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting. PMID:28953927

  3. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional modeling performance indices such as mean square error (MSE) and root mean square error (RMSE) cannot fully express the connotation of modeling errors with stochastic characteristics in both the time domain and the space domain. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors on both time scales and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is utilized as the performance index to optimize the WNN model parameters by gradient descent. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can eventually make the modeling error PDF track the target PDF. A simulation example and an application in a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability compared with conventional WNN modeling based on the MSE criterion. Furthermore, the proposed method yields a more desirable estimate of the modeling error PDF, which approximates a Gaussian distribution with a tall, narrow shape.

  4. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Ping; Wang, Chenyu; Li, Mingjie

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time and space domains. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors over both time and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is used as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can eventually make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a modeling error PDF that more closely approximates a tall, narrow Gaussian distribution.

  5. Modeling error PDF optimization based wavelet neural network modeling of dynamic system and its application in blast furnace ironmaking

    DOE PAGES

    Zhou, Ping; Wang, Chenyu; Li, Mingjie; ...

    2018-01-31

    In general, the modeling errors of a dynamic system model are a set of random variables. Traditional performance indices of modeling, such as the mean square error (MSE) and root mean square error (RMSE), cannot fully express the connotation of modeling errors with stochastic characteristics in both the time and space domains. Therefore, the probability density function (PDF) is introduced to completely describe the modeling errors over both time and space scales. Based on this, a novel wavelet neural network (WNN) modeling method is proposed by minimizing the two-dimensional (2D) PDF shaping of the modeling errors. First, the modeling error PDF of the traditional WNN is estimated using the data-driven kernel density estimation (KDE) technique. Then, the quadratic sum of the 2D deviation between the modeling error PDF and the target PDF is used as the performance index to optimize the WNN model parameters by the gradient descent method. Since the WNN has strong nonlinear approximation and adaptive capability, and all the parameters are well optimized by the proposed method, the developed WNN model can eventually make the modeling error PDF track the target PDF. A simulation example and an application to a blast furnace ironmaking process show that the proposed method has higher modeling precision and better generalization ability than conventional WNN modeling based on MSE criteria. Furthermore, the proposed method yields a modeling error PDF that more closely approximates a tall, narrow Gaussian distribution.
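
    A minimal sketch of the PDF-shaping criterion, assuming one-dimensional modeling errors for readability (the paper shapes a 2D time/space error PDF): estimate the error PDF by kernel density estimation and score its squared deviation from a tall, narrow Gaussian target.

```python
# Quadratic PDF-shaping loss between a KDE of modeling errors and a target PDF.
import numpy as np
from scipy.stats import gaussian_kde, norm

def pdf_shaping_loss(errors, target_sigma=0.1):
    grid = np.linspace(-1.5, 1.5, 600)
    kde = gaussian_kde(errors)                    # data-driven error PDF
    target = norm.pdf(grid, 0.0, target_sigma)    # tall, narrow target PDF
    return np.trapz((kde(grid) - target) ** 2, grid)

rng = np.random.default_rng(1)
print(pdf_shaping_loss(rng.normal(0.0, 0.3, 500)))  # wide errors -> larger loss
print(pdf_shaping_loss(rng.normal(0.0, 0.1, 500)))  # near-target errors -> smaller loss
```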

  6. A Gibbs Energy Minimization Approach for Modeling of Chemical Reactions in a Basic Oxygen Furnace

    NASA Astrophysics Data System (ADS)

    Kruskopf, Ari; Visuri, Ville-Valtteri

    2017-12-01

    In modern steelmaking, hot metal is decarburized and converted into steel primarily in converter processes, such as the basic oxygen furnace. The objective of this work was to develop a new mathematical model for a top-blown steel converter, which accounts for the complex reaction equilibria in the impact zone, also known as the hot spot, as well as the associated mass and heat transport. An in-house computer code of the model has been developed in Matlab. The main assumption of the model is that all reactions take place in a specified reaction zone. The mass transfer between the reaction volume, bulk slag, and metal determines the reaction rates for the species. The thermodynamic equilibrium is calculated using the partitioning of Gibbs energy (PGE) method. The activity model for the liquid metal is the unified interaction parameter model, and for the liquid slag the modified quasichemical model (MQM). The MQM was validated by calculating iso-activity lines for the liquid slag components. The PGE method together with the MQM was validated by calculating liquidus lines for solid components. The results were compared with measurements from the literature. The full chemical reaction model was validated by comparing the metal and slag compositions to measurements from an industrial-scale converter. The predictions were found to be in good agreement with the measured values. Furthermore, the accuracy of the model was found to compare favorably with the models proposed in the literature. The real-time capability of the proposed model was confirmed in test calculations.

  7. Cost Minimization for Joint Energy Management and Production Scheduling Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Shah, Rahul H.

    Production costs account for the largest share of the overall cost of manufacturing facilities. With the U.S. industrial sector becoming more and more competitive, manufacturers are looking for more cost- and resource-efficient working practices. Operations management and production planning have shown their capability to dramatically reduce manufacturing costs and increase system robustness. When implementing operations-related decision making and planning, two fields that have proven most effective are maintenance and energy. Unfortunately, the current research that integrates both is limited. Additionally, these studies fail to consider parameter domains and optimization in joint energy- and maintenance-driven production planning. Accordingly, a production planning methodology that considers maintenance and energy is investigated. Two models are presented to achieve a well-rounded operating strategy. The first is a joint energy and maintenance production scheduling model. The second is a cost-per-part model considering maintenance, energy, and production. The proposed methodology involves a Time-of-Use electricity demand response program, buffer and holding capacity, station reliability, production rate, station rated power, and more. In practice, the scheduling problem can be used to determine a joint energy, maintenance, and production schedule. Meanwhile, the cost-per-part model can be used to: (1) test the sensitivity of the obtained optimal production schedule and its corresponding savings by varying key production system parameters; and (2) determine optimal system parameter combinations when using the joint energy, maintenance, and production planning model. Additionally, a factor analysis on the system parameters is conducted, and the corresponding performance of the production schedule under variable parameter conditions is evaluated. Parameter optimization guidelines that incorporate maintenance and energy parameter decision making in the production planning framework are also discussed. A modified Particle Swarm Optimization solution technique is adopted to solve the proposed scheduling problem. The algorithm is described in detail and compared to Genetic Algorithm. Case studies are presented to illustrate the benefits of using the proposed model and the effectiveness of the Particle Swarm Optimization approach. Numerical experiments are implemented and analyzed to test the effectiveness of the proposed model. The proposed scheduling strategy can achieve savings of around 19 to 27% in cost per part compared to the baseline scheduling scenarios. By optimizing key production system parameters from the cost-per-part model, the baseline scenarios can obtain around 20 to 35% in savings in cost per part. These savings further increase by 42 to 55% when system parameter optimization is integrated with the proposed scheduling problem. Using this method, the most influential parameters on the cost per part are the rated power from production, the production rate, and the initial machine reliabilities. The modified Particle Swarm Optimization algorithm allows greater diversity and exploration than Genetic Algorithm for the proposed joint model, which makes it more computationally efficient in determining the optimal schedule: while Genetic Algorithm achieved a solution quality of 2,279.63 at an expense of 2,300 seconds of computational effort, the proposed Particle Swarm Optimization algorithm achieved a solution quality of 2,167.26 in less than half that computational effort.
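
    A minimal particle swarm optimizer on a toy quadratic cost, as a sketch of the solution technique; the paper's joint energy/maintenance/production cost model and its specific PSO modifications are not reproduced here.

```python
# Basic PSO: inertia w, cognitive c1, social c2; toy quadratic cost function.
import numpy as np

def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n, dim))       # particle positions
    v = np.zeros((n, dim))                     # velocities
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pcost)]                # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        improved = c < pcost                   # update personal bests
        pbest[improved], pcost[improved] = x[improved], c[improved]
        g = pbest[np.argmin(pcost)]
    return g, float(pcost.min())

best, val = pso(lambda p: float(np.sum(p**2)), dim=4)
print(best, val)
```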

  8. Interplay and characterization of Dark Matter searches at colliders and in direct detection experiments

    DOE PAGES

    Malik, Sarah A.; McCabe, Christopher; Araujo, Henrique; ...

    2015-05-18

    In this White Paper, we present and discuss a concrete proposal for the consistent interpretation of Dark Matter searches at colliders and in direct detection experiments. Based on a specific implementation of simplified models of vector and axial-vector mediator exchanges, the proposal demonstrates how the two search strategies can be compared on an equal footing.

  9. A Modified Johnson-Cook Model for Sheet Metal Forming at Elevated Temperatures and Its Application for Cooled Stress-Strain Curve and Spring-Back Prediction

    NASA Astrophysics Data System (ADS)

    Duc-Toan, Nguyen; Tien-Long, Banh; Young-Suk, Kim; Dong-Won, Jung

    2011-08-01

    In this study, a modified Johnson-Cook (J-C) model and a new method to determine the J-C material parameters are proposed to more accurately predict the stress-strain curve in tensile tests at elevated temperatures. A MATLAB tool is used to determine the material parameters by fitting a curve following Ludwick's hardening law at various elevated temperatures. Those hardening law parameters are then utilized to determine the modified J-C model material parameters. The modified J-C model shows better prediction than the conventional one. As a first verification, an FEM tensile test simulation based on the isotropic hardening model for boron sheet steel at elevated temperatures was carried out via a user-material subroutine, using an explicit finite element code, and compared with the measurements. The temperature decrease of all elements due to the air cooling process was then calculated under the modified J-C model and coded into a VUMAT subroutine for tensile test simulation of the cooling process. The modified J-C model showed good agreement between the simulation results and the corresponding experiments. The second investigation applied the model to V-bending spring-back prediction of magnesium alloy sheets at elevated temperatures. Here, the proposed J-C model was combined with a modified hardening law accounting for the unusual plastic behaviour of magnesium alloy sheet, adopted for FEM simulation of V-bending spring-back prediction, and showed good agreement with the corresponding experiments.
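
    For orientation, a sketch of the standard Johnson-Cook flow stress law that the paper modifies; the constants below are illustrative placeholders, not the paper's fitted parameters, and the modified form itself is not reproduced here.

```python
# Standard Johnson-Cook flow stress (illustrative parameter values).
import numpy as np

def johnson_cook(eps, eps_rate, T, A=300.0, B=450.0, n=0.35, C=0.02, m=1.0,
                 eps_rate0=1.0, T_room=293.0, T_melt=1800.0):
    """sigma = (A + B*eps^n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T*^m)."""
    t_star = (T - T_room) / (T_melt - T_room)    # homologous temperature
    return (A + B * eps**n) * (1.0 + C * np.log(eps_rate / eps_rate0)) \
           * (1.0 - t_star**m)

print(johnson_cook(eps=0.1, eps_rate=0.01, T=600.0))  # flow stress in MPa, toy numbers
```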

  10. Effect of motor dynamics on nonlinear feedback robot arm control

    NASA Technical Reports Server (NTRS)

    Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping

    1991-01-01

    A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations are performed to verify the feasibility of the proposed robot controller. The controller is further experimentally evaluated on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory tracking performance and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields conspicuously improved performance.
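
    A minimal computed-torque (feedback-linearizing) control law for a one-link arm, as a sketch of the nonlinear-feedback idea; the paper's controller additionally folds the joint motor dynamics into a third-order model and operates in task space, which this sketch omits. All numbers are illustrative.

```python
# Computed-torque control for a 1-DOF arm with model M*qdd + c*qd + g*cos(q) = tau.
import numpy as np

def computed_torque(q, qd, q_ref, qd_ref, qdd_ref,
                    M=1.2, c=0.3, g=4.9, kp=25.0, kv=10.0):
    e, ed = q_ref - q, qd_ref - qd
    v = qdd_ref + kv * ed + kp * e           # outer-loop PD acceleration command
    return M * v + c * qd + g * np.cos(q)    # cancel the modeled dynamics

# One control step toward a unit step reference:
print(computed_torque(q=0.0, qd=0.0, q_ref=1.0, qd_ref=0.0, qdd_ref=0.0))
```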

  11. Graph-based sensor fusion for classification of transient acoustic signals.

    PubMed

    Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal

    2015-03-01

    Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munitions (e.g., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class-conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally, our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches.

  12. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics and in the shape of the panchromatic imagery. In this paper, a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data, where every pixel is considered a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors for different bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  13. The admissible portfolio selection problem with transaction costs and an improved PSO algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Wei; Zhang, Wei-Guo

    2010-05-01

    In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors in the expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm, because traditional optimization algorithms fail to work efficiently for the proposed problem. Finally, we offer a numerical example to illustrate the effectiveness of the proposed approaches and compare the admissible portfolio efficient frontiers under different constraints.

  14. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Almansouri, Hani; Venkatakrishnan, Singanallur V.; Clayton, Dwight A.

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations, and the materials being imaged to obtain high-quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, that method made some simplifying assumptions on the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  15. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    NASA Astrophysics Data System (ADS)

    Almansouri, Hani; Venkatakrishnan, Singanallur; Clayton, Dwight; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT results in artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework to incorporate complex models for the physics, detector miscalibrations, and the materials being imaged to obtain high-quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, that method made some simplifying assumptions on the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  16. A Collective Study on Modeling and Simulation of Resistive Random Access Memory

    NASA Astrophysics Data System (ADS)

    Panda, Debashis; Sahu, Paritosh Piyush; Tseng, Tseung Yuen

    2018-01-01

    In this work, we provide a comprehensive discussion of the various models proposed for the design and description of resistive random access memory (RRAM); as a nascent technology, RRAM is heavily reliant on accurate models to develop efficient working designs and to standardize its implementation across devices. This review provides detailed information regarding the various physical methodologies considered for developing models of RRAM devices. It covers all the important models reported to date and elucidates their features and limitations. Various additional effects and anomalies arising from memristive systems have been addressed, and the solutions provided by the models to these problems have been shown as well. All the fundamental concepts of RRAM model development, such as device operation, switching dynamics, and current-voltage relationships, are covered in detail in this work. Popular models proposed by Chua, HP Labs, Yakopcic, TEAM, Stanford/ASU, Ielmini, Berco-Tseng, and many others have been compared and analyzed extensively on various parameters. The workings and implementations of window functions such as Joglekar, Biolek, and Prodromakis have been presented and compared as well. New well-defined modeling concepts are discussed that increase the applicability and accuracy of the models. The use of these concepts brings forth several improvements in the existing models, which have been enumerated in this work. Following the template presented, highly accurate models can be developed that will vastly help future model developers and the modeling community.

  17. Efficient least angle regression for identification of linear-in-the-parameters models

    PubMed Central

    Beach, Thomas H.; Rezgui, Yacine

    2017-01-01

    Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods in that it is neither too greedy nor too slow. It is closely related to L1-norm optimization, which trades a small increase in model bias for lower prediction variance in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models, with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions, and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes, so direct matrix inversions are avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency compared with the original approach, where the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
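
    A short usage sketch of least angle regression via scikit-learn's Lars on a synthetic sparse problem, as a reference point for the recursive variant the paper proposes (which is not available in scikit-learn).

```python
# Least angle regression selecting a small subset of model terms.
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
beta = np.zeros(20)
beta[:3] = [2.0, -1.0, 0.5]                    # sparse ground truth
y = X @ beta + 0.05 * rng.normal(size=100)

model = Lars(n_nonzero_coefs=3).fit(X, y)
print(model.active_)                           # indices of selected terms
print(model.coef_[model.active_])              # their fitted coefficients
```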

  18. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. By contrast, the proposed ripple-spreading model can uniquely determine the final network topology, while the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
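
    A toy deterministic ripple-spreading construction, assuming point nodes in the unit square: ripples expand from given epicenter nodes, and a link is created to every node a ripple reaches within a cutoff radius. The epicenters and radius are the randomly initialized parameters; given them, the topology is unique. This is an editor's illustration of the general idea, not the paper's exact model.

```python
# Deterministic ripple-spreading network: same inputs -> same topology.
import numpy as np

def ripple_network(points, epicenters, radius_max=0.5):
    edges = set()
    for e in epicenters:
        d = np.linalg.norm(points - points[e], axis=1)  # ripple arrival distances
        for j in np.flatnonzero(d <= radius_max):
            if j != e:
                edges.add((min(e, j), max(e, j)))
    return sorted(edges)

rng = np.random.default_rng(3)        # randomness only in the initialization
pts = rng.random((30, 2))
print(ripple_network(pts, epicenters=[0, 5, 9]))
```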

  19. An Estimating Equations Approach for the LISCOMP Model.

    ERIC Educational Resources Information Center

    Reboussin, Beth A.; Liang, Kung-Lee

    1998-01-01

    A quadratic estimating equations approach for the LISCOMP model is proposed that only requires specification of the first two moments. This method is compared with a three-stage generalized least squares approach through a numerical study and application to a study of life events and neurotic illness. (SLD)

  20. Robustness analysis of a green chemistry-based model for the classification of silver nanoparticles synthesis processes

    EPA Science Inventory

    This paper proposes a robustness analysis based on Multiple Criteria Decision Aiding (MCDA). The ensuing model was used to assess the implementation of green chemistry principles in the synthesis of silver nanoparticles. Its recommendations were also compared to an earlier develo...

  1. Grading System and Student Effort

    ERIC Educational Resources Information Center

    Paredes, Valentina

    2017-01-01

    Several papers have proposed that the grading system affects students' incentives to exert effort. In particular, the previous literature has compared student effort under relative and absolute grading systems, but the results are mixed and the implications of the models have not been empirically tested. In this paper, I build a model where…

  2. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models.

    PubMed

    Beard, Brian B; Kainz, Wolfgang

    2004-10-13

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.

  3. Review and standardization of cell phone exposure calculations using the SAM phantom and anatomically correct head models

    PubMed Central

    Beard, Brian B; Kainz, Wolfgang

    2004-01-01

    We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head. PMID:15482601

  4. Social Security reform: evaluating current proposals. Latest results of the EBRI-SSASIM2 policy simulation model.

    PubMed

    Copeland, C; VanDerhei, J; Salisbury, D L

    1999-06-01

    The present Social Security program has been shown to be financially unsustainable in the future without modification to the current program. The purpose of this Issue Brief, EBRI's fourth in a series on Social Security reform, is threefold: to illustrate new features of the EBRI-SSASIM2 policy simulation model not available in earlier EBRI publications, to expand quantitative analysis to specific proposals, and to evaluate the uncertainty involved in proposals that rely on equity investment. This analysis compares the Gregg/Breaux-Kolbe/Stenholm (GB-KS) and Moynihan/Kerrey proposals with three generic or "traditional" reforms: increasing taxes, reducing benefits, and/or increasing the retirement age. Both proposals would create individual accounts by "carving out" funds from current Social Security payroll taxes. This analysis also examines other proposed changes that would "add on" to existing Social Security funds through the use of general revenue transfers and/or investment in the equities market. President Clinton has proposed a general revenue transfer and the collective investment of some of the OASDI trust fund assets in equities. Reps. Archer and Shaw have proposed a general revenue tax credit to establish individual accounts that would be invested partially in the equities markets. When comparing Social Security reform proposals that would specifically alter benefit levels, the Moynihan/Kerrey bill compares quite favorably with the other proposals in both benefit levels and payback ratios, when individuals elect to use the individual account option. In contrast, the GB-KS bills do not compare quite as favorably for their benefit levels, but do compare favorably in terms of payback ratios. An important comparison in these bills is the administrative costs of managing the individual accounts, since benefits can be lowered by up to 23 percent when going from the assumed low to high administrative costs. Moreover, allowing individuals to decide whether to save the 2 percent of their OASDI taxable income or to receive higher take-home pay, as would be allowed in Moynihan/Kerrey, could lead to substantial differences in ultimate retirement income. Allowing for individual investment choices and using actual 401(k) participant allocation data, as opposed to an assumed average allocation for everyone, results in substantial differences in account balances. The Archer/Shaw approach mandates a 60 percent/40 percent equity/bond split specifically to avoid the variations in returns that arise from individual investment allocation decisions. Although there are greater chances for higher returns for equity investment in the president's proposal, there are also greater chances for worse outcomes. This is also true for other reforms that would invest Social Security assets in equities.

  5. A RSSI-based parameter tracking strategy for constrained position localization

    NASA Astrophysics Data System (ADS)

    Du, Jinze; Diouris, Jean-François; Wang, Yide

    2017-12-01

    In this paper, a received signal strength indicator (RSSI)-based parameter tracking strategy for constrained position localization is proposed. To estimate the channel model parameters, the least mean squares (LMS) method is combined with the trilateration method. In the context of applications where the positions are constrained to a grid, a novel tracking strategy is proposed to determine the real position and obtain the actual parameters in the monitored region. Based on practical data acquired from a real localization system, an experimental channel model is constructed to provide RSSI values and verify the proposed tracking strategy. Quantitative criteria are given to guarantee the efficiency of the proposed tracking strategy by providing a trade-off between the grid resolution and parameter variation. The simulation results show the good behavior of the proposed tracking strategy in the presence of space-time variation of the propagation channel. Compared with existing RSSI-based algorithms, the proposed tracking strategy exhibits better localization accuracy but consumes more calculation time. In addition, a tracking test is performed to validate the effectiveness of the proposed tracking strategy.
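
    A sketch of the two building blocks, assuming the usual log-distance path-loss model RSSI = A - 10·n·log10(d): an LMS update of the channel parameters (A, n) and a linearized least-squares trilateration. Anchor positions and the step size are illustrative assumptions.

```python
import numpy as np

def lms_update(A, n, d, rssi, mu=0.05):
    """One LMS (stochastic-gradient) step on (A, n) from a reading at known distance d."""
    err = rssi - (A - 10.0 * n * np.log10(d))
    return A + mu * err, n - mu * err * np.log10(d)  # gradient steps on A and n

def trilaterate(anchors, dists):
    """Linearized least squares relative to the first anchor."""
    x0, d0 = anchors[0], dists[0]
    M = 2.0 * (anchors[1:] - x0)
    b = d0**2 - dists[1:]**2 + np.sum(anchors[1:]**2 - x0**2, axis=1)
    pos, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
true_pos = np.array([1.0, 1.0])
dists = np.linalg.norm(anchors - true_pos, axis=1)
print(trilaterate(anchors, dists))   # recovers ~[1, 1]
```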

  6. Aerodynamic design and analysis of small horizontal axis wind turbine blades

    NASA Astrophysics Data System (ADS)

    Tang, Xinzi

    This work investigates the aerodynamic design and analysis of small horizontal axis wind turbine blades via the blade element momentum (BEM) based approach and the computational fluid dynamics (CFD) based approach. From this research, it is possible to draw a series of detailed guidelines on small wind turbine blade design and analysis. The research also provides a platform for further comprehensive study using these two approaches. The wake induction corrections and stall corrections of the BEM method were examined through a case study of the NREL/NASA Phase VI wind turbine. A hybrid stall correction model was proposed to analyse wind turbine power performance. The proposed model shows improvement in power prediction for the validation case, compared with the existing stall correction models. The effects of the key rotor parameters of a small wind turbine as well as the blade chord and twist angle distributions on power performance were investigated through two typical wind turbines, i.e., a fixed-pitch variable-speed (FPVS) wind turbine and a fixed-pitch fixed-speed (FPFS) wind turbine. An engineering blade design and analysis code was developed in MATLAB to accommodate aerodynamic design and analysis of the blades. The linearisation of the radial profiles of blade chord and twist angle for the FPFS wind turbine blade design was discussed. Results show that the proposed linearisation approach leads to reduced manufacturing cost and higher annual energy production (AEP), with minimal effects on the low wind speed performance. Comparative studies of mesh and turbulence models in 2D and 3D CFD modelling were conducted. The CFD-predicted lift and drag coefficients of the airfoil S809 were compared with wind tunnel test data, and the 3D CFD modelling method for the NREL/NASA Phase VI wind turbine was validated against measurements. Airfoil aerodynamic characterisation and wind turbine power performance as well as 3D flow details were studied. The detailed flow characteristics from the CFD modelling are quantitatively comparable to the measurements, such as blade surface pressure distribution and integrated forces and moments. It is confirmed that the CFD approach is able to provide a more detailed qualitative and quantitative analysis for wind turbine airfoils and rotors.
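
    A compact fixed-point iteration for the classical BEM induction factors at a single blade element, under simplifying assumptions (constant airfoil polar, no tip-loss or stall correction); it illustrates the core loop an engineering BEM code repeats for each element, not the thesis's specific corrections.

```python
# Classical BEM iteration for axial (a) and tangential (ap) induction factors.
import numpy as np

def bem_element(tsr_local, sigma, polar, tol=1e-8, iters=500):
    a, ap, phi = 0.0, 0.0, 0.0
    for _ in range(iters):
        phi = np.arctan2(1.0 - a, tsr_local * (1.0 + ap))  # inflow angle
        cl, cd = polar(phi)
        cn = cl * np.cos(phi) + cd * np.sin(phi)           # normal coefficient
        ct = cl * np.sin(phi) - cd * np.cos(phi)           # tangential coefficient
        a_new = 1.0 / (4.0 * np.sin(phi)**2 / (sigma * cn) + 1.0)
        ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
        if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
            break
        a, ap = a_new, ap_new
    return a, ap, phi

# Toy constant polar (Cl = 1.0, Cd = 0.01), purely for illustration:
print(bem_element(tsr_local=4.0, sigma=0.05, polar=lambda phi: (1.0, 0.01)))
```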

  7. A novel artificial immune clonal selection classification and rule mining with swarm learning model

    NASA Astrophysics Data System (ADS)

    Al-Sheshtawi, Khaled A.; Abdul-Kader, Hatem M.; Elsisi, Ashraf B.

    2013-06-01

    Metaheuristic optimisation algorithms have become a popular choice for solving complex problems. By integrating the artificial immune clonal selection algorithm (CSA) and the particle swarm optimisation (PSO) algorithm, a novel hybrid Clonal Selection Classification and Rule Mining with Swarm Learning Algorithm (CS2) is proposed. The main goal of the approach is to exploit and explore the parallel computation merit of clonal selection and the speed and self-organisation merits of particle swarm by sharing information between the clonal selection population and the particle swarm. Hence, we employ the advantages of PSO to improve the mutation mechanism of the artificial immune CSA and to mine classification rules within datasets. Consequently, our proposed algorithm requires less training time and fewer memory cells in comparison to other AIS algorithms. In this paper, classification rule mining has been modelled as a multiobjective optimisation problem with predictive accuracy. The multiobjective approach is intended to allow the PSO algorithm to return an approximation to the accuracy-comprehensibility border, containing solutions that are spread across the border. We compared the classification accuracy of our proposed algorithm CS2 with that of five commonly used CSAs, namely AIRS1, AIRS2, AIRS-Parallel, CLONALG, and CSCA, using eight benchmark datasets. We also compared the classification accuracy of CS2 with five other methods, namely Naïve Bayes, SVM, MLP, CART, and RFB. The results show that the proposed algorithm is comparable to the 10 studied algorithms. As a result, the hybridisation of CSA and PSO develops their respective merits, compensates for their respective defects, and improves both search quality and speed.

  8. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    PubMed

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.

  9. Physiome-model-based state-space framework for cardiac deformation recovery.

    PubMed

    Wong, Ken C L; Zhang, Heye; Liu, Huafeng; Shi, Pengcheng

    2007-11-01

    To more reliably recover cardiac information from noise-corrupted, patient-specific measurements, it is essential to employ meaningful constraining models and adopt appropriate optimization criteria to couple the models with the measurements. Although biomechanical models have been extensively used for myocardial motion recovery with encouraging results, the passive nature of such constraints limits their ability to fully account for the deformation caused by the active forces of the myocytes. To overcome such limitations, we propose to adopt a cardiac physiome model as the prior constraint for cardiac motion analysis. The cardiac physiome model comprises an electric wave propagation model, an electromechanical coupling model, and a biomechanical model, which are connected through cardiac system dynamics for a more complete description of the macroscopic cardiac physiology. Embedded within a multiframe state-space framework, the uncertainties of the model and the patient's measurements are systematically dealt with to arrive at optimal cardiac kinematic estimates and possibly beyond. Experiments have been conducted to compare our proposed cardiac-physiome-model-based framework with the solely biomechanical-model-based framework. The results show that our proposed framework recovers more accurate cardiac deformation from synthetic data and obtains more sensible estimates from real magnetic resonance image sequences. With the active components introduced by the cardiac physiome model, cardiac deformations recovered from patients' medical images are more physiologically plausible.

  10. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    PubMed Central

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

    Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher-dimensional spaces efficiently. PMID:26543580
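
    A minimal sketch of the kD-tree-based proposal: store single-model posterior draws in a cKDTree and propose an inter-model jump by perturbing a stored draw on its local neighbourhood scale. Evaluating the proposal density needed for the Hastings ratio is omitted here, so this is an illustration of the data structure, not a complete RJMCMC step.

```python
# kD-tree interpolation of a single-model posterior as a jump proposal.
import numpy as np
from scipy.spatial import cKDTree

def make_proposal(samples, k=10, seed=0):
    tree = cKDTree(samples)
    rng = np.random.default_rng(seed)
    def propose():
        s = samples[rng.integers(len(samples))]  # random stored draw
        d, _ = tree.query(s, k=k)                # distances to k nearest draws
        return s + rng.normal(0.0, d.mean(), size=s.shape)
    return propose

draws = np.random.default_rng(1).normal(size=(1000, 3))  # stand-in posterior
propose = make_proposal(draws)
print(propose())
```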

  11. Bootstrap-after-bootstrap model averaging for reducing model uncertainty in model selection for air pollution mortality studies.

    PubMed

    Roberts, Steven; Martin, Michael A

    2010-01-01

    Concerns have been raised about findings of associations between particulate matter (PM) air pollution and mortality that have been based on a single "best" model arising from a model selection procedure, because such a strategy may ignore the model uncertainty inherently involved in searching through a set of candidate models to find the best model. Model averaging has been proposed as a method of allowing for model uncertainty in this context. Here, we propose an extension (double BOOT) of a previously described bootstrap model-averaging procedure (BOOT) for use in time series studies of the association between PM and mortality. We compared double BOOT and BOOT with Bayesian model averaging (BMA) and a standard method of model selection [standard Akaike's information criterion (AIC)]. Actual time series data from the United States are used to conduct a simulation study to compare and contrast the performance of double BOOT, BOOT, BMA, and standard AIC. Double BOOT produced estimates of the effect of PM on mortality that had smaller root mean squared error than those produced by BOOT, BMA, and standard AIC. This performance boost resulted from estimates produced by double BOOT having smaller variance than those produced by BOOT and BMA. Double BOOT is a viable alternative to BOOT and BMA for producing estimates of the mortality effect of PM.
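
    A minimal sketch of the single-bootstrap (BOOT) idea on synthetic data: refit candidate models on each resample, keep the AIC-best model, and average the PM coefficient across resamples. Double BOOT nests a second resampling layer not shown here, and the data and candidate models are illustrative.

```python
# Bootstrap model averaging of a regression effect over candidate models.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
pm = rng.normal(size=n)                      # "pollution" covariate
z = rng.normal(size=n)                       # candidate confounder
y = 0.5 * pm + 0.3 * z + rng.normal(size=n)

designs = [np.column_stack([pm]), np.column_stack([pm, z])]

def boot_average(B=200):
    est = []
    for _ in range(B):
        idx = rng.integers(0, n, n)          # bootstrap resample
        fits = [sm.OLS(y[idx], sm.add_constant(X[idx])).fit() for X in designs]
        best = min(fits, key=lambda f: f.aic)  # model selection per resample
        est.append(best.params[1])           # PM coefficient (after the constant)
    return float(np.mean(est))

print(boot_average())                        # close to the true effect 0.5
```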

  12. Evaluating Effectiveness of Modeling Motion System Feedback in the Enhanced Hess Structural Model of the Human Operator

    NASA Technical Reports Server (NTRS)

    Zaychik, Kirill; Cardullo, Frank; George, Gary; Kelly, Lon C.

    2009-01-01

    In order to use the Hess structural model to predict the need for certain cueing systems, George and Cardullo significantly expanded it by adding motion feedback to the model and incorporating models of the motion system dynamics, the motion cueing algorithm, and the vestibular system. This paper proposes a methodology to evaluate the effectiveness of these innovations by performing a comparison analysis of the model performance with and without the expanded motion feedback. The proposed methodology is composed of two stages. The first stage involves fine-tuning the parameters of the original Hess structural model in order to match the actual control behavior recorded during experiments at the NASA Visual Motion Simulator (VMS) facility. The parameter tuning procedure utilizes a new automated parameter identification technique developed at the Man-Machine Systems Lab at SUNY Binghamton. In the second stage of the proposed methodology, the expanded motion feedback is added to the structural model. The resulting performance of the model is then compared to that of the original one. As proposed by Hess, metrics to evaluate the performance of the models include comparison against the crossover model's standards for the crossover frequency and phase margin of the overall man-machine system. Preliminary results indicate the advantage of having the models of the motion system and motion cueing incorporated into the model of the human operator. It is also demonstrated that the crossover frequency and the phase margin of the expanded model are well within the limits imposed by the crossover model.

  13. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty.

    PubMed

    Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of the entire networks of electricity power, involving generation, transmission and distribution stages is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, there is no study to evaluate the efficiency of the electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models were more reliable than the traditional Network DEA model.

  14. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty

    PubMed Central

    Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of the entire networks of electricity power, involving generation, transmission and distribution stages is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, there is no study to evaluate the efficiency of the electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models were more reliable than the traditional Network DEA model. PMID:28953900

  15. Modeling and analyses for an extended car-following model accounting for drivers' situation awareness from cyber physical perspective

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin

    2018-07-01

    Driving is a typical cyber-physical process that tightly couples the cyber factor of traffic information with the physical components of the vehicles. Meanwhile, drivers have situation awareness in the driving process, which reflects not only the current traffic states but also an extrapolation of their changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation reveals theoretically the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.

  16. Monthly streamflow forecasting with auto-regressive integrated moving average

    NASA Astrophysics Data System (ADS)

    Nasir, Najah; Samsudin, Ruhaidah; Shabri, Ani

    2017-09-01

    Forecasting of streamflow is one of the many ways that can contribute to better decision making for water resource management. The auto-regressive integrated moving average (ARIMA) model was selected in this research for monthly streamflow forecasting with enhancement made by pre-processing the data using singular spectrum analysis (SSA). This study also proposed an extension of the SSA technique to include a step where clustering was performed on the eigenvector pairs before reconstruction of the time series. The monthly streamflow data of Sungai Muda at Jeniang, Sungai Muda at Jambatan Syed Omar and Sungai Ketil at Kuala Pegang was gathered from the Department of Irrigation and Drainage Malaysia. A ratio of 9:1 was used to divide the data into training and testing sets. The ARIMA, SSA-ARIMA and Clustered SSA-ARIMA models were all developed in R software. Results from the proposed model are then compared to a conventional auto-regressive integrated moving average model using the root-mean-square error and mean absolute error values. It was found that the proposed model can outperform the conventional model.
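
    A short usage sketch of the baseline ARIMA step with the paper's 9:1 train/test split, on a synthetic series; the SSA pre-processing and eigenvector-pair clustering steps are omitted.

```python
# Fit ARIMA on 90% of a monthly series and score the held-out 10% by RMSE.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
flow = 100.0 + np.cumsum(rng.normal(size=120))   # synthetic monthly streamflow
split = int(0.9 * len(flow))
train, test = flow[:split], flow[split:]

fit = ARIMA(train, order=(1, 1, 1)).fit()        # illustrative (p, d, q) order
pred = fit.forecast(steps=len(test))
print("RMSE:", np.sqrt(np.mean((pred - test) ** 2)))
```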

  17. Monthly hydroclimatology of the continental United States

    NASA Astrophysics Data System (ADS)

    Petersen, Thomas; Devineni, Naresh; Sankarasubramanian, A.

    2018-04-01

    Physical/semi-empirical models that do not require any calibration are of paramount need for estimating hydrological fluxes at ungauged sites. We develop semi-empirical models for estimating the mean and variance of monthly streamflow based on a Taylor series approximation of a lumped, physically based water balance model. The proposed models require the mean and variance of monthly precipitation and potential evapotranspiration, the co-variability of precipitation and potential evapotranspiration, and regionally calibrated parameters for catchment retention sensitivity, atmospheric moisture uptake sensitivity, the groundwater-partitioning factor, and the maximum soil moisture holding capacity. Estimates of the mean and variance of monthly streamflow from the semi-empirical equations are compared with observed estimates for 1373 catchments in the continental United States. Analyses show that the proposed models explain the spatial variability in monthly moments for basins at lower elevations. A regionalization of parameters for each water resources region shows good agreement between observed and model-estimated moments during January, February, March and April for the mean, and for all months except May and June for the variance. Thus, the proposed relationships could be employed for understanding and estimating the monthly hydroclimatology of ungauged basins using regional parameters.

  18. A new constitutive model for simulation of softening, plateau, and densification phenomena for trabecular bone under compression.

    PubMed

    Lee, Chi-Seung; Lee, Jae-Myung; Youn, BuHyun; Kim, Hyung-Sik; Shin, Jong Ki; Goh, Tae Sik; Lee, Jung Sub

    2017-01-01

    A new type of constitutive model and its computational implementation procedure for the simulation of trabecular bone are proposed in the present study. A yield-surface-independent Frank-Brockman elasto-viscoplastic model is introduced to express nonlinear material behavior such as softening beyond the yield point, plateau, and densification under compressive loads. In particular, hardening- and softening-dominant material functions are introduced and adopted in the plastic multiplier to describe each nonlinear material behavior separately. In addition, the elasto-viscoplastic model is transformed into an implicit-type discrete model and programmed as a user-defined material subroutine in commercial finite element analysis code. In particular, the consistent tangent modulus method is proposed to improve the computational convergence and to save computational time during finite element analysis. Through the developed material library, the nonlinear stress-strain relationship is analyzed qualitatively and quantitatively, and the simulation results are compared with the results of compression tests on trabecular bone to validate the proposed constitutive model, computational method, and material library.

  19. A bidimensional finite mixture model for longitudinal data subject to dropout.

    PubMed

    Spagnoli, Alessandra; Marino, Maria Francesca; Alfò, Marco

    2018-06-05

    In longitudinal studies, subjects may be lost to follow-up and thus present incomplete response sequences. When the mechanism underlying the dropout is nonignorable, we need to account for dependence between the longitudinal and the dropout processes. We propose to model such dependence through discrete latent effects, which are outcome-specific and account for heterogeneity in the univariate profiles. Dependence between profiles is introduced by using a probability matrix to describe the corresponding joint distribution. In this way, we separately model dependence within each outcome and dependence between outcomes. The major feature of this proposal, when compared with standard finite mixture models, is that it allows the nonignorable dropout model to properly nest its ignorable counterpart. We also discuss the use of an index of (local) sensitivity to nonignorability to investigate the effects that assumptions about the dropout process may have on model parameter estimates. The proposal is illustrated via the analysis of data from a longitudinal study on the dynamics of cognitive functioning in the elderly.

  20. A Hierarchical Model Predictive Tracking Control for Independent Four-Wheel Driving/Steering Vehicles with Coaxial Steering Mechanism

    NASA Astrophysics Data System (ADS)

    Itoh, Masato; Hagimori, Yuki; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    In this study, we apply hierarchical model predictive control to an omni-directional mobile vehicle and improve its tracking performance. We consider an independent four-wheel driving/steering vehicle (IFWDS) equipped with four coaxial steering mechanisms (CSM), a special arrangement composed of two steering joints on the same axis. In our previous study of IFWDS with ideal steering, we proposed a model predictive tracking control; however, that method did not consider the constraints of the coaxial steering mechanism, which cause steering delay. We also previously proposed a model predictive steering control that accounts for the constraints of this mechanism. In this study, we propose a hierarchical system combining these two control methods for IFWDS. An upper controller, which handles the vehicle kinematics, runs model predictive tracking control, while a lower controller, which respects the constraints of the coaxial steering mechanism, runs model predictive steering control that tracks the predicted steering angle optimized by the upper controller. We verify the superiority of this method by comparison with the previous method.

  1. Computational Modeling of 3D Tumor Growth and Angiogenesis for Chemotherapy Evaluation

    PubMed Central

    Tang, Lei; van de Ven, Anne L.; Guo, Dongmin; Andasari, Vivi; Cristini, Vittorio; Li, King C.; Zhou, Xiaobo

    2014-01-01

    Solid tumors develop abnormally at spatial and temporal scales, giving rise to biophysical barriers that impact anti-tumor chemotherapy. This may increase the expenditure and time for conventional drug pharmacokinetic and pharmacodynamic studies. In order to facilitate drug discovery, we propose a mathematical model that couples three-dimensional tumor growth and angiogenesis to simulate tumor progression for chemotherapy evaluation. This application-oriented model incorporates complex dynamical processes including cell- and vascular-mediated interstitial pressure, mass transport, angiogenesis, cell proliferation, and vessel maturation to model tumor progression through multiple stages including tumor initiation, avascular growth, and transition from avascular to vascular growth. Compared to pure mechanistic models, the proposed empirical methods are not only easy to conduct but can provide realistic predictions and calculations. A series of computational simulations were conducted to demonstrate the advantages of the proposed comprehensive model. The computational simulation results suggest that solid tumor geometry is related to the interstitial pressure, such that tumors with high interstitial pressure are more likely to develop dendritic structures than those with low interstitial pressure. PMID:24404145

  2. Comparing Families of Dynamic Causal Models

    PubMed Central

    Penny, Will D.; Stephan, Klaas E.; Daunizeau, Jean; Rosa, Maria J.; Friston, Karl J.; Schofield, Thomas M.; Leff, Alex P.

    2010-01-01

    Mathematical models of scientific data can be formally compared using Bayesian model evidence. Previous applications in the biological sciences have mainly focussed on model selection in which one first selects the model with the highest evidence and then makes inferences based on the parameters of that model. This “best model” approach is very useful but can become brittle if there are a large number of models to compare, and if different subjects use different models. To overcome this shortcoming we propose the combination of two further approaches: (i) family level inference and (ii) Bayesian model averaging within families. Family level inference removes uncertainty about aspects of model structure other than the characteristic of interest. For example: What are the inputs to the system? Is processing serial or parallel? Is it linear or nonlinear? Is it mediated by a single, crucial connection? We apply Bayesian model averaging within families to provide inferences about parameters that are independent of further assumptions about model structure. We illustrate the methods using Dynamic Causal Models of brain imaging data. PMID:20300649
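
    A minimal sketch of the two proposed steps, family-level inference and within-family Bayesian model averaging, assuming flat priors over families and over models within a family; the log evidences and the grouping into "serial"/"parallel" families are hypothetical.

```python
import numpy as np

def family_inference(log_evidence, families):
    """Posterior over families and BMA weights within each family,
    assuming flat priors over families and over models within a family."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    fams = sorted(set(families))
    # Flat family prior => model prior = (1/n_families) * (1/n_models_in_family)
    prior = np.array([1.0 / (len(fams) * families.count(f)) for f in families])
    logp = log_evidence + np.log(prior)
    logp -= logp.max()                       # stabilize the normalization
    post = np.exp(logp); post /= post.sum()
    fam_post = {f: post[[i for i, g in enumerate(families) if g == f]].sum()
                for f in fams}
    # BMA weights within a family: renormalized model posteriors
    bma = {f: post[[i for i, g in enumerate(families) if g == f]]
              / max(fam_post[f], 1e-300) for f in fams}
    return fam_post, bma

fam_post, bma = family_inference([-100.2, -101.0, -98.5, -99.1],
                                 ["serial", "serial", "parallel", "parallel"])
print(fam_post)   # uncertainty about within-family structure is summed out
```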

  3. Bayesian Image Segmentations by Potts Prior and Loopy Belief Propagation

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuyuki; Kataoka, Shun; Yasuda, Muneki; Waizumi, Yuji; Hsu, Chiou-Ting

    2014-12-01

    This paper presents a Bayesian image segmentation model based on a Potts prior and loopy belief propagation. The proposed Bayesian model involves several terms, including the pairwise interactions of the Potts model and the mean vectors and covariance matrices of the Gaussian distributions used in color image modeling. These terms are often referred to as hyperparameters in statistical machine learning theory. In order to determine these hyperparameters, we propose a new scheme for hyperparameter estimation based on conditional maximization of entropy in the Potts prior. The estimation algorithm is derived using loopy belief propagation. In addition, we compare our conditional maximum entropy framework with the conventional maximum likelihood framework, and also clarify how first-order phase transitions in loopy belief propagation for Potts models influence our hyperparameter estimation procedures.
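
    A minimal sketch of sum-product loopy belief propagation with a Potts prior on a 4-connected grid, assuming fixed hyperparameters, Gaussian unary likelihoods, and periodic boundaries for brevity; the paper's conditional-maximum-entropy hyperparameter estimation is not reproduced here.

```python
import numpy as np

def potts_lbp(unary, beta=1.5, iters=30):
    """Sum-product loopy BP with a Potts prior on a 4-connected grid.
    unary: (H, W, Q) per-pixel likelihoods; periodic boundary for brevity."""
    H, W, Q = unary.shape
    offs = {"u": (-1, 0), "d": (1, 0), "l": (0, -1), "r": (0, 1)}
    opp = {"u": "d", "d": "u", "l": "r", "r": "l"}
    psi = np.ones((Q, Q)) + (np.exp(beta) - 1.0) * np.eye(Q)  # Potts table
    m_in = {d: np.full((H, W, Q), 1.0 / Q) for d in offs}     # incoming msgs
    for _ in range(iters):
        new = {}
        for d, off in offs.items():
            # product of unary and all incoming messages except from target d
            prod = unary.copy()
            for d2 in offs:
                if d2 != d:
                    prod *= m_in[d2]
            sent = prod @ psi                  # marginalize the sender state
            sent /= sent.sum(axis=2, keepdims=True)
            # the neighbor in direction d receives this from direction opp(d)
            new[opp(d)] = np.roll(sent, shift=off, axis=(0, 1))
        m_in = new
    belief = unary.copy()
    for d in offs:
        belief *= m_in[d]
    return belief.argmax(axis=2)

# Noisy two-label toy image; likelihoods from a Gaussian observation model
rng = np.random.default_rng(0)
truth = np.zeros((32, 32), int); truth[:, 16:] = 1
obs = truth + 0.8 * rng.standard_normal(truth.shape)
unary = np.stack([np.exp(-0.5 * (obs - m) ** 2) for m in (0.0, 1.0)], axis=2)
print((potts_lbp(unary) == truth).mean())      # pixel accuracy
```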

  4. Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.

    PubMed

    Zhang, Jiachao; Hirakawa, Keigo

    2017-04-01

    This paper describes a study aimed at comparing the real image sensor noise distribution to the noise models often assumed in image denoising designs. A quantile analysis in the pixel, wavelet transform, and variance stabilization domains reveals that the tails of the Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch in tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmooths real sensor data, we propose a mixture-of-Poisson denoising method to remove denoising artifacts without affecting image details such as edges and textures. Experiments with real sensor data verify that denoising of real image sensor data is indeed improved by this new technique.
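
    A small numerical illustration of the tail mismatch the paper addresses: samples from a two-component Poisson mixture are compared with a single Poisson of the same mean at increasingly extreme quantiles. The component weights and rates are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
# Two-component Poisson mixture: mostly lambda=10, occasionally lambda=40
w, lam1, lam2 = 0.9, 10.0, 40.0
comp = rng.random(n) < w
mix = np.where(comp, rng.poisson(lam1, n), rng.poisson(lam2, n))
# Single Poisson matched to the mixture mean
single = rng.poisson(w * lam1 + (1 - w) * lam2, n)

for q in (0.9, 0.99, 0.999):
    print(f"q={q}: mixture={np.quantile(mix, q):.0f}, "
          f"single Poisson={np.quantile(single, q):.0f}")
# The mixture's upper quantiles grow much faster: its tail is heavier,
# which is the behavior a short-tailed model fails to capture.
```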

  5. A hybrid group method of data handling with discrete wavelet transform for GDP forecasting

    NASA Astrophysics Data System (ADS)

    Isa, Nadira Mohamed; Shabri, Ani

    2013-09-01

    This study proposes a hybrid model combining the Group Method of Data Handling (GMDH) and the Discrete Wavelet Transform (DWT) for time series forecasting. The objective of this paper is to examine the flexibility of the hybrid GMDH in time series forecasting using Gross Domestic Product (GDP) data. A time series data set is used to demonstrate the effectiveness of the forecasting model; the data are used to forecast through an application aimed at handling real-life time series. The experiment compares the performance of the hybrid model with single models: Wavelet-Linear Regression (WR), Artificial Neural Network (ANN), and conventional GMDH. It is shown that the proposed model can provide a promising alternative technique for GDP forecasting.
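
    A minimal sketch of the wavelet-hybrid idea, assuming a hand-rolled one-level Haar transform and a plain linear trend as a stand-in for the GMDH stage (real GMDH builds layered polynomial units); the GDP series below is synthetic.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def next_coef(c):
    """Least-squares linear trend per coefficient series (GMDH stand-in)."""
    t = np.arange(len(c))
    b1, b0 = np.polyfit(t, c, 1)
    return b0 + b1 * len(c)

series = np.array([3.1, 3.3, 3.0, 3.6, 3.8, 3.7, 4.1, 4.4,
                   4.2, 4.6, 4.9, 4.8, 5.2, 5.5, 5.3, 5.8])  # toy GDP levels
a, d = haar_dwt(series)
a_next, d_next = next_coef(a), next_coef(d)
# Inverse one-level Haar of the forecast pair -> next two series values
x_even = (a_next + d_next) / np.sqrt(2)
x_odd = (a_next - d_next) / np.sqrt(2)
print(f"next two values ~ {x_even:.2f}, {x_odd:.2f}")
```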

  6. Mixed-effects location and scale Tobit joint models for heterogeneous longitudinal data with skewness, detection limits, and measurement errors.

    PubMed

    Lu, Tao

    2017-01-01

    The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variation. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location-scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limits of detection, and measurement errors in covariates, which are typically observed in longitudinal data collected in many studies. We employ a Bayesian approach for inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method, and alternative models are compared under different conditions.

  7. Robust optimization model and algorithm for railway freight center location problem in uncertain environment.

    PubMed

    Liu, Xing-Cai; He, Shi-Wei; Song, Rui; Sun, Yang; Li, Hao-Dong

    2014-01-01

    The railway freight center location problem is an important issue in railway freight transport programming. This paper focuses on the problem in an uncertain environment. Since the expected value model ignores the negative influence of disadvantageous scenarios, a robust optimization model is proposed that takes the expected cost and the deviation across scenarios as its objective. A cloud adaptive clonal selection algorithm (C-ACSA) is presented, combining an adaptive clonal selection algorithm with the cloud model, which improves the convergence rate. The solution encoding and the algorithm workflow are described. Results of an example demonstrate that the model and algorithm are effective: compared with the expected value case, the number of disadvantageous scenarios in the robust model is reduced from 163 to 21, showing that the robust solution is more reliable.
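
    A minimal sketch contrasting the robust objective with the expected-value model, assuming a hypothetical scenario set and a deviation penalty weighted by lambda; setting lambda = 0 recovers the expected-value model.

```python
import numpy as np

def robust_cost(cost_by_scenario, probs, lam=1.0):
    """Robust objective: expected cost plus a deviation penalty."""
    c = np.asarray(cost_by_scenario, float)
    p = np.asarray(probs, float)
    expected = p @ c
    deviation = np.sqrt(p @ (c - expected) ** 2)   # spread across scenarios
    return expected + lam * deviation

# Costs of two candidate freight-center locations under three demand scenarios
probs = [0.3, 0.5, 0.2]
loc_a = [100, 105, 260]    # cheap on average, terrible in the bad scenario
loc_b = [130, 135, 150]    # slightly dearer, but stable

for lam in (0.0, 1.0):     # lam=0 is the expected-value model
    pick = min(("A", loc_a), ("B", loc_b),
               key=lambda t: robust_cost(t[1], probs, lam))
    print(f"lambda={lam}: choose location {pick[0]}")
# The expected-value model picks the fragile location A; the robust
# objective shifts the choice to the stable location B.
```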

  8. Comparative-effectiveness research to aid population decision making by relating clinical outcomes and quality-adjusted life years.

    PubMed

    Campbell, Jonathan D; Zerzan, Judy; Garrison, Louis P; Libby, Anne M

    2013-04-01

    Comparative-effectiveness research (CER) at the population level is missing standardized approaches to quantify and weigh interventions in terms of their clinical risks, benefits, and uncertainty. We proposed an adapted CER framework for population decision making, provided example displays of the outputs, and discussed the implications for population decision makers. Building on decision-analytical modeling but excluding cost, we proposed a 2-step approach to CER that explicitly compared interventions in terms of clinical risks and benefits and linked this evidence to the quality-adjusted life year (QALY). The first step was a traditional intervention-specific evidence synthesis of risks and benefits. The second step was a decision-analytical model to simulate intervention-specific progression of disease over an appropriate time. The output was the ability to compare and quantitatively link clinical outcomes with QALYs. The outputs from these CER models include clinical risks, benefits, and QALYs over flexible and relevant time horizons. This approach yields an explicit, structured, and consistent quantitative framework to weigh all relevant clinical measures. Population decision makers can use this modeling framework and QALYs to aid in their judgment of the individual and collective risks and benefits of the alternatives over time. Future research should study effective communication of these domains for stakeholders. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.

  9. A new empirical solar radiation pressure model for BeiDou GEO satellites

    NASA Astrophysics Data System (ADS)

    Liu, Junhong; Gu, Defeng; Ju, Bing; Shen, Zhen; Lai, Yuwang; Yi, Dongyun

    2016-01-01

    Two classic empirical solar radiation pressure (SRP) models, the Extended Center for Orbit Determination in Europe (CODE) Orbit Models ECOM 5 and ECOM 9, have been widely used for precise orbit determination (POD) of Global Positioning System (GPS) Medium Earth Orbit (MEO) satellites. However, these two models are not suitable for BeiDou Geostationary Earth Orbit (GEO) satellites due to their special attitude control mode. Using the experimental design method, this paper proposes a new empirical SRP model for BeiDou GEO satellites, featuring three constant terms in the D, Y, and X directions, two sine terms in the D and X directions, and one cosine term in the Y direction. For the first time, it is revealed that the periodic terms in the D direction are more important than those in the Y and X directions for BeiDou GEO satellites. Compared with ECOM 5 and ECOM 9, the BeiDou GEO satellite orbits are significantly stabilized with the new SRP force model. The average orbit overlap root mean square (RMS) achieved by the proposed model is 7.5 cm in the radial component, clearly improved over the 37.4 and 13.2 cm of ECOM 5 and ECOM 9, respectively. In addition, the correlation coefficients between GEO orbit overlap precision and the elevation angle of the Sun decrease to -0.12, 0.21, and -0.03 in the radial, along-track, and cross-track components with the proposed model, compared with -0.94, -0.79, and -0.29 for ECOM 5 and -0.70, 0.21, and 0.10 for ECOM 9. Moreover, the standard deviation (STD) of Satellite Laser Ranging (SLR) residuals for the GEO satellite C01 is reduced by 37.4% and 16.1% relative to the ECOM 5 and ECOM 9 SRP models.
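
    A hedged sketch of the model's functional form as described in the abstract, with constants in D, Y, X, sine terms in D and X, and a cosine term in Y; the exact angular argument and the parameter values below are assumptions, not the paper's estimates.

```python
import numpy as np

def srp_accel(du, params):
    """ECOM-style empirical SRP acceleration in the D, Y, X frame.
    du is the angular argument (assumed here; the paper may use a
    different argument, e.g. Sun-satellite elongation)."""
    D0, Y0, X0, Ds, Xs, Yc = params
    a_d = D0 + Ds * np.sin(du)     # constant + sine term in D
    a_y = Y0 + Yc * np.cos(du)     # constant + cosine term in Y
    a_x = X0 + Xs * np.sin(du)     # constant + sine term in X
    return np.array([a_d, a_y, a_x])   # e.g. nm/s^2

# Evaluate over one revolution with illustrative parameter values
du = np.linspace(0.0, 2.0 * np.pi, 5)
print(srp_accel(du, params=(-90.0, 0.5, -0.3, 2.0, 1.5, 0.8)))
```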

  10. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
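
    A two-stage stand-in for the LARS-Kriging-PC idea using scikit-learn, assuming plain polynomial features in place of an orthonormal polynomial chaos basis and a Gaussian process fitted to the trend residuals rather than a joint universal-Kriging fit; the data are synthetic.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Lars
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(60, 2))                  # design of experiments
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.standard_normal(60)

# 1) Sparse polynomial trend via least-angle regression (LARS)
poly = PolynomialFeatures(degree=3, include_bias=False)
P = poly.fit_transform(X)
lars = Lars(n_nonzero_coefs=6).fit(P, y)              # retains few polynomials
trend = lars.predict(P)

# 2) Kriging (GP) on the residuals: a two-stage approximation of universal
#    Kriging with the selected polynomials as regression functions
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True).fit(X, y - trend)

X_new = rng.uniform(-1, 1, size=(5, 2))
y_hat = lars.predict(poly.transform(X_new)) + gp.predict(X_new)
print(y_hat)
```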

  11. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

    Since the traditional entropy value method still has low accuracy when evaluating the performance of mining projects, a performance evaluation model for mining projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established based on compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: when the compatibility matrix analysis meets the consistency requirement but the subjective and objective weights differ, both proportions are moderately adjusted, and on this basis the fuzzy evaluation matrix is constructed for performance evaluation. Simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model based on the improved entropy value method achieves higher assessment accuracy.
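
    A minimal sketch of the entropy value method plus a moderated blend with subjective (e.g., AHP-derived) weights; the score matrix, the subjective weights, and the blending factor alpha are illustrative.

```python
import numpy as np

def entropy_weights(X):
    """Objective criterion weights from the entropy value method.
    X: (m alternatives, n criteria), larger-is-better positive scores."""
    P = X / X.sum(axis=0)                      # normalize per criterion
    m = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(m)         # entropy per criterion
    d = 1.0 - e                                # degree of divergence
    return d / d.sum()

def combine(subjective, objective, alpha=0.5):
    """Moderated blend of subjective (AHP) and objective (entropy) weights."""
    w = alpha * np.asarray(subjective) + (1 - alpha) * objective
    return w / w.sum()

scores = np.array([[0.7, 0.4, 0.9],    # three mining projects,
                   [0.5, 0.8, 0.6],    # three performance criteria
                   [0.9, 0.5, 0.7]])
w = combine([0.5, 0.3, 0.2], entropy_weights(scores))
print("weights:", w, "ranking scores:", scores @ w)
```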

  12. A general method for the inclusion of radiation chemistry in astrochemical models.

    PubMed

    Shingledecker, Christopher N; Herbst, Eric

    2018-02-21

    In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid-phase chemistry. Such a theory can help strengthen the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks, for which little radiochemical data exist; however, the method can also be used as a starting point for better-studied species. We here apply our theory to the irradiation of H2O ice and compare the results with previous experimental data.

  13. An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles

    NASA Astrophysics Data System (ADS)

    Ni, Zao; Su, Tsung-chow; Dhanak, Manhar

    2018-04-01

    Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.

  14. The Modeling and Simulation of the Galvanic Coupling Intra-Body Communication via Handshake Channel.

    PubMed

    Li, Maoyuan; Song, Yong; Li, Wansong; Wang, Guangfa; Bu, Tianpeng; Zhao, Yufei; Hao, Qun

    2017-04-14

    Intra-body communication (IBC) is a technology that uses the conductive properties of the body to transmit signals, and information interaction by handshake is regarded as one of its important applications. In this paper, a method for modeling galvanic coupling IBC via the handshake channel is proposed, and the corresponding parameters are discussed. The mathematical model of this kind of IBC is then developed, and its validity is verified by measurements. Moreover, its characteristics are discussed and compared with those of IBC via a single body channel. Our results indicate that the proposed method lays a foundation for the theoretical analysis and application of IBC via the handshake channel.

  16. Can Coolness Predict Technology Adoption? Effects of Perceived Coolness on User Acceptance of Smartphones with Curved Screens.

    PubMed

    Kim, Ki Joon; Shin, Dong-Hee; Park, Eunil

    2015-09-01

    This study proposes an acceptance model for curved-screen smartphones, and explores how the sense of coolness induced by attractiveness, originality, subcultural appeal, and the utility of the curved screen promotes smartphone adoption. The results of structural equation modeling analyses (N = 246) show that these components of coolness (except utility) increase the acceptance of the technology by enhancing the smartphones' affectively driven qualities rather than their utilitarian ones. The proposed coolness model is then compared with the original technology acceptance model to validate that the coolness factors are indeed equally effective determinants of usage intention, as are the extensively studied usability factors such as perceived ease of use and usefulness.

  17. Oxidation stress evolution and relaxation of oxide film/metal substrate system

    NASA Astrophysics Data System (ADS)

    Dong, Xuelin; Feng, Xue; Hwang, Keh-Chih

    2012-07-01

    Stresses in an oxide film/metal substrate system are crucial to the reliability of the system at high temperature. Two models for predicting the stress evolution during isothermal oxidation are proposed. The deformation of the system is described by the curvature for single-surface oxidation. The creep strains of the oxide and metal and the lateral growth strain of the oxide are considered. The proposed models are compared with experimental results in the literature, which demonstrates that the elastic model, accounting only for elastic strain, overestimates the stress magnitude, whereas the creep model is consistent with the experimental data and captures the stress relaxation during oxidation. The effects of the parameter governing the lateral growth strain rate are also analyzed.

  18. Modified dwell time optimization model and its applications in subaperture polishing.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-05-20

    The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified dwell time optimization model based on an iterative numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model is inherently compatible with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. A simulated fabrication of a Φ200 mm workpiece with the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization requires much less time. Using the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly and inversely related to the convergence rate and polishing time, and a TIF size of ~1/7 of the workpiece size is preferred; and (2) the polishing time is less sensitive to the path interval, but increasing the interval markedly reduces the convergence rate, so a path interval of ~1/8-1/10 of the TIF size is appropriate. The proposed model is deployed on JR-1800 and MRF-180 machines. Figuring a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur flat with these machines yields RMS errors of 0.016λ and 0.013λ (λ=632.8 nm), respectively, validating the feasibility of the proposed dwell time model for subaperture polishing.
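
    A minimal 1-D sketch of an iterative, non-negativity-preserving dwell-time solution (a Richardson-Lucy-style multiplicative update standing in for the paper's scheme); the target removal map and Gaussian TIF are synthetic, and edge effects are simply excluded from the residual statistic rather than suppressed by extended surface forms.

```python
import numpy as np

def solve_dwell(error, tif, iters=800):
    """Non-negative dwell time t with conv(t, TIF) ~ error, via a
    Richardson-Lucy-style multiplicative update (1-D for brevity)."""
    t = np.full_like(error, error.mean() / tif.sum())
    for _ in range(iters):
        pred = np.convolve(t, tif, mode="same")
        ratio = error / np.maximum(pred, 1e-12)
        # correlate the ratio with the flipped TIF, then rescale
        t *= np.convolve(ratio, tif[::-1], mode="same") / tif.sum()
    return t

x = np.linspace(-1.0, 1.0, 200)
error = 0.6 + 0.4 * np.cos(3 * np.pi * x)      # target removal map (positive)
tif = np.exp(-0.5 * np.linspace(-3, 3, 31) ** 2)
tif /= tif.sum()                               # unit-removal tool influence function
t = solve_dwell(error, tif)
resid = error - np.convolve(t, tif, mode="same")
interior = slice(20, -20)                      # crude edge-effect exclusion
rms_in = np.sqrt((resid[interior] ** 2).mean())
print(f"interior residual RMS: {rms_in:.2e} (target std {error.std():.2e})")
```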

  19. A Novel RSSI Prediction Using Imperialist Competition Algorithm (ICA), Radial Basis Function (RBF) and Firefly Algorithm (FFA) in Wireless Networks.

    PubMed

    Goudarzi, Shidrokh; Haslina Hassan, Wan; Abdalla Hashim, Aisha-Hassan; Soleymani, Seyed Ahmad; Anisi, Mohammad Hossein; Zakaria, Omar M

    2016-01-01

    This study aims to design a vertical handover prediction method to minimize unnecessary handovers for a mobile node (MN) during the vertical handover process. It relies on a novel method for predicting the received signal strength indicator (RSSI), referred to as IRBF-FFA, designed by using the imperialist competition algorithm (ICA) to train the radial basis function (RBF) network and hybridizing with the firefly algorithm (FFA) to predict the optimal solution. The prediction accuracy of the proposed IRBF-FFA model was validated by comparison with support vector machine (SVM) and multilayer perceptron (MLP) models. To assess the model's performance, we measured the coefficient of determination (R²), correlation coefficient (r), root mean square error (RMSE), and mean absolute percentage error (MAPE). The results indicate that the IRBF-FFA model provides more precise predictions than the benchmark models, namely SVMs and MLPs. The performance of the proposed model is analyzed through simulated and real-time RSSI measurements. The results also suggest that the IRBF-FFA model can be applied as an efficient technique for the accurate prediction of vertical handover.

  20. Tracking boundary movement and exterior shape modelling in lung EIT imaging.

    PubMed

    Biguri, A; Grychtol, B; Adler, A; Soleimani, M

    2015-06-01

    Electrical impedance tomography (EIT) has shown significant promise for lung imaging. One key challenge for EIT in this application is the movement of electrodes during breathing, which introduces artefacts in reconstructed images. Various approaches have been proposed to compensate for electrode movement, but no comparison of these approaches is available. This paper analyses boundary model mismatch and electrode movement in lung EIT. The aim is to evaluate the extent to which various algorithms tolerate movement, and to determine if a patient specific model is required for EIT lung imaging. Movement data are simulated from a CT-based model, and image analysis is performed using quantitative figures of merit. The electrode movement is modelled based on expected values of chest movement and an extended Jacobian method is proposed to make use of exterior boundary tracking. Results show that a dynamical boundary tracking is the most robust method against any movement, but is computationally more expensive. Simultaneous electrode movement and conductivity reconstruction algorithms show increased robustness compared to only conductivity reconstruction. The results of this comparative study can help develop a better understanding of the impact of shape model mismatch and electrode movement in lung EIT.

  1. A mean-density model of ionic surfactants for the dispersion of carbon nanotubes in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Joung, Young Soo

    2018-05-01

    We propose a new analytical model of ionic surfactants used for the dispersion of carbon nanotubes (CNTs) in aqueous solutions. Although ionic surfactants are commonly used to facilitate the dispersion of CNTs in aqueous solutions, understanding the dispersion process is challenging and time-consuming owing to its complexity and nonlinearity. In this work, we develop a mean-density model of ionic surfactants to simplify the calculation of interaction forces between CNTs stabilized by ionic surfactants. Using this model, we can evaluate various interaction forces between the CNTs and ionic surfactants under different conditions. The dispersion mechanism is investigated by estimating the potential of mean force (PMF) as a function of van der Waals forces, electrostatic forces, interfacial tension, and osmotic pressure. To verify the proposed model, we compare the PMFs derived using our method with those derived from molecular dynamics simulations using comparable CNTs and ionic surfactants. Notably, for stable dispersions, the osmotic pressure and interfacial energy are important for long-range and short-range interactions, respectively, in comparison with the effect of electrostatic forces. Our model effectively prescribes specific surfactants and their concentrations to achieve stable aqueous suspensions of CNTs.

  2. Economic tour package model using heuristic

    NASA Astrophysics Data System (ADS)

    Rahman, Syariza Abdul; Benjamin, Aida Mauziah; Bakar, Engku Muhammad Nazri Engku Abu

    2014-07-01

    A tour package is a prearranged tour that includes products and services such as food, activities, accommodation, and transportation, sold at a single price. Since competitiveness within the tourism industry is very high, many tour agents try to provide attractive tour packages to meet tourist satisfaction as much as possible. Among the criteria considered by tourists are the number of places to be visited and the cost of the package. Previous studies indicate that tourists tend to choose economical tour packages and aim to visit as many places as they can. Thus, this study proposes a tour package model using a heuristic approach. The aim is to find economical tour packages that cover as many places as possible in a given geographical area, particularly on Langkawi Island. The proposed model considers a single starting point, with the tour starting and ending at an identified hotel. This study covers the 31 most attractive places on Langkawi Island from various categories of tourist attractions. In addition, lunch and dinner periods are allocated in the proposed itineraries, covering 11 popular restaurants around the island. In developing the itinerary, the heuristic approach considers a time window for each site (hotel/restaurant/place) to reflect real-world implementation. We present three itineraries with different time constraints (1-day, 2-day, and 3-day packages). The economic model aims to minimize the tour package cost by considering the entrance fee of each visited place. We compare the proposed model with the uneconomic model from our previous study, which places no limit on cost and aims only to maximize the number of places visited. Comparison between the uneconomic and economic itineraries shows that the proposed model successfully minimizes the tour cost while covering the maximum number of places.

  3. Right-Sizing Statistical Models for Longitudinal Data

    PubMed Central

    Wood, Phillip K.; Steinley, Douglas; Jackson, Kristina M.

    2015-01-01

    Arguments are proposed that researchers using longitudinal data should consider more and less complex statistical model alternatives to their initially chosen techniques in an effort to “right-size” the model to the data at hand. Such model comparisons may alert researchers who use poorly fitting overly parsimonious models to more complex better fitting alternatives, and, alternatively, may identify more parsimonious alternatives to overly complex (and perhaps empirically under-identified and/or less powerful) statistical models. A general framework is proposed for considering (often nested) relationships between a variety of psychometric and growth curve models. A three-step approach is proposed in which models are evaluated based on the number and patterning of variance components prior to selection of better-fitting growth models that explain both mean and variation/covariation patterns. The orthogonal, free-curve slope-intercept (FCSI) growth model is considered as a general model which includes, as special cases, many models including the Factor Mean model (FM, McArdle & Epstein, 1987), McDonald's (1967) linearly constrained factor model, Hierarchical Linear Models (HLM), Repeated Measures MANOVA, and the Linear Slope Intercept (LinearSI) Growth Model. The FCSI model, in turn, is nested within the Tuckerized factor model. The approach is illustrated by comparing alternative models in a longitudinal study of children's vocabulary and by comparison of several candidate parametric growth and chronometric models in a Monte Carlo study. PMID:26237507

  4. Maintenance of genetic variation with a frequency-dependent selection model as compared to the overdominant model.

    PubMed

    Hedrick, P W

    1972-12-01

    A frequency-dependent selection model proposed by Huang, Singh and Kojima (1971) was found to be more effective at maintaining genetic variation in a finite population than the overdominant model. The fourth moment parameter of the distribution of unfixed states showed that there was a more platykurtic distribution for the frequency-dependent model. This agreed well with the expected gene frequency change found for an infinite population.
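
    A minimal Wright-Fisher sketch contrasting the two selection schemes, assuming illustrative marginal fitness functions (a linearly frequency-dependent pair, and an overdominant genotype model reduced to marginal allele fitnesses); counting runs that retain polymorphism plays the role of the paper's distribution-of-unfixed-states analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

def run(p0, fitness, N=100, gens=2000):
    """Wright-Fisher sampling with marginal allele fitnesses wA(p), wa(p)."""
    p = p0
    for _ in range(gens):
        if p == 0.0 or p == 1.0:
            break                                   # allele fixed or lost
        wA, wa = fitness(p)
        pw = p * wA / (p * wA + (1 - p) * wa)       # selection-weighted frequency
        p = rng.binomial(2 * N, pw) / (2 * N)       # genetic drift
    return p

s = 0.5
freq_dep = lambda p: (1.5 - p, 0.5 + p)             # rarer allele is fitter
overdom = lambda p: (p * (1 - s) + (1 - p),         # heterozygote fittest,
                     (1 - p) * (1 - s) + p)         # reduced to marginals

for name, fit in (("frequency-dependent", freq_dep), ("overdominant", overdom)):
    kept = sum(0.0 < run(0.5, fit) < 1.0 for _ in range(200))
    print(f"{name}: {kept}/200 runs still polymorphic after 2000 generations")
```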

  5. [Public health conceptual models and paradigms].

    PubMed

    Hernández-Girón, Carlos; Orozco-Núñez, Emanuel; Arredondo-López, Armando

    2012-01-01

    The epidemiological transition model proposed by Omran at the beginning of the 1970s (decreasing fecundity rates and increasing life expectancy), together with changes in lifestyle and diet, pointed to increased mortality from chronic degenerative causes. This essay therefore discusses and comparatively analyzes several currents of thought, taking as its common thread the epidemiological changes identified in different eras or stages and their relationships with public health models or conceptual frameworks. Discussing public health paradigms leads to a historical recapitulation of conceptual models ranging from magical-religious conceptions to ecological and socio-medical models. M. Susser proposed three eras in the discipline's evolution in his address on the future of epidemiology. The epidemiological changes analyzed through different approaches constitute elements of analysis that all the models discussed in this essay include, delimiting their contributions and determining variables.

  6. Real-time deformations of organ based on structural mechanics for surgical simulators

    NASA Astrophysics Data System (ADS)

    Nakaguchi, Toshiya; Tagaya, Masashi; Tamura, Nobuhiko; Tsumura, Norimichi; Miyake, Yoichi

    2006-03-01

    This research proposes a deformation model of organs for the development of medical training systems using virtual reality (VR) technology. First, the proposed model calculates the strains of the coordinate axes; the deformation is then obtained by mapping the object's coordinates onto the strained coordinates. Beams are assumed in the coordinate space to calculate the strain of the coordinate axes, and the forces acting on the object are converted to forces applied to the beams. The bend and twist of the beams are calculated based on the theory of structural mechanics, with the bend derived by the finite element method. We propose two deformation methods that differ in the placement of the beams: one locates the beams along the three orthogonal axes (x, y, z), and the other locates a beam in the area where the deformation is largest. In addition, the strain of the coordinate axis is attenuated in proportion to the distance from the point of action, to account for stress attenuation, a viscoelastic feature of organs. The proposed model has a lower computational cost than conventional deformation methods because it does not require dividing the object into elastic elements. The model was implemented in a laparoscopic surgery training system, where real-time deformation was realized.

  7. A fuzzy rumor spreading model based on transmission capacity

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Xu, Jiuping; Wu, Yue

    This paper proposes a rumor spreading model that considers three main factors: event importance, event ambiguity, and the public's critical sense, each defined by decision makers using linguistic descriptions and then transformed into triangular fuzzy numbers. To calculate the resultant force of these three factors, a transmission capacity and a new parameter category with fuzzy variables are determined. A rumor spreading model is then proposed with fuzzy parameters rather than the fixed parameters of traditional models; it considers the comprehensive factors affecting rumors from three aspects rather than special factors from a single aspect. The proposed model is tested with different parameters under several conditions on BA networks, and three special cases are simulated. The simulation results for all three cases suggest that events of low importance, events that merely clarify facts, and events viewed with strong criticism do not give rise to rumors, so the model assessment agrees with reality. Model parameters were then determined and applied to an analysis of the 7.23 Yong-Wen line major transportation accident (YWMTA). Comparison of the simulated data with real data from this accident demonstrates that the interval for the rumor spreading key point in the model is accurate, and that the key point for the YWMTA rumor spread falls within the range estimated by the model.
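
    A minimal sketch of the fuzzy-parameter idea: linguistic levels become triangular fuzzy numbers, which are combined into a transmission-capacity score. The weighting scheme and term definitions below are hypothetical, not the paper's.

```python
class TFN:
    """Triangular fuzzy number (l, m, u) with the few operations needed
    to combine linguistic ratings into a transmission-capacity score."""
    def __init__(self, l, m, u):
        self.l, self.m, self.u = l, m, u
    def __add__(self, o):
        return TFN(self.l + o.l, self.m + o.m, self.u + o.u)
    def scale(self, k):
        return TFN(k * self.l, k * self.m, k * self.u)
    def defuzzify(self):
        return (self.l + self.m + self.u) / 3.0   # centroid of a triangle

# Linguistic terms given by decision makers
terms = {"low": TFN(0.0, 0.1, 0.3), "medium": TFN(0.3, 0.5, 0.7),
         "high": TFN(0.7, 0.9, 1.0)}
# Hypothetical resultant force: weighted sum of the three factors
factors = [("importance", "high", 0.4), ("ambiguity", "medium", 0.4),
           ("lack of critical sense", "low", 0.2)]
capacity = TFN(0, 0, 0)
for _, level, weight in factors:
    capacity = capacity + terms[level].scale(weight)
print(f"transmission capacity ~ {capacity.defuzzify():.2f}")
```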

  8. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components is proposed. A practical and efficient method for reliability and sensitivity analysis of complex components with arbitrary distribution parameters is investigated using the perturbation method, the response surface method, the Edgeworth series, and a sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. Comparison with Monte Carlo simulation demonstrates that the proposed methodology provides an accurate, convergent, and computationally efficient approach to reliability-analysis based finite element modeling in engineering practice.

  9. Cache-Oblivious parallel SIMD Viterbi decoding for sequence search in HMMER.

    PubMed

    Ferreira, Miguel; Roma, Nuno; Russo, Luis M S

    2014-05-30

    HMMER is a commonly used bioinformatics tool based on Hidden Markov Models (HMMs) to analyze and process biological sequences. One of its main homology engines is based on the Viterbi decoding algorithm, which was already highly parallelized and optimized using Farrar's striped processing pattern with Intel SSE2 instruction set extension. A new SIMD vectorization of the Viterbi decoding algorithm is proposed, based on an SSE2 inter-task parallelization approach similar to the DNA alignment algorithm proposed by Rognes. Besides this alternative vectorization scheme, the proposed implementation also introduces a new partitioning of the Markov model that allows a significantly more efficient exploitation of the cache locality. Such optimization, together with an improved loading of the emission scores, allows the achievement of a constant processing throughput, regardless of the innermost-cache size and of the dimension of the considered model. The proposed optimized vectorization of the Viterbi decoding algorithm was extensively evaluated and compared with the HMMER3 decoder to process DNA and protein datasets, proving to be a rather competitive alternative implementation. Being always faster than the already highly optimized ViterbiFilter implementation of HMMER3, the proposed Cache-Oblivious Parallel SIMD Viterbi (COPS) implementation provides a constant throughput and offers a processing speedup as high as two times faster, depending on the model's size.
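
    For reference, a plain log-space Viterbi decoder; the paper's contribution, inter-task SIMD vectorization and cache-friendly model partitioning, is precisely what this scalar sketch omits. The toy HMM below is illustrative.

```python
import numpy as np

def viterbi(log_A, log_B, log_pi, obs):
    """Plain log-space Viterbi decoder (scalar reference version)."""
    S, T = log_A.shape[0], len(obs)
    delta = log_pi + log_B[:, obs[0]]          # best log-score per state
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = delta[:, None] + log_A          # (from, to) transition scores
        back[t] = cand.argmax(axis=0)          # best predecessor per state
        delta = cand.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):              # trace the backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1], float(delta.max())

log = np.log
A = np.array([[0.9, 0.1], [0.2, 0.8]])         # state transitions
B = np.array([[0.7, 0.3], [0.1, 0.9]])         # emission probabilities
pi = np.array([0.5, 0.5])
states, score = viterbi(log(A), log(B), log(pi), obs=[0, 1, 1, 1, 0])
print(states, score)
```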

  10. Modeling and Validation of the Three Dimensional Deflection of an MRI-Compatible Magnetically-Actuated Steerable Catheter

    PubMed Central

    Liu, Taoming; Poirot, Nate Lombard; Franson, Dominique; Seiberlich, Nicole; Griswold, Mark A.; Çavuşoğlu, M. Cenk

    2016-01-01

    Objective This paper presents the three dimensional kinematic modeling of a novel steerable robotic ablation catheter system. The catheter, embedded with a set of current-carrying micro-coils, is actuated by the magnetic forces generated by the magnetic field of the magnetic resonance imaging (MRI) scanner. Methods This paper develops a 3D model of the MRI actuated steerable catheter system by using finite differences approach. For each finite segment, a quasi-static torque-deflection equilibrium equation is calculated using beam theory. By using the deflection displacements and torsion angles, the kinematic model of the catheter system is derived. Results The proposed models are validated by comparing the simulation results of the proposed model with the experimental results of a hardware prototype of the catheter design. The maximum tip deflection error is 4.70 mm and the maximum root-mean-square (RMS) error of the shape estimation is 3.48 mm. Conclusion The results demonstrate that the proposed model can successfully estimate the deflection motion of the catheter. Significance The presented three dimensional deflection model of the magnetically controlled catheter design paves the way to efficient control of the robotic catheter for treatment of atrial fibrillation. PMID:26731519

  11. Well test mathematical model for fractures network in tight oil reservoirs

    NASA Astrophysics Data System (ADS)

    Diwu, Pengxiang; Liu, Tongjing; Jiang, Baoyi; Wang, Rui; Yang, Peidie; Yang, Jiping; Wang, Zhaoming

    2018-02-01

    Well testing, especially build-up testing, has been applied widely in the development of tight oil reservoirs, since it is the only available low-cost way to directly quantify flow ability and formation heterogeneity parameters. However, because of the fracture network near the wellbore, generated by artificial fracturing linking up natural fractures, traditional infinite- and finite-conductivity fracture models usually deviate significantly in field application. In this work, considering the random distribution of natural fractures, a physical model of the fracture network is proposed; at large scale it behaves as a composite model. Consequently, a nonhomogeneous composite mathematical model is established with a threshold pressure gradient. To solve this model semi-analytically, we propose a solution approach combining the Laplace transform and Bessel functions of imaginary argument, verified by comparison with an existing analytical solution. Type curves generated from the semi-analytical solution match the characteristics of typical tight oil reservoirs, which show late-time upwarping rather than parallel lines of slope 1/2 or 1/4. The composite model can therefore be used for pressure interpretation of artificially fractured wells in tight oil reservoirs.
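
    The semi-analytical workflow ends with a numerical inversion of the Laplace-domain solution; a common choice in well testing is the Gaver-Stehfest algorithm, sketched below (whether the paper uses this particular inversion is an assumption), checked here against a transform with a known inverse.

```python
import math

def stehfest(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace-domain solution F(s).
    N must be even; N = 10-16 is typical for smooth well-test responses."""
    ln2t = math.log(2.0) / t
    total = 0.0
    for i in range(1, N + 1):
        V = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            V += (k ** (N // 2) * math.factorial(2 * k) /
                  (math.factorial(N // 2 - k) * math.factorial(k) *
                   math.factorial(k - 1) * math.factorial(i - k) *
                   math.factorial(2 * k - i)))
        V *= (-1) ** (N // 2 + i)
        total += V * F(i * ln2t)
    return ln2t * total

# Sanity check on a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest(lambda s: 1.0 / (s + 1.0), t), math.exp(-t))
```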

  12. The sound of friction: Real-time models, playability and musical applications

    NASA Astrophysics Data System (ADS)

    Serafin, Stefania

    Friction, the tangential force between objects in contact, must in most engineering applications be removed as a source of noise and instability. In musical applications, friction is a desirable component, being the sound production mechanism of several musical instruments such as bowed strings, musical saws, and rubbed bowls, and of any other sonority produced by interactions between rubbed dry surfaces. The goal of the dissertation is to simulate different instruments whose main excitation mechanism is friction. An efficient yet accurate model of a bowed string instrument, combining the latest results in violin acoustics with the efficient digital waveguide approach, is provided. In particular, the proposed bowed string physical model uses a thermodynamic friction model in which the finite width of the bow is taken into account; this solution is compared with the recently developed elasto-plastic friction models used in haptics and robotics. Different solutions are also proposed to model the body of the instrument. Other, less common instruments driven by friction are also modeled, and the elasto-plastic model is used to provide audio-visual simulations of everyday friction sounds such as squeaking doors and rubbed wine glasses. Finally, playability evaluations and musical applications of the models are discussed.

  13. Scatter and crosstalk corrections for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging using a CZT SPECT system with pinhole collimators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fan, Peng; Hutton, Brian F.; Holstensson, Maria

    2015-12-15

    Purpose: The energy spectrum for a cadmium zinc telluride (CZT) detector has a low energy tail due to incomplete charge collection and intercrystal scattering. Due to these solid-state detector effects, scatter would be overestimated if the conventional triple-energy window (TEW) method is used for scatter and crosstalk corrections in CZT-based imaging systems. The objective of this work is to develop a scatter and crosstalk correction method for {sup 99m}Tc/{sup 123}I dual-radionuclide imaging for a CZT-based dedicated cardiac SPECT system with pinhole collimators (GE Discovery NM 530c/570c). Methods: A tailing model was developed to account for the low energy tail effects of the CZT detector. The parameters of the model were obtained using {sup 99m}Tc and {sup 123}I point source measurements. A scatter model was defined to characterize the relationship between down-scatter and self-scatter projections. The parameters for this model were obtained from Monte Carlo simulation using SIMIND. The tailing and scatter models were further incorporated into a projection count model, and the primary and self-scatter projections of each radionuclide were determined with a maximum likelihood expectation maximization (MLEM) iterative estimation approach. The extracted scatter and crosstalk projections were then incorporated into MLEM image reconstruction as an additive term in forward projection to obtain scatter- and crosstalk-corrected images. The proposed method was validated using Monte Carlo simulation, a line source experiment, anthropomorphic torso phantom studies, and patient studies, and its performance was compared to that obtained with the conventional TEW method. Results: Monte Carlo simulations and the line source experiment demonstrated that the TEW method overestimated scatter, while the proposed method provided more accurate scatter estimation by considering the low energy tail effect. In the phantom study, improved defect contrasts were observed with both correction methods compared to no correction, especially for the images of {sup 99m}Tc in dual-radionuclide imaging where there is heavy contamination from {sup 123}I. In this case, the nontransmural defect contrast was improved from 0.39 to 0.47 with the TEW method and to 0.51 with the proposed method, and the transmural defect contrast was improved from 0.62 to 0.74 with the TEW method and to 0.73 with the proposed method. In the patient study, the proposed method provided higher myocardium-to-blood pool contrast than the TEW method. Similar to the phantom experiment, the improvement was most substantial for the images of {sup 99m}Tc in dual-radionuclide imaging. In this case, the myocardium-to-blood pool ratio was improved from 7.0 to 38.3 with the TEW method and to 63.6 with the proposed method. Compared to the TEW method, the proposed method also provided higher count levels in the reconstructed images in both phantom and patient studies, indicating reduced overestimation of scatter. Using the proposed method, consistent reconstruction results were obtained for both single-radionuclide data with scatter correction and dual-radionuclide data with scatter and crosstalk corrections, in both phantom and human studies. Conclusions: The authors demonstrate that the TEW method leads to overestimation of scatter and crosstalk for the CZT-based imaging system, while the proposed scatter and crosstalk correction method provides more accurate self-scatter and down-scatter estimations for quantitative single-radionuclide and dual-radionuclide imaging.

  14. Regional Lung Ventilation Analysis Using Temporally Resolved Magnetic Resonance Imaging.

    PubMed

    Kolb, Christoph; Wetscherek, Andreas; Buzan, Maria Teodora; Werner, René; Rank, Christopher M; Kachelrieß, Marc; Kreuter, Michael; Dinkel, Julien; Heußel, Claus Peter; Maier-Hein, Klaus

    We propose a computer-aided method for regional ventilation analysis and observation of lung diseases in temporally resolved magnetic resonance imaging (4D MRI). A shape model-based segmentation and registration workflow was used to create an atlas-derived reference system in which regional tissue motion can be quantified and multimodal image data can be compared regionally. Model-based temporal registration of the lung surfaces in 4D MRI data was compared with the registration of 4D computed tomography (CT) images. A ventilation analysis was performed on 4D MR images of patients with lung fibrosis; 4D MR ventilation maps were compared with corresponding diagnostic 3D CT images of the patients and 4D CT maps of subjects without impaired lung function (serving as reference). Comparison between the computed patient-specific 4D MR regional ventilation maps and diagnostic CT images shows good correlation in conspicuous regions. Comparison to 4D CT-derived ventilation maps supports the plausibility of the 4D MR maps. Dynamic MRI-based flow-volume loops and spirograms further visualize the free-breathing behavior. The proposed methods allow for 4D MR-based regional analysis of tissue dynamics and ventilation in spontaneous breathing and comparison of patient data. The proposed atlas-based reference coordinate system provides an automated manner of annotating and comparing multimodal lung image data.

  15. A new cooperative MIMO scheme based on SM for energy-efficiency improvement in wireless sensor network.

    PubMed

    Peng, Yuyang; Choi, Jaeho

    2014-01-01

    Improving energy efficiency in wireless sensor networks (WSNs) has attracted considerable attention. The multiple-input multiple-output (MIMO) technique has been proven a good candidate for improving energy efficiency, but it may not be feasible in WSNs due to the size limitation of the sensor node. As a solution, the cooperative MIMO (CMIMO) technique overcomes this constraint and shows dramatically good performance. In this paper, a new CMIMO scheme based on the spatial modulation (SM) technique, named CMIMO-SM, is proposed for energy-efficiency improvement. We first establish the system model of CMIMO-SM and introduce the transmission approach graphically. In order to evaluate the performance of the proposed scheme, a detailed analysis of energy consumption per bit compared with conventional CMIMO is presented. Guided by this new scheme, we then extend CMIMO-SM to a multihop clustered WSN to achieve further energy efficiency by finding an optimal hop length; the traditional equidistant-hop scheme serves as the comparison. Results from simulations and numerical experiments indicate that the proposed scheme achieves significant savings in total energy consumption. Combining the proposed scheme with monitoring sensor nodes will provide good performance in arbitrarily deployed WSNs such as forest fire detection systems.

  16. Constructing and predicting solitary pattern solutions for nonlinear time-fractional dispersive partial differential equations

    NASA Astrophysics Data System (ADS)

    Arqub, Omar Abu; El-Ajou, Ahmad; Momani, Shaher

    2015-07-01

    Building fractional mathematical models for specific phenomena and developing numerical or analytical solutions for these fractional mathematical models are crucial issues in mathematics, physics, and engineering. In this work, a new analytical technique for constructing and predicting solitary pattern solutions of time-fractional dispersive partial differential equations is proposed based on the generalized Taylor series formula and residual error function. The new approach provides solutions in the form of a rapidly convergent series with easily computable components using symbolic computation software. For method evaluation and validation, the proposed technique was applied to three different models and compared with some of the well-known methods. The resultant simulations clearly demonstrate the superiority and potentiality of the proposed technique in terms of the quality performance and accuracy of substructure preservation in the construct, as well as the prediction of solitary pattern solutions for time-fractional dispersive partial differential equations.

  17. Image segmentation on adaptive edge-preserving smoothing

    NASA Astrophysics Data System (ADS)

    He, Kun; Wang, Dan; Zheng, Xiuqing

    2016-09-01

    Nowadays, typical active contour models are widely applied in image segmentation; however, they perform poorly on real images with inhomogeneous subregions. To overcome this drawback, this paper proposes an edge-preserving smoothing image segmentation algorithm. First, the edge-preserving smoothing conditions for image segmentation are analyzed and an edge-preserving smoothing model inspired by total variation is constructed; the proposed model can smooth inhomogeneous subregions while preserving edges. Then, a clustering algorithm that reasonably trades off edge preservation against subregion smoothing according to local information is employed to learn the edge-preserving parameter adaptively. Finally, based on the confidence level of the segmented subregions, a smoothing convergence condition is constructed to avoid oversmoothing. Experiments indicate that the proposed algorithm outperforms other segmentation algorithms in precision, recall, and F-measure, and is insensitive to noise and inhomogeneous regions.

  18. Fuzzy observer-based control for maximum power-point tracking of a photovoltaic system

    NASA Astrophysics Data System (ADS)

    Allouche, M.; Dahech, K.; Chaabane, M.; Mehdi, D.

    2018-04-01

    This paper presents a novel fuzzy control design method for maximum power-point tracking (MPPT) via a Takagi and Sugeno (TS) fuzzy model-based approach. A knowledge-based dynamic model of the PV system is first developed, leading to a TS representation by a simple convex polytopic transformation. Then, based on this exact fuzzy representation, an H∞ observer-based fuzzy controller is proposed to achieve MPPT even under varying climatic conditions. A specified TS reference model is designed to generate the optimum trajectory, which must be tracked to ensure maximum power operation. The controller and observer gains are obtained in a one-step procedure by solving a set of linear matrix inequalities (LMIs). The proposed method is compared with some classical MPPT techniques in terms of convergence speed and tracking accuracy. Finally, various simulation and experimental tests are carried out to illustrate the effectiveness of the proposed TS fuzzy MPPT strategy.

  19. A study of photon propagation in free-space based on hybrid radiosity-radiance theorem.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Liang, Jimin; Wang, Lin; Yang, Da'an; Garofalakis, Anikitos; Ripoll, Jorge; Tian, Jie

    2009-08-31

    Noncontact optical imaging has attracted increasing attention in recent years due to its significant advantages in detection sensitivity, spatial resolution, image quality, and system simplicity compared with contact measurement. However, photon transport simulation in free space remains an extremely challenging topic owing to the complexity of the optical system. For this purpose, this paper proposes an analytical model for photon propagation in free space based on the hybrid radiosity-radiance theorem (HRRT). It combines Lambert's cosine law and the radiance theorem to handle the influence of the complicated lens and to simplify the photon transport process in the optical system. The performance of the proposed model is evaluated and validated with numerical simulations and physical experiments. Qualitative comparison results of the flux distribution at the detector are presented. In particular, error analysis demonstrates the feasibility and potential of the proposed model for simulating photon propagation in free space.

  20. Autoregressive statistical pattern recognition algorithms for damage detection in civil structures

    NASA Astrophysics Data System (ADS)

    Yao, Ruigen; Pakzad, Shamim N.

    2012-08-01

    Statistical pattern recognition has recently emerged as a promising set of complementary methods to system identification for automatic structural damage assessment. Its essence is to use well-known concepts in statistics for boundary definition of different pattern classes, such as those for damaged and undamaged structures. In this paper, several statistical pattern recognition algorithms using autoregressive models, including statistical control charts and hypothesis testing, are reviewed as potentially competitive damage detection techniques. To enhance the performance of statistical methods, new feature extraction techniques using model spectra and residual autocorrelation, together with resampling-based threshold construction methods, are proposed. Subsequently, simulated acceleration data from a multi degree-of-freedom system is generated to test and compare the efficiency of the existing and proposed algorithms. Data from laboratory experiments conducted on a truss and a large-scale bridge slab model are then used to further validate the damage detection methods and demonstrate the superior performance of proposed algorithms.
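
    A minimal sketch of the AR-based feature pipeline: fit an AR model to healthy data, use residual autocorrelation as the damage-sensitive feature, and set the alarm threshold by resampling, in the spirit of what the paper proposes; the two-coefficient "structure" and the damage shift below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

def design(x, order):
    """Lagged design matrix and targets for a least-squares AR(order) fit."""
    X = np.column_stack([x[order - i - 1:len(x) - i - 1] for i in range(order)])
    return X, x[order:]

def resid_autocorr(r, lag=1):
    """Lag-1 residual autocorrelation: ~0 for a well-fitting AR model,
    inflated when damage changes the underlying dynamics."""
    r = r - r.mean()
    return float(r[:-lag] @ r[lag:] / (r @ r))

def response(a1, a2, n=4000):
    # toy structural response: AR(2) dynamics driven by white noise
    x = np.zeros(n); e = rng.standard_normal(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + e[t]
    return x

order = 4
Xh, yh = design(response(1.6, -0.9), order)       # healthy training record
coef, *_ = np.linalg.lstsq(Xh, yh, rcond=None)
resid = yh - Xh @ coef
# Resampling-based threshold: permuting residuals destroys serial
# correlation, giving the feature's null distribution for the healthy state
null = [resid_autocorr(rng.permutation(resid)) for _ in range(500)]
thresh = np.quantile(np.abs(null), 0.99)
# Score a new record (slightly shifted dynamics = simulated damage)
Xn, yn = design(response(1.5, -0.85), order)
feat = resid_autocorr(yn - Xn @ coef)
print(f"|feature|={abs(feat):.3f}  threshold={thresh:.3f}  "
      f"damage flagged: {abs(feat) > thresh}")
```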
