Sample records for prediction methodology based

  1. A micromechanics-based strength prediction methodology for notched metal matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1992-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and post-fatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  2. A micromechanics-based strength prediction methodology for notched metal-matrix composites

    NASA Technical Reports Server (NTRS)

    Bigelow, C. A.

    1993-01-01

    An analytical micromechanics-based strength prediction methodology was developed to predict failure of notched metal matrix composites. The stress-strain behavior and notched strength of two metal matrix composites, boron/aluminum (B/Al) and silicon-carbide/titanium (SCS-6/Ti-15-3), were predicted. The prediction methodology combines analytical techniques ranging from a three-dimensional finite element analysis of a notched specimen to a micromechanical model of a single fiber. In the B/Al laminates, a fiber failure criterion based on the axial and shear stress in the fiber accurately predicted laminate failure for a variety of layups and notch-length to specimen-width ratios with both circular holes and sharp notches when matrix plasticity was included in the analysis. For the SCS-6/Ti-15-3 laminates, a fiber failure criterion based on the axial stress in the fiber correlated well with experimental results for static and postfatigue residual strengths when fiber-matrix debonding and matrix cracking were included in the analysis. The micromechanics-based strength prediction methodology offers a direct approach to strength prediction by modeling behavior and damage on a constituent level, thus explicitly including matrix nonlinearity, fiber-matrix debonding, and matrix cracking.

  3. Predicting Dissertation Methodology Choice among Doctoral Candidates at a Faith-Based University

    ERIC Educational Resources Information Center

    Lunde, Rebecca

    2017-01-01

    Limited research has investigated dissertation methodology choice and the factors that contribute to this choice. Quantitative research is based in mathematics and scientific positivism, and qualitative research is based in constructivism. These underlying philosophical differences raise the question of whether certain factors predict dissertation…

  4. Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul G.

    2000-01-01

    The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using a Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as with conventional predictive control methods. It is also shown that, in spite of its model-based nature, GPC has good robustness properties, being a special case of receding horizon control. The conditions for choosing tuning parameters for GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
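    As a rough illustration of the receding-horizon idea behind GPC, the following minimal sketch solves the unconstrained quadratic prediction problem for a small, made-up discrete-time linear model; the matrices, horizon, and weighting are illustrative assumptions, standing in neither for the thesis' aircraft models nor for its neural-network extension.

      import numpy as np

      def gpc_control(A, B, C, x0, r, Np, lam):
          """One unconstrained GPC step: stack predictions y = F x0 + Phi U over the
          horizon Np and minimize ||r - y||^2 + lam*||U||^2 in closed form."""
          nx, nu = B.shape
          ny = C.shape[0]
          F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(Np)])
          Phi = np.zeros((Np * ny, Np * nu))
          for i in range(Np):
              for j in range(i + 1):
                  Phi[i * ny:(i + 1) * ny, j * nu:(j + 1) * nu] = (
                      C @ np.linalg.matrix_power(A, i - j) @ B)
          U = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Np * nu),
                              Phi.T @ (r - F @ x0))
          return U[:nu]                  # receding horizon: apply only the first move

      # illustrative 2-state model, constant unit reference, 50 closed-loop steps
      A = np.array([[1.0, 0.1], [0.0, 0.9]])
      B = np.array([[0.0], [0.1]])
      C = np.array([[1.0, 0.0]])
      x = np.zeros(2)
      for _ in range(50):
          u = gpc_control(A, B, C, x, r=np.ones(10), Np=10, lam=0.1)
          x = A @ x + B @ u

    In the NGPC setting described above, the linear predictor would be replaced by a neural-network model, but the receding-horizon cost minimization is the same.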

  5. A methodology for reduced order modeling and calibration of the upper atmosphere

    NASA Astrophysics Data System (ADS)

    Mehta, Piyush M.; Linares, Richard

    2017-10-01

    Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. Also, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
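    As a rough illustration of the proper-orthogonal-decomposition step described above, the sketch below extracts a small set of POD modes from a snapshot matrix via the singular value decomposition; the snapshot data are random placeholders standing in for gridded density output of a physics-based model such as NRLMSISE-00.

      import numpy as np

      # snapshot matrix: each column is one flattened density field from the full model
      # (random placeholder data)
      n_grid, n_snapshots = 5000, 200
      X = np.random.rand(n_grid, n_snapshots)

      X_mean = X.mean(axis=1, keepdims=True)
      U, s, Vt = np.linalg.svd(X - X_mean, full_matrices=False)

      r = 10                                  # keep a small number of POD modes
      modes = U[:, :r]                        # spatial basis
      coeffs = modes.T @ (X - X_mean)         # reduced-order (temporal) coefficients

      # reduced-order reconstruction of the first snapshot and captured variance
      x_rom = X_mean[:, 0] + modes @ coeffs[:, 0]
      energy = (s[:r] ** 2).sum() / (s ** 2).sum()

    Calibration, as in the paper, would then adjust the low-dimensional coefficients against accelerometer-derived densities rather than the full gridded field.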

  6. Calculation of Shuttle Base Heating Environments and Comparison with Flight Data

    NASA Technical Reports Server (NTRS)

    Greenwood, T. F.; Lee, Y. C.; Bender, R. L.; Carter, R. E.

    1983-01-01

    The techniques, analytical tools, and experimental programs used initially to generate and later to improve and validate the Shuttle base heating design environments are discussed. In general, the measured base heating environments for STS-1 through STS-5 were in good agreement with the preflight predictions. However, some changes were made in the methodology after reviewing the flight data. The flight data is described, preflight predictions are compared with the flight data, and improvements in the prediction methodology based on the data are discussed.

  7. IN SILICO METHODOLOGIES FOR PREDICTIVE EVALUATION OF TOXICITY BASED ON INTEGRATION OF DATABASES

    EPA Science Inventory

    In silico methodologies for predictive evaluation of toxicity based on integration of databases

    Chihae Yang1 and Ann M. Richard2, 1LeadScope, Inc. 1245 Kinnear Rd. Columbus, OH. 43212 2National Health & Environmental Effects Research Lab, U.S. EPA, Research Triangle Park, ...

  8. Fatigue criterion to system design, life and reliability

    NASA Technical Reports Server (NTRS)

    Zaretsky, E. V.

    1985-01-01

    A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed lives of the elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing for life prediction and component or structural renewal rates with reasonable statistical certainty.
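    For reference, a common Weibull/Lundberg-Palmgren style combination of elemental lives into a system life, assuming all elemental stress volumes share the same Weibull slope e, is (an illustrative textbook form, not necessarily the exact expression advanced in the report):

      \ln\frac{1}{S_{\mathrm{sys}}} = \sum_i \ln\frac{1}{S_i}
      \quad\Longrightarrow\quad
      \left(\frac{1}{L_{\mathrm{sys}}}\right)^{e} = \sum_i \left(\frac{1}{L_i}\right)^{e},

    so the predicted system life at a given reliability is always shorter than the life of its most critical elemental stress volume.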

  9. Observational attachment theory-based parenting measures predict children's attachment narratives independently from social learning theory-based measures.

    PubMed

    Matias, Carla; O'Connor, Thomas G; Futh, Annabel; Scott, Stephen

    2014-01-01

    Conceptually and methodologically distinct models exist for assessing the quality of parent-child relationships, but few studies contrast competing models or assess their overlap in predicting developmental outcomes. Using observational methodology, the current study examined the distinctiveness of attachment theory-based and social learning theory-based measures of parenting in predicting two key measures of child adjustment: security of attachment narratives and social acceptance in peer nominations. A total of 113 5-6-year-old children from ethnically diverse families participated. Parent-child relationships were rated using standard paradigms. Measures derived from attachment theory included sensitive responding and mutuality; measures derived from social learning theory included positive attending, directives, and criticism. Child outcomes were independently rated attachment narrative representations and peer nominations. Results indicated that attachment theory-based and social learning theory-based measures were modestly correlated; nonetheless, parent-child mutuality predicted secure child attachment narratives independently of social learning theory-based measures; in contrast, criticism predicted peer-nominated fighting independently of attachment theory-based measures. In young children, there is some evidence that attachment theory-based measures may be particularly predictive of attachment narratives; however, no single model of measuring parent-child relationships is likely to best predict multiple developmental outcomes. Assessment in research and applied settings may benefit from integration of different theoretical and methodological paradigms.

  10. A Model-based Prognostics Methodology for Electrolytic Capacitors Based on Electrical Overstress Accelerated Aging

    NASA Technical Reports Server (NTRS)

    Celaya, Jose; Kulkarni, Chetan; Biswas, Gautam; Saha, Sankalita; Goebel, Kai

    2011-01-01

    A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their criticality in electronics subsystems, they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stress. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.
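    As a rough illustration of the Kalman-filter-plus-degradation-model idea, the sketch below tracks a normalized capacitance with an assumed linear loss model and extrapolates to an assumed end-of-life threshold; the model form, noise covariances, measurements, and threshold are all illustrative placeholders rather than the paper's identified parameters.

      import numpy as np

      dt = 1.0
      F = np.array([[1.0, -dt], [0.0, 1.0]])   # state: [capacitance, loss rate]
      H = np.array([[1.0, 0.0]])               # only capacitance is measured
      Q = np.diag([1e-6, 1e-8])                # process noise (assumed)
      R = np.array([[1e-4]])                   # measurement noise (assumed)

      x = np.array([1.0, 1e-3])                # initial normalized capacitance and rate
      P = np.eye(2) * 0.1
      threshold = 0.8                          # end of life at 20% capacitance loss (assumed)

      def kf_step(x, P, z):
          x = F @ x                            # predict with the degradation model
          P = F @ P @ F.T + Q
          y = z - H @ x                        # update with the new measurement
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ y).ravel()
          P = (np.eye(2) - K @ H) @ P
          return x, P

      for z in [0.998, 0.996, 0.993, 0.991]:   # made-up aging measurements
          x, P = kf_step(x, P, np.array([z]))

      # remaining useful life: time to reach the threshold at the estimated loss rate
      rul = max(x[0] - threshold, 0.0) / max(x[1], 1e-12)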

  11. Towards A Model-Based Prognostics Methodology for Electrolytic Capacitors: A Case Study Based on Electrical Overstress Accelerated Aging

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan S.; Biswas, Gautam; Goebel, Kai

    2012-01-01

    A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical degradation model. Electrolytic capacitors are used in several applications ranging from power supplies on critical avionics equipment to power drivers for electro-mechanical actuators. These devices are known for their comparatively low reliability and, given their criticality in electronics subsystems, they are a good candidate for component-level prognostics and health management. Prognostics provides a way to assess the remaining useful life of a capacitor based on its current state of health and its anticipated future usage and operational conditions. We also present experimental results of an accelerated aging test under electrical stress. The data obtained in this test form the basis for a remaining life prediction algorithm in which a model of the degradation process is suggested. This preliminary remaining life prediction algorithm serves as a demonstration of how prognostics methodologies could be used for electrolytic capacitors. In addition, the use of degradation progression data from accelerated aging provides an avenue for validation of applications of the Kalman filter based prognostics methods typically used for remaining useful life predictions in other applications.

  12. A hierarchical clustering methodology for the estimation of toxicity.

    PubMed

    Martin, Todd M; Harten, Paul; Venkatapathy, Raghuraman; Das, Shashikala; Young, Douglas M

    2008-01-01

    A quantitative structure-activity relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural similarity is defined in terms of 2-D physicochemical descriptors (such as connectivity and E-state indices). A genetic algorithm-based technique is used to generate statistically valid QSAR models for each cluster (using the pool of descriptors described above). The toxicity for a given query compound is estimated using the weighted average of the predictions from the closest cluster from each step in the hierarchical clustering, assuming that the compound is within the domain of applicability of the cluster. The hierarchical clustering methodology was tested using a Tetrahymena pyriformis acute toxicity data set containing 644 chemicals in the training set and with two prediction sets containing 339 and 110 chemicals. The results from the hierarchical clustering methodology were compared to the results from several different QSAR methodologies.
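    As a rough illustration of the clustering step, the sketch below applies Ward's method to a placeholder descriptor matrix and estimates a query compound's toxicity from its nearest cluster; the per-cluster genetic-algorithm QSAR fitting of the actual methodology is replaced here by a simple cluster-mean estimate.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import cdist

      rng = np.random.default_rng(0)
      X_train = rng.random((644, 50))        # placeholder 2-D descriptors for 644 chemicals
      y_train = rng.random(644)              # placeholder toxicity endpoint values

      # Ward's method divides the training set into structurally similar clusters
      Z = linkage(X_train, method="ward")
      labels = fcluster(Z, t=20, criterion="maxclust")

      def predict(x_query):
          """Estimate toxicity from the closest cluster (cluster mean used as a
          stand-in for that cluster's own QSAR model)."""
          ids = np.unique(labels)
          centroids = np.vstack([X_train[labels == c].mean(axis=0) for c in ids])
          c = ids[np.argmin(cdist(x_query[None, :], centroids))]
          return y_train[labels == c].mean()

      print(predict(rng.random(50)))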

  13. Non-Fourier based thermal-mechanical tissue damage prediction for thermal ablation.

    PubMed

    Li, Xin; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-01-02

    Prediction of tissue damage under thermal loads plays an important role in thermal ablation planning. A new methodology is presented in this paper by combining non-Fourier bio-heat transfer, constitutive elastic mechanics, and non-rigid motion dynamics to predict and analyze the thermal distribution, thermal-induced mechanical deformation, and thermal-mechanical damage of soft tissues under thermal loads. Simulations and comparative analysis demonstrate that the proposed methodology based on non-Fourier bio-heat transfer can account for the thermal-induced mechanical behaviors of soft tissues and predict tissue thermal damage more accurately than the classical Fourier bio-heat transfer based model.
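    For context, a common way to write the non-Fourier (Cattaneo-type) conduction used in such models is to give the heat flux a relaxation time tau_q and couple it to a Pennes-style energy balance, with thermal damage accumulated through an Arrhenius integral; this is a generic textbook form, not necessarily the exact constitutive set used in the paper:

      \mathbf{q} + \tau_q \frac{\partial \mathbf{q}}{\partial t} = -k\,\nabla T, \qquad
      \rho c \frac{\partial T}{\partial t} = -\nabla\cdot\mathbf{q} + w_b c_b (T_a - T) + Q, \qquad
      \Omega(t) = \int_0^{t} A\, e^{-E_a/(R\,T(\tau))}\, \mathrm{d}\tau .

    Setting tau_q = 0 recovers the classical Fourier bio-heat model against which the paper compares.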

  14. Non-Fourier based thermal-mechanical tissue damage prediction for thermal ablation

    PubMed Central

    Li, Xin; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2017-01-01

    Prediction of tissue damage under thermal loads plays an important role in thermal ablation planning. A new methodology is presented in this paper by combining non-Fourier bio-heat transfer, constitutive elastic mechanics, and non-rigid motion dynamics to predict and analyze the thermal distribution, thermal-induced mechanical deformation, and thermal-mechanical damage of soft tissues under thermal loads. Simulations and comparative analysis demonstrate that the proposed methodology based on non-Fourier bio-heat transfer can account for the thermal-induced mechanical behaviors of soft tissues and predict tissue thermal damage more accurately than the classical Fourier bio-heat transfer based model. PMID:27690290

  15. An in vitro methodology for forecasting luminal concentrations and precipitation of highly permeable lipophilic weak bases in the fasted upper small intestine.

    PubMed

    Psachoulias, Dimitrios; Vertzoni, Maria; Butler, James; Busby, David; Symillides, Moira; Dressman, Jennifer; Reppas, Christos

    2012-12-01

    The objectives were to develop an in vitro methodology for predicting concentrations and potential precipitation of highly permeable, lipophilic weak bases in the fasted upper small intestine, based on ketoconazole and dipyridamole luminal data, and to evaluate the usefulness of the methodology in predicting luminal precipitation of AZD0865 and SB705498 based on plasma data. A three-compartment in vitro setup was used. Depending on the dosage form administered in in vivo studies, a solution or a suspension was placed in the gastric compartment. A medium simulating the luminal environment (FaSSIF-V2plus) was initially placed in the duodenal compartment. Concentrated FaSSIF-V2plus was placed in the reservoir compartment. In vitro ketoconazole and dipyridamole concentrations and precipitated fractions adequately reflected luminal data. Unlike luminal precipitates, in vitro ketoconazole precipitates were crystalline. In vitro AZD0865 data confirmed previously published human pharmacokinetic data suggesting that absorption rates are not affected by luminal precipitation. In vitro SB705498 data predicted that significant luminal precipitation occurs after a 100 mg or 400 mg but not after a 10 mg dose, consistent with human pharmacokinetic data. An in vitro methodology for predicting concentrations and potential precipitation in the fasted upper small intestine after administration of highly permeable, lipophilic weak bases was developed and evaluated for its ability to predict luminal precipitation.

  16. Deterministic and Probabilistic Creep and Creep Rupture Enhancement to CARES/Creep: Multiaxial Creep Life Prediction of Ceramic Structures Using Continuum Damage Mechanics and the Finite Element Method

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1998-01-01

    High temperature and long duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. A previous model advanced by the authors described a methodology by which the creep rupture life of a loaded component can be predicted. That model was based on the life fraction damage accumulation rule in association with the modified Monkman-Grant creep rupture criterion. However, that model did not take into account the deteriorating state of the material due to creep damage (e.g., cavitation) as time elapsed. In addition, the material creep parameters used in that life prediction methodology were based on uniaxial creep curves displaying primary and secondary creep behavior, with no tertiary regime. The objective of this paper is to present a creep life prediction methodology based on a modified form of the Kachanov-Rabotnov continuum damage mechanics (CDM) theory. In this theory, the uniaxial creep rate is described in terms of stress, temperature, time, and the current state of material damage. This scalar damage state parameter is basically an abstract measure of the current state of material damage due to creep deformation. The damage rate is assumed to vary with stress, temperature, time, and the current state of damage itself. Multiaxial creep and creep rupture formulations of the CDM approach are presented in this paper. Parameter estimation methodologies based on nonlinear regression analysis are also described for both isothermal constant stress states and anisothermal variable stress conditions. This creep life prediction methodology was preliminarily added to the integrated design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures/Creep), which is a postprocessor program to commercially available finite element analysis (FEA) packages. Two examples, showing comparisons between experimental and predicted creep lives of ceramic specimens, are used to demonstrate the viability of this methodology and the CARES/Creep program.
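    For reference, the Kachanov-Rabotnov coupled creep/damage rate equations referred to above are commonly written as follows, where omega is the scalar damage state variable and A, n, B, chi, phi are material parameters; the modified multiaxial form implemented in CARES/Creep may differ in detail:

      \dot{\varepsilon} = A\left(\frac{\sigma}{1-\omega}\right)^{n}, \qquad
      \dot{\omega} = B\,\frac{\sigma^{\chi}}{(1-\omega)^{\phi}},

    with creep rupture predicted as omega approaches 1.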

  17. A Hierarchical Clustering Methodology for the Estimation of Toxicity

    EPA Science Inventory

    A Quantitative Structure Activity Relationship (QSAR) methodology based on hierarchical clustering was developed to predict toxicological endpoints. This methodology utilizes Ward's method to divide a training set into a series of structurally similar clusters. The structural sim...

  18. Analysis of Material Sample Heated by Impinging Hot Hydrogen Jet in a Non-Nuclear Tester

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Foote, John; Litchford, Ron

    2006-01-01

    A computational conjugate heat transfer methodology was developed and anchored with data obtained from a hot-hydrogen jet heated, non-nuclear materials tester, as a first step towards developing an efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for a hypothetical solid-core nuclear thermal engine thrust chamber. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, and convective, thermal radiative, and conjugate heat transfer. Predicted hot-hydrogen jet and material surface temperatures were compared with measurements. Predicted solid temperatures were compared with those obtained with a standard heat transfer code. The interrogation of the physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.

  19. Deterministic Multiaxial Creep and Creep Rupture Enhancements for CARES/Creep Integrated Design Code

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.

    1998-01-01

    High temperature and long duration applications of monolithic ceramics can place their failure mode in the creep rupture regime. A previous model advanced by the authors described a methodology by which the creep rupture life of a loaded component can be predicted. That model was based on the life fraction damage accumulation rule in association with the modified Monkman-Grant creep rupture criterion. However, that model did not take into account the deteriorating state of the material due to creep damage (e.g., cavitation) as time elapsed. In addition, the material creep parameters used in that life prediction methodology were based on uniaxial creep curves displaying primary and secondary creep behavior, with no tertiary regime. The objective of this paper is to present a creep life prediction methodology based on a modified form of the Kachanov-Rabotnov continuum damage mechanics (CDM) theory. In this theory, the uniaxial creep rate is described in terms of stress, temperature, time, and the current state of material damage. This scalar damage state parameter is basically an abstract measure of the current state of material damage due to creep deformation. The damage rate is assumed to vary with stress, temperature, time, and the current state of damage itself. Multiaxial creep and creep rupture formulations of the CDM approach are presented in this paper. Parameter estimation methodologies based on nonlinear regression analysis are also described for both isothermal constant stress states and anisothermal variable stress conditions. This creep life prediction methodology was preliminarily added to the integrated design code CARES/Creep (Ceramics Analysis and Reliability Evaluation of Structures/Creep), which is a postprocessor program to commercially available finite element analysis (FEA) packages. Two examples, showing comparisons between experimental and predicted creep lives of ceramic specimens, are used to demonstrate the viability of this methodology and the CARES/Creep program.

  20. Concomitant prediction of function and fold at the domain level with GO-based profiles.

    PubMed

    Lopez, Daniel; Pazos, Florencio

    2013-01-01

    Predicting the function of newly sequenced proteins is crucial due to the pace at which these raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains, and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves but is due to the fact that, in the databases of functional annotations these methods use for transferring functional terms to new proteins, the annotations are made on a whole-chain basis. Nevertheless, domains are the basic evolutionary and often functional units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent from each other. For that reason, resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources, are required. We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated with the predicted function.

  1. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for the exploration of various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
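    As a rough illustration of the kind of lightweight estimate described above, the sketch below simulates a small direct-mapped cache over the address stream implied by array base addresses and loop bounds for a matrix-matrix multiply; the cache geometry and addresses are illustrative assumptions, not the paper's measured configuration.

      LINE = 64                      # cache line size in bytes (assumed)
      N_SETS = 512                   # direct-mapped, 32 KiB of 64-byte lines (assumed)

      def misses(addresses):
          """Count misses of a direct-mapped cache over an address stream."""
          tags = [None] * N_SETS
          miss = 0
          for a in addresses:
              line = a // LINE
              s = line % N_SETS
              if tags[s] != line:
                  tags[s] = line
                  miss += 1
          return miss

      def matmul_addresses(n, base_a, base_b, base_c, elem=8):
          """Address stream of C[i][j] += A[i][k] * B[k][j] for n x n double matrices."""
          for i in range(n):
              for j in range(n):
                  for k in range(n):
                      yield base_a + (i * n + k) * elem
                      yield base_b + (k * n + j) * elem
                      yield base_c + (i * n + j) * elem

      print(misses(matmul_addresses(64, 0x10000, 0x90000, 0x110000)))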

  2. Advances in Structural Integrity Analysis Methods for Aging Metallic Airframe Structures with Local Damage

    NASA Technical Reports Server (NTRS)

    Starnes, James H., Jr.; Newman, James C., Jr.; Harris, Charles E.; Piascik, Robert S.; Young, Richard D.; Rose, Cheryl A.

    2003-01-01

    Analysis methodologies for predicting fatigue-crack growth from rivet holes in panels subjected to cyclic loads and for predicting the residual strength of aluminum fuselage structures with cracks and subjected to combined internal pressure and mechanical loads are described. The fatigue-crack growth analysis methodology is based on small-crack theory and a plasticity induced crack-closure model, and the effect of a corrosive environment on crack-growth rate is included. The residual strength analysis methodology is based on the critical crack-tip-opening-angle fracture criterion that characterizes the fracture behavior of a material of interest, and a geometric and material nonlinear finite element shell analysis code that performs the structural analysis of the fuselage structure of interest. The methodologies have been verified experimentally for structures ranging from laboratory coupons to full-scale structural components. Analytical and experimental results based on these methodologies are described and compared for laboratory coupons and flat panels, small-scale pressurized shells, and full-scale curved stiffened panels. The residual strength analysis methodology is sufficiently general to include the effects of multiple-site damage on structural behavior.

  3. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of facilities. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model in order to avoid the uncertainty of state division caused by the hard division approach. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model through a weighted coefficient. Multi-scale theory is employed to solve the state division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
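    As a rough illustration of the state-division and transition-estimation core of such a model, the sketch below derives Markov states from a synthetic degradation index and computes the expected number of steps to the worst state; quantile binning stands in for the Fuzzy C-Means step, and all data and sizes are placeholders.

      import numpy as np

      rng = np.random.default_rng(1)
      # synthetic performance degradation index: slow drift plus noise
      index = np.cumsum(rng.normal(0.01, 0.05, size=500))

      # state division (quantile bins as a stand-in for Fuzzy C-Means clustering)
      n_states = 5
      edges = np.quantile(index, np.linspace(0, 1, n_states + 1))
      states = np.digitize(index, edges[1:-1])      # 0 = healthiest, n_states-1 = worst

      # estimate the Markov transition matrix from the observed state sequence
      T = np.zeros((n_states, n_states))
      for a, b in zip(states[:-1], states[1:]):
          T[a, b] += 1
      T = T / np.maximum(T.sum(axis=1, keepdims=True), 1)

      # expected steps to reach the worst state, treating it as absorbing
      Q = T[:-1, :-1]
      N = np.linalg.inv(np.eye(n_states - 1) - Q)   # fundamental matrix
      print(N.sum(axis=1))                          # expected remaining steps per healthy state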

  4. Ligand and structure-based methodologies for the prediction of the activity of G protein-coupled receptor ligands

    NASA Astrophysics Data System (ADS)

    Costanzi, Stefano; Tikhonova, Irina G.; Harden, T. Kendall; Jacobson, Kenneth A.

    2009-11-01

    Accurate in silico models for the quantitative prediction of the activity of G protein-coupled receptor (GPCR) ligands would greatly facilitate the process of drug discovery and development. Several methodologies have been developed based on the properties of the ligands, the direct study of the receptor-ligand interactions, or a combination of both approaches. Ligand-based three-dimensional quantitative structure-activity relationships (3D-QSAR) techniques, not requiring knowledge of the receptor structure, have been historically the first to be applied to the prediction of the activity of GPCR ligands. They are generally endowed with robustness and good ranking ability; however they are highly dependent on training sets. Structure-based techniques generally do not provide the level of accuracy necessary to yield meaningful rankings when applied to GPCR homology models. However, they are essentially independent from training sets and have a sufficient level of accuracy to allow an effective discrimination between binders and nonbinders, thus qualifying as viable lead discovery tools. The combination of ligand and structure-based methodologies in the form of receptor-based 3D-QSAR and ligand and structure-based consensus models results in robust and accurate quantitative predictions. The contribution of the structure-based component to these combined approaches is expected to become more substantial and effective in the future, as more sophisticated scoring functions are developed and more detailed structural information on GPCRs is gathered.

  5. Nuclear power plant life extension using subsize surveillance specimens. Performance report (4/15/92 - 4/14/98)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Arvind S.

    2001-03-05

    A new methodology to predict the Upper Shelf Energy (USE) of standard Charpy specimens (Full size) based on subsize specimens has been developed. The prediction methodology uses Finite Element Modeling (FEM) to model the fracture behavior. The inputs to FEM are the tensile properties of material and subsize Charpy specimen test data.

  6. Automated combinatorial method for fast and robust prediction of lattice thermal conductivity

    NASA Astrophysics Data System (ADS)

    Plata, Jose J.; Nath, Pinku; Usanmaz, Demet; Toher, Cormac; Fornari, Marco; Buongiorno Nardelli, Marco; Curtarolo, Stefano

    The lack of computationally inexpensive and accurate ab-initio based methodologies to predict lattice thermal conductivity, κl, without computing the anharmonic force constants or performing time-consuming ab-initio molecular dynamics, is one of the obstacles preventing the accelerated discovery of new high or low thermal conductivity materials. The Slack equation is the best alternative to other more expensive methodologies but is highly dependent on two variables: the acoustic Debye temperature, θa, and the Grüneisen parameter, γ. Furthermore, different definitions can be used for these two quantities depending on the model or approximation. Here, we present a combinatorial approach based on the quasi-harmonic approximation to elucidate which definitions of both variables produce the best predictions of κl. A set of 42 compounds was used to test accuracy and robustness of all possible combinations. This approach is ideal for obtaining more accurate values than fast screening models based on the Debye model, while being significantly less expensive than methodologies that solve the Boltzmann transport equation.
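    For orientation, one commonly quoted form of the Slack relation mentioned above is given below, where \bar{M} is the average atomic mass, \delta^3 the volume per atom, \theta_a the acoustic Debye temperature, \gamma the Grüneisen parameter, and A(\gamma) a slowly varying prefactor; some formulations include an additional factor involving the number of atoms per primitive cell, and it is exactly this ambiguity in definitions that the combinatorial study addresses:

      \kappa_\ell(T) \;\approx\; A(\gamma)\,\frac{\bar{M}\,\theta_a^{3}\,\delta}{\gamma^{2}\,T}.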

  7. A methodology for the assessment of manned flight simulator fidelity

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.; Malsbury, Terry N.

    1989-01-01

    A relatively simple analytical methodology for assessing the fidelity of manned flight simulators for specific vehicles and tasks is offered. The methodology is based upon an application of a structural model of the human pilot, including motion cue effects. In particular, predicted pilot/vehicle dynamic characteristics are obtained with and without simulator limitations. A procedure for selecting model parameters can be implemented, given a probable pilot control strategy. In analyzing a pair of piloting tasks for which flight and simulation data are available, the methodology correctly predicted the existence of simulator fidelity problems. The methodology permitted the analytical evaluation of a change in simulator characteristics and indicated that a major source of the fidelity problems was a visual time delay in the simulation.

  8. A comparison of fatigue life prediction methodologies for rotorcraft

    NASA Technical Reports Server (NTRS)

    Everett, R. A., Jr.

    1990-01-01

    Because of the current U.S. Army requirement that all new rotorcraft be designed to a 'six nines' reliability on fatigue life, this study was undertaken to assess the accuracy of the current safe life philosophy using the nominal stress Palmgren-Miner linear cumulative damage rule to predict the fatigue life of rotorcraft dynamic components. It has been shown that this methodology can predict fatigue lives that differ from test lives by more than two orders of magnitude. A further objective of this work was to compare the accuracy of this methodology to another safe life method called the local strain approach, as well as to a method which predicts fatigue life based solely on crack growth data. Spectrum fatigue tests were run on notched (k(sub t) = 3.2) specimens made of 4340 steel using the Felix/28 loading spectrum. The local strain approach predicted the test lives fairly well, being slightly on the unconservative side of the test data. The crack growth method, which is based on 'small crack' crack growth data and a crack-closure model, also predicted the fatigue lives very well, with the predicted lives being slightly longer than the mean test lives but within the experimental scatter band. The crack growth model was also able to predict the change in test lives produced by the rainflow reconstructed spectra.
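    For reference, the Palmgren-Miner linear cumulative damage rule mentioned above sums cycle ratios over the load spectrum, with failure predicted when the damage sum reaches unity:

      D = \sum_i \frac{n_i}{N_i}, \qquad \text{failure when } D = 1,

    where n_i is the number of applied cycles at stress level i and N_i the constant-amplitude life at that level.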

  9. Towards an Airframe Noise Prediction Methodology: Survey of Current Approaches

    NASA Technical Reports Server (NTRS)

    Farassat, Fereidoun; Casper, Jay H.

    2006-01-01

    In this paper, we present a critical survey of the current airframe noise (AFN) prediction methodologies. Four methodologies are recognized: the fully analytic method, CFD combined with the acoustic analogy, the semi-empirical method, and the fully numerical method. It is argued that for the immediate needs of the aircraft industry, the semi-empirical method based on a recent high-quality acoustic database is the best available method. The method based on CFD and the Ffowcs Williams-Hawkings (FW-H) equation with a penetrable data surface (FW-Hpds) has advanced considerably and much experience has been gained in its use. However, more research is needed in the near future, particularly in the area of turbulence simulation. The fully numerical method will take longer to reach maturity. Based on the current trends, it is predicted that this method will eventually develop into the method of choice. Both the turbulence simulation and propagation methods need to develop further for this method to become useful. Nonetheless, the authors propose that the method based on a combination of numerical and analytical techniques, e.g., CFD combined with the FW-H equation, should also be worked on. In this effort, current symbolic algebra software will allow more analytical approaches to be incorporated into AFN prediction methods.

  10. Predicting operator workload during system design

    NASA Technical Reports Server (NTRS)

    Aldrich, Theodore B.; Szabo, Sandra M.

    1988-01-01

    A workload prediction methodology was developed in response to the need to measure workloads associated with operation of advanced aircraft. The application of the methodology will involve: (1) conducting mission/task analyses of critical mission segments and assigning estimates of workload for the sensory, cognitive, and psychomotor workload components of each task identified; (2) developing computer-based workload prediction models using the task analysis data; and (3) exercising the computer models to produce predictions of crew workload under varying automation and/or crew configurations. Critical issues include reliability and validity of workload predictors and selection of appropriate criterion measures.

  11. Fundamental Algorithms of the Goddard Battery Model

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1985-01-01

    The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict Nickel Cadmium (NiCd) performance in a Low Earth Orbit (LEO) cycling regime. The model proper is currently still in development, but the inherent, fundamental algorithms (or methodologies) of the model are defined. At present, the model is closely dependent on empirical data, and the data base currently used is of questionable accuracy. Even so, very good correlations have been determined between model predictions and actual cycling data. A more accurate and encompassing data base has been generated to serve two functions: to show the limitations of the current data base, and to be incorporated into the model proper for more accurate predictions. The fundamental algorithms of the model, and the present data base and its limitations, are described, and a brief preliminary analysis of the new data base and its verification of the model's methodology is presented.

  12. The influence of biodegradability of sewer solids for the management of CSOs.

    PubMed

    Sakrabani, R; Ashley, R M; Vollertsen, J

    2005-01-01

    The re-suspension of sediments in combined sewers and the associated pollutants into the bulk water during wet weather flows can cause pollutants to be carried further downstream to receiving waters or discharged via Combined Sewer Overflows (CSO). A typical pollutograph shows the trend of released bulk pollutants with time but does not consider information on the biodegradability of these pollutants. A new prediction methodology based on Oxygen Utilisation Rate (respirometric method) and Erosionmeter (laboratory device replicating in-sewer erosion) experiments is proposed which is able to predict the trends in biodegradability during in-sewer sediment erosion in wet weather conditions. The proposed new prediction methodology is also based on COD fractionation techniques.

  13. A comparison of life prediction methodologies for titanium matrix composites subjected to thermomechanical fatigue

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calcaterra, J.R.; Johnson, W.S.; Neu, R.W.

    1997-12-31

    Several methodologies have been developed to predict the lives of titanium matrix composites (TMCs) subjected to thermomechanical fatigue (TMF). This paper reviews and compares five life prediction models developed at NASA-LaRC and Wright Laboratories. Three of the models are based on a single parameter, the fiber stress in the load-carrying, or 0 degree, direction. The two other models, both developed at Wright Labs, are multi-parameter models. These can account for long-term damage, which is beyond the scope of the single-parameter models, but this benefit is offset by the additional complexity of the methodologies. Each of the methodologies was used to model data generated at NASA-LeRC, Wright Labs, and Georgia Tech for the SCS-6/Timetal 21-S material system. VISCOPLY, a micromechanical stress analysis code, was used to determine the constituent stress state for each test and was used for each model to maintain consistency. The predictive capabilities of the models are compared, and the ability of each model to accurately predict the responses of tests dominated by differing damage mechanisms is addressed.

  14. Thermal sensation prediction by soft computing methodology.

    PubMed

    Jović, Srđan; Arsić, Nebojša; Vilimonović, Jovana; Petković, Dalibor

    2016-12-01

    Thermal comfort in open urban areas is an important factor from an environmental point of view. Therefore there is a need to fulfill demands for suitable thermal comfort during urban planning and design. Thermal comfort can be modeled based on climatic parameters and other factors. These factors are variable and change throughout the year and the day. An algorithm is therefore needed for thermal comfort prediction according to the input variables. The prediction results could be used for planning the time of usage of urban areas. Since this is a highly nonlinear task, a soft computing methodology was applied in this investigation in order to predict thermal comfort. The main goal was to apply an extreme learning machine (ELM) for forecasting of physiological equivalent temperature (PET) values. Temperature, pressure, wind speed and irradiance were used as inputs. The prediction results are compared with some benchmark models. Based on the results, ELM can be used effectively in forecasting of PET.
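    As a rough illustration of the extreme learning machine described above, the sketch below fits only the output weights of a single-hidden-layer network by a pseudo-inverse; the four inputs follow the abstract, while the data, hidden-layer size, and sigmoid activation are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(42)
      # placeholder training data: [temperature, pressure, wind speed, irradiance] -> PET
      X = rng.random((1000, 4))
      y = rng.random(1000)

      n_hidden = 50
      W = rng.normal(size=(4, n_hidden))    # random input weights, never trained
      b = rng.normal(size=n_hidden)         # random biases

      def hidden(X):
          return 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden layer

      # ELM training: only the output weights are fitted, via the pseudo-inverse
      H = hidden(X)
      beta = np.linalg.pinv(H) @ y

      # PET prediction for new weather conditions
      pet_pred = hidden(rng.random((5, 4))) @ beta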

  15. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in northeastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.

  16. Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.

    1997-01-01

    A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C1 plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.

  17. Development of a Methodology for Predicting Forest Area for Large-Area Resource Monitoring

    Treesearch

    William H. Cooke

    2001-01-01

    The U.S. Department of Agriculture, Forest Service, Southern Research Station, appointed a remote-sensing team to develop an image-processing methodology for mapping forest lands over large geographic areas. The team has presented a repeatable methodology, which is based on regression modeling of Advanced Very High Resolution Radiometer (AVHRR) and Landsat Thematic...

  18. Aircraft noise prediction program validation

    NASA Technical Reports Server (NTRS)

    Shivashankara, B. N.

    1980-01-01

    A modular computer program (ANOPP) for predicting aircraft flyover and sideline noise was developed. A high quality flyover noise data base for aircraft that are representative of the U.S. commercial fleet was assembled. The accuracy of ANOPP with respect to the data base was determined. The data for source and propagation effects were analyzed and suggestions for improvements to the prediction methodology are given.

  19. Effects of surface chemistry on hot corrosion life

    NASA Technical Reports Server (NTRS)

    Fryxell, R. E.; Leese, G. E.

    1985-01-01

    The primary objective of this program is the development of a hot corrosion life prediction methodology based on a combination of laboratory test data and the evaluation of field service turbine components which show evidence of hot corrosion. The laboratory program comprises burner rig testing by TRW. A summary of results is given for two series of burner rig tests. The life prediction methodology parameters to be appraised in a final campaign of burner rig tests are outlined.

  20. Predicting the graft survival for heart-lung transplantation patients: an integrated data mining methodology.

    PubMed

    Oztekin, Asil; Delen, Dursun; Kong, Zhenyu James

    2009-12-01

    Predicting the survival of heart-lung transplant patients has the potential to play a critical role in understanding and improving the matching procedure between the recipient and graft. Although voluminous data related to the transplantation procedures is being collected and stored, only a small subset of the predictive factors has been used in modeling heart-lung transplantation outcomes. The previous studies have mainly focused on applying statistical techniques to a small set of factors selected by the domain-experts in order to reveal the simple linear relationships between the factors and survival. The collection of methods known as 'data mining' offers significant advantages over conventional statistical techniques in dealing with the latter's limitations such as normality assumption of observations, independence of observations from each other, and linearity of the relationship between the observations and the output measure(s). There are statistical methods that overcome these limitations. Yet, they are computationally more expensive and do not provide fast and flexible solutions as do data mining techniques in large datasets. The main objective of this study is to improve the prediction of outcomes following combined heart-lung transplantation by proposing an integrated data-mining methodology. A large and feature-rich dataset (16,604 cases with 283 variables) is used to (1) develop machine learning based predictive models and (2) extract the most important predictive factors. Then, using three different variable selection methods, namely, (i) machine learning-driven variables (using decision trees, neural networks, and logistic regression), (ii) literature review-based expert-defined variables, and (iii) common sense-based interaction variables, a consolidated set of factors is generated and used to develop Cox regression models for heart-lung graft survival. The predictive models' performance in terms of 10-fold cross-validation accuracy rates for two multi-imputed datasets ranged from 79% to 86% for neural networks, from 78% to 86% for logistic regression, and from 71% to 79% for decision trees. The results indicate that the proposed integrated data mining methodology using Cox hazard models better predicted the graft survival with different variables than the conventional approaches commonly used in the literature. This result is validated by the comparison of the corresponding Gains charts for our proposed methodology and the literature review based Cox results, and by the comparison of the Akaike information criterion (AIC) values received from each. The data mining-based methodology proposed in this study reveals that there are undiscovered relationships (i.e. interactions of the existing variables) among the survival-related variables, which helps better predict the survival of the heart-lung transplants. It also brings a different set of variables into the scene to be evaluated by the domain-experts and be considered prior to the organ transplantation.
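    As a rough illustration of the final survival-modeling step, the sketch below fits a Cox proportional hazards model to a placeholder data set with the lifelines package; the column names, variables, and the choice of lifelines are illustrative assumptions, not the study's actual variable set or toolchain.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      # placeholder data standing in for the consolidated transplant variables
      df = pd.DataFrame({
          "survival_days": rng.exponential(1000, 500),
          "event": rng.integers(0, 2, 500),        # 1 = graft failure observed
          "recipient_age": rng.normal(40, 12, 500),
          "donor_age": rng.normal(35, 10, 500),
          "ischemia_time": rng.normal(4, 1, 500),
      })

      # fit the Cox proportional hazards model on the consolidated variable set
      cph = CoxPHFitter()
      cph.fit(df, duration_col="survival_days", event_col="event")
      cph.print_summary()

      # relative risk (partial hazard) for new recipient/donor combinations
      new_cases = df.drop(columns=["survival_days", "event"]).head(3)
      print(cph.predict_partial_hazard(new_cases))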

  1. Software reliability studies

    NASA Technical Reports Server (NTRS)

    Hoppa, Mary Ann; Wilson, Larry W.

    1994-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Our research has shown that by improving the quality of the data one can greatly improve the predictions. We are working on methodologies which control some of the randomness inherent in the standard data generation processes in order to improve the accuracy of predictions. Our contribution is twofold in that we describe an experimental methodology using a data structure called the debugging graph and apply this methodology to assess the robustness of existing models. The debugging graph is used to analyze the effects of various fault recovery orders on the predictive accuracy of several well-known software reliability algorithms. We found that, along a particular debugging path in the graph, the predictive performance of different models can vary greatly. Similarly, just because a model 'fits' a given path's data well does not guarantee that the model would perform well on a different path. Further we observed bug interactions and noted their potential effects on the predictive process. We saw that not only do different faults fail at different rates, but that those rates can be affected by the particular debugging stage at which the rates are evaluated. Based on our experiment, we conjecture that the accuracy of a reliability prediction is affected by the fault recovery order as well as by fault interaction.

  2. The cell monolayer trajectory from the system state point of view.

    PubMed

    Stys, Dalibor; Vanek, Jan; Nahlik, Tomas; Urban, Jan; Cisar, Petr

    2011-10-01

    Time-lapse microscopic movies are being increasingly utilized for understanding the derivation of cell states and predicting cell future. Often, fluorescence and other types of labeling are not available or desirable, and cell state definitions based on observable structures must be used. We present a methodology for cell behavior recognition and prediction based on short-term recurrent cell behavior analysis. This approach has theoretical justification in non-linear dynamics theory. The methodology is based on the general stochastic systems theory, which allows us to define the cell states, the trajectory, and the system itself. As the first step, we introduce the usage of a novel image content descriptor based on the information contribution (gain) of each image point for cell state characterization. The linkage between the method and the general system theory is presented as a general frame for cell behavior interpretation. We also discuss extended cell description, system theory and methodology for future development. This methodology may be used for many practical purposes, ranging from advanced, medically relevant, precise cell culture diagnostics to very utilitarian cell recognition in a noisy or uneven image background. In addition, the results are theoretically justified.

  3. A neural network based methodology to predict site-specific spectral acceleration values

    NASA Astrophysics Data System (ADS)

    Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.

    2010-12-01

    A general neural network based methodology that has the potential to replace the computationally-intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed forward back propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt using necessary geological as well as geotechnical data. Surface level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as a target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.

  4. A Physics-Based Engineering Methodology for Calculating Soft Error Rates of Bulk CMOS and SiGe Heterojunction Bipolar Transistor Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Fulkerson, David E.

    2010-02-01

    This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.

  5. Nonlinear data assimilation: towards a prediction of the solar cycle

    NASA Astrophysics Data System (ADS)

    Svedin, Andreas

    The solar cycle is the cyclic variation of solar activity, with a span of 9-14 years. The prediction of the solar cycle is an important and unsolved problem with implications for communications, aviation and other aspects of our high-tech society. Our interest is model-based prediction, and we present a self-consistent procedure for parameter estimation and model state estimation, even when only one of several model variables can be observed. Data assimilation is the art of comparing, combining and transferring observed data into a mathematical model or computer simulation. We use the 3DVAR methodology, based on the notion of least squares, to present an implementation of traditional data assimilation. Using the Shadowing Filter — a recently developed method for nonlinear data assimilation — we outline a path towards model-based prediction of the solar cycle. To achieve this end we solve a number of methodological challenges related to unobserved variables. We also provide a new framework for interpretation that can guide future predictions of the Sun and other astrophysical objects.
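
    For readers unfamiliar with 3DVAR, the sketch below minimizes the standard least-squares cost function that blends a background state with a partial observation; the toy state vector, covariances and observation operator are assumptions, not the setup used in the thesis.

    ```python
    # A minimal 3DVAR sketch: minimize
    # J(x) = (x - xb)^T B^-1 (x - xb) + (y - Hx)^T R^-1 (y - Hx),
    # i.e. a least-squares blend of a background state with partial observations.
    import numpy as np
    from scipy.optimize import minimize

    xb = np.array([1.0, 0.0, 0.0])          # background (prior) model state
    B  = np.diag([0.5, 0.5, 0.5])           # background error covariance
    H  = np.array([[1.0, 0.0, 0.0]])        # observe only the first variable
    y  = np.array([1.4])                    # single observation
    R  = np.array([[0.1]])                  # observation error covariance

    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

    def J(x):
        db = x - xb
        do = y - H @ x
        return float(db @ Binv @ db + do @ Rinv @ do)

    xa = minimize(J, xb).x                   # analysis state
    print("analysis:", xa)
    ```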

  6. Seismic activity prediction using computational intelligence techniques in northern Pakistan

    NASA Astrophysics Data System (ADS)

    Asim, Khawaja M.; Awais, Muhammad; Martínez-Álvarez, F.; Iqbal, Talat

    2017-10-01

    An earthquake prediction study is carried out for the region of northern Pakistan. The prediction methodology involves the interdisciplinary interaction of seismology and computational intelligence. Eight seismic parameters are computed based upon past earthquakes. The predictive ability of these eight seismic parameters is evaluated in terms of information gain, which leads to the selection of six parameters to be used in prediction. Multiple computationally intelligent models have been developed for earthquake prediction using the selected seismic parameters. These models include a feed-forward neural network, recurrent neural network, random forest, multilayer perceptron, radial basis neural network, and support vector machine. The performance of every prediction model is evaluated, and McNemar's statistical test is applied to assess the statistical significance of the computational methodologies. The feed-forward neural network shows statistically significant predictions, with an accuracy of 75% and a positive predictive value of 78%, in the context of northern Pakistan.
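
    A minimal sketch of the parameter-selection step, assuming synthetic stand-ins for the eight seismic parameters: mutual information with the class label (an information-gain measure) is used to rank the parameters and keep the six most informative ones.

    ```python
    # Hedged sketch: rank assumed seismic indicators by mutual information with
    # the earthquake/no-earthquake label and keep the top six.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(1)
    X = rng.normal(size=(400, 8))            # 8 hypothetical seismic parameters
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

    gain = mutual_info_classif(X, y, random_state=0)
    top6 = np.argsort(gain)[::-1][:6]        # indices of the six most informative
    print("information gain:", np.round(gain, 3))
    print("selected parameter indices:", top6)
    ```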

  7. COPRED: prediction of fold, GO molecular function and functional residues at the domain level.

    PubMed

    López, Daniel; Pazos, Florencio

    2013-07-15

    Only recently have the first resources devoted to the functional annotation of proteins at the domain level started to appear. The next step is to develop specific methodologies for predicting function at the domain level based on these resources, and to implement them in web servers to be used by the community. In this work, we present COPRED, a web server for the concomitant prediction of fold, molecular function and functional sites at the domain level, based on a methodology for domain molecular function prediction and a resource of domain functional annotations previously developed and benchmarked. COPRED can be freely accessed at http://csbg.cnb.csic.es/copred. The interface works in all standard web browsers. WebGL (natively supported by most browsers) is required for the in-line preview and manipulation of protein 3D structures. The website includes a detailed help section and usage examples. Contact: pazos@cnb.csic.es.

  8. A data-driven feature extraction framework for predicting the severity of condition of congestive heart failure patients.

    PubMed

    Sideris, Costas; Alshurafa, Nabil; Pourhomayoun, Mohammad; Shahmohammadi, Farhad; Samy, Lauren; Sarrafzadeh, Majid

    2015-01-01

    In this paper, we propose a novel methodology for utilizing disease diagnostic information to predict the severity of condition of Congestive Heart Failure (CHF) patients. Our methodology relies on a novel, clustering-based, feature extraction framework that uses disease diagnostic information. To reduce dimensionality, we identify disease clusters using co-occurrence frequencies. We then utilize these clusters as features to predict patient severity of condition. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS) of the Healthcare Cost and Utilization Project (HCUP), which contains 7 million discharge records with ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 patients. We compare our cluster-based feature set with one that incorporates the Charlson comorbidity score as a feature and demonstrate an accuracy improvement of up to 14% in the predictability of the severity of condition.
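
    The sketch below illustrates one way such a clustering-based feature extraction could look, using assumed toy admissions and hypothetical ICD-9 codes: build a code co-occurrence matrix and cluster the codes, so that cluster assignments can later be aggregated into patient-level features. It is not the authors' pipeline.

    ```python
    # Illustrative sketch: build a diagnosis-code co-occurrence matrix from
    # per-admission code lists and cluster the codes into disease groups.
    import numpy as np
    from sklearn.cluster import KMeans

    codes = ["428.0", "584.9", "401.9", "250.00", "414.01"]   # hypothetical ICD-9 codes
    admissions = [{"428.0", "584.9"}, {"428.0", "401.9", "414.01"},
                  {"250.00", "401.9"}, {"428.0", "584.9", "414.01"}]

    idx = {c: i for i, c in enumerate(codes)}
    cooc = np.zeros((len(codes), len(codes)))
    for adm in admissions:
        for a in adm:
            for b in adm:
                if a != b:
                    cooc[idx[a], idx[b]] += 1

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cooc)
    clusters = {c: int(l) for c, l in zip(codes, labels)}
    print(clusters)   # each code assigned to a disease cluster
    ```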

  9. A comparative study of soviet versus western helicopters. Part 2: Evaluation of weight, maintainability and design aspects of major components

    NASA Technical Reports Server (NTRS)

    Stepniewski, W. Z.; Shinn, R. A.

    1983-01-01

    A detailed comparative insight into the design and operational philosophies of Soviet vs. Western helicopters is provided. This is accomplished by examining conceptual approaches, producibility and maintainability, and weight trends/prediction methodology. The application of Soviet methodology (Tishchenko) to various weight classes of helicopters is examined extensively and compared with the results of Western-based methodology.

  10. A Model for Oil-Gas Pipelines Cost Prediction Based on a Data Mining Process

    NASA Astrophysics Data System (ADS)

    Batzias, Fragiskos A.; Spanidis, Phillip-Mark P.

    2009-08-01

    This paper addresses the problems associated with the cost estimation of oil/gas pipelines during the elaboration of feasibility assessments. Techno-economic parameters, i.e., cost, length and diameter, are critical for such studies at the preliminary design stage. A methodology for the development of a cost prediction model based on a Data Mining (DM) process is proposed. The design and implementation of a Knowledge Base (KB), maintaining data collected from various disciplines of the pipeline industry, are presented. The formulation of a cost prediction equation is demonstrated by applying multiple regression analysis to data sets extracted from the KB. Following the proposed methodology, a learning context is inductively developed as background pipeline data are acquired, grouped and stored in the KB, and a linear regression model then provides statistically sound results that are useful for project managers and decision makers.
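
    As a sketch of the regression step under assumed synthetic data (the cost relationship and coefficients are invented for illustration, not taken from the Knowledge Base described above), an ordinary least-squares fit of cost on length and diameter looks like this:

    ```python
    # Minimal sketch: multiple regression of pipeline cost on length and diameter.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    length_km = rng.uniform(10, 500, 200)
    diameter_in = rng.uniform(8, 48, 200)
    cost_musd = 0.8 * length_km + 0.05 * length_km * diameter_in + rng.normal(0, 20, 200)

    X = sm.add_constant(np.column_stack([length_km, diameter_in]))
    model = sm.OLS(cost_musd, X).fit()
    print(model.params)                  # fitted cost-prediction coefficients
    print(round(model.rsquared, 3))
    ```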

  11. Prediction of Recidivism in Juvenile Offenders Based on Discriminant Analysis.

    ERIC Educational Resources Information Center

    Proefrock, David W.

    The recent development of strong statistical techniques has made accurate predictions of recidivism possible. To investigate the utility of discriminant analysis methodology in making predictions of recidivism in juvenile offenders, the court records of 271 male and female juvenile offenders, aged 12-16, were reviewed. A cross validation group…

  12. An Integrated Low-Speed Performance and Noise Prediction Methodology for Subsonic Aircraft

    NASA Technical Reports Server (NTRS)

    Olson, E. D.; Mavris, D. N.

    2000-01-01

    An integrated methodology has been assembled to compute the engine performance, takeoff and landing trajectories, and community noise levels for a subsonic commercial aircraft. Where feasible, physics-based noise analysis methods have been used to make the results more applicable to newer, revolutionary designs and to allow for a more direct evaluation of new technologies. The methodology is intended to be used with approximation methods and risk analysis techniques to allow for the analysis of a greater number of variable combinations while retaining the advantages of physics-based analysis. Details of the methodology are described and limited results are presented for a representative subsonic commercial aircraft.

  13. Delta Clipper-Experimental In-Ground Effect on Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1998-01-01

    A quasitransient in-ground effect method is developed to study the effect of vertical landing on a launch vehicle base-heating environment. This computational methodology is based on a three-dimensional, pressure-based, viscous flow, chemically reacting, computational fluid dynamics formulation. Important in-ground base-flow physics such as the fountain-jet formation, plume growth, air entrainment, and plume afterburning are captured with the present methodology. Convective and radiative base-heat fluxes are computed for comparison with those of a flight test. The influence of the laminar Prandtl number on the convective heat flux is included in this study. A radiative direction-dependency test is conducted using both the discrete ordinate and finite volume methods. Treatment of the plume afterburning is found to be very important for accurate prediction of the base-heat fluxes. Convective and radiative base-heat fluxes predicted by the model using a finite rate chemistry option compared reasonably well with flight-test data.

  14. Explosion/Blast Dynamics for Constellation Launch Vehicles Assessment

    NASA Technical Reports Server (NTRS)

    Baer, Mel; Crawford, Dave; Hickox, Charles; Kipp, Marlin; Hertel, Gene; Morgan, Hal; Ratzel, Arthur; Cragg, Clinton H.

    2009-01-01

    An assessment methodology is developed to guide quantitative predictions of adverse physical environments and the subsequent effects on the Ares-1 crew launch vehicle associated with the loss of containment of cryogenic liquid propellants from the upper stage during ascent. Development of the methodology is led by a team at Sandia National Laboratories (SNL) with guidance and support from a number of National Aeronautics and Space Administration (NASA) personnel. The methodology is based on the current Ares-1 design and feasible accident scenarios. These scenarios address containment failure from debris impact or structural response to pressure or blast loading from an external source. Once containment is breached, the envisioned assessment methodology includes predictions for the sequence of physical processes stemming from cryogenic tank failure. The investigative techniques, analysis paths, and numerical simulations that comprise the proposed methodology are summarized and appropriate simulation software is identified in this report.

  15. Integration of infrared thermography into various maintenance methodologies

    NASA Astrophysics Data System (ADS)

    Morgan, William T.

    1993-04-01

    Maintenance methodologies are in developmental stages throughout the world as global competitiveness drives all industries to improve operational efficiencies. Rapid progress in technical advancements has placed additional strain on maintenance organizations to change progressively. Accompanying the needs for advanced training and documentation is the demand for the use of various analytical instruments and quantitative methods. Infrared thermography is one of the primary elements of engineered approaches to maintenance. Current maintenance methodologies can be divided into six categories: Routine ('Breakdown'), Preventive, Predictive, Proactive, Reliability-Based, and Total Productive (TPM) maintenance. Each of these methodologies has a distinctive approach to achieving improved operational efficiencies. A popular view is that infrared thermography is a Predictive maintenance tool. While this is true, it can also be effectively integrated into each of the other maintenance methodologies to achieve the desired results. The six maintenance strategies are defined, and the infrared applications integrated into each are presented in tabular form.

  16. A multichannel model-based methodology for extubation readiness decision of patients on weaning trials.

    PubMed

    Casaseca-de-la-Higuera, Pablo; Simmross-Wattenberg, Federico; Martín-Fernández, Marcos; Alberola-López, Carlos

    2009-07-01

    Discontinuation of mechanical ventilation is a challenging task that involves a number of subtle clinical issues. The gradual removal of respiratory support (referred to as weaning) should be performed as soon as autonomous respiration can be sustained. However, based on previous studies, the prediction rate of successful extubation is still below 25%. Construction of an automatic system that provides information on extubation readiness is thus desirable. Recent works have demonstrated that breathing pattern variability is a useful extubation readiness indicator, with improved performance when multiple respiratory signals are processed jointly. However, the existing methods for predictor extraction present several drawbacks when length-limited time series are to be processed in heterogeneous groups of patients. In this paper, we propose a model-based methodology for automatic readiness prediction. It is intended to deal with multichannel, nonstationary, short records of the breathing pattern. Results on experimental data yield 87.27% successful readiness prediction, which is in line with the best figures reported in the literature. A comparative analysis shows that our methodology overcomes the shortcomings of previously proposed methods when applied to length-limited records of heterogeneous groups of patients.

  17. A methodology for post-EIS (environmental impact statement) monitoring

    USGS Publications Warehouse

    Marcus, Linda Graves

    1979-01-01

    A methodology for monitoring the impacts predicted in environmental impact statements (EIS's) was developed using the EIS on phosphate development in southeastern Idaho as a case study. A monitoring system based on this methodology: (1) coordinates a comprehensive, intergovernmental monitoring effort; (2) documents the major impacts that result, thereby improving the accuracy of impact predictions in future EIS's; (3) helps agencies control impacts by warning them when critical impact levels are reached and by providing feedback on the success of mitigating measures; and (4) limits monitoring data to the essential information that agencies need to carry out their regulatory and environmental protection responsibilities. The methodology is presented as flow charts accompanied by tables that describe the objectives, tasks, and products for each work element in the flow chart.

  18. Boosting flood warning schemes with fast emulator of detailed hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Bellos, V.; Carbajal, J. P.; Leitao, J. P.

    2017-12-01

    Floods are among the most destructive catastrophic events, and their frequency has increased over the last decades. To reduce flood impact and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models which quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or in floodplains hinder the use of detailed simulators. This sets the difficulty: we need fast predictions that also meet the accuracy requirements. Most physics-based detailed simulators, although accurate, will not fulfill the speed demand even if High Performance Computing techniques are used (the required simulation time is on the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian Process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study and specification of the observables to predict at those sites, e.g. water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, with the detailed simulator run in parallel to verify key predictions of the emulator. The speed gain provided by the emulator also allows uncertainty in the predictions to be quantified using ensemble methods. The methodology is applied in a real-world scenario.
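
    A minimal sketch of the emulation idea, with a cheap stand-in function in place of the detailed hydrodynamic simulator and assumed rainfall/duration inputs: train a Gaussian Process on a small offline design and query it online with an uncertainty estimate.

    ```python
    # Hedged sketch: a Gaussian Process emulator trained on a handful of
    # "detailed simulator" runs (here a toy placeholder function).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def detailed_simulator(rain_mm, duration_h):
        # placeholder for the expensive hydrodynamic model
        return 0.01 * rain_mm * np.sqrt(duration_h)

    rng = np.random.default_rng(3)
    X_train = rng.uniform([10, 1], [150, 24], size=(40, 2))      # offline design
    y_train = np.array([detailed_simulator(r, d) for r, d in X_train])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X_train, y_train)

    depth, std = gp.predict(np.array([[90.0, 6.0]]), return_std=True)
    print(f"predicted depth {depth[0]:.2f} m +/- {std[0]:.2f}")   # online prediction
    ```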

  19. Copula based prediction models: an application to an aortic regurgitation study

    PubMed Central

    Kumar, Pranesh; Shoukri, Mohamed M

    2007-01-01

    Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence the application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula-based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of the random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore, predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximating marginal distributions of the pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula-based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for the slope coefficient (0.4810, 1.3003). The predicted post-operative ejection fractions of the two models differ considerably in the lower range of pre-operative ejection measurements, and the prediction errors of the copula model are smaller. To validate the copula methodology we re-sampled with replacement fifty independent bootstrap samples and estimated concordance statistics of 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of the accuracy components are 0.9233 and 0.8654 for the copula and correlation models, respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling, since correlation-based prediction models are not appropriate for modeling the dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
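
    To make the copula step concrete, the sketch below (on synthetic stand-in data, not the patient records) fits gamma marginals, derives a Clayton (Archimedean) copula parameter from Kendall's tau, and simulates dependent pairs with the Marshall-Olkin frailty algorithm; the specific copula family and the data are assumptions.

    ```python
    # Illustrative sketch: gamma marginals + Clayton copula fitted via Kendall's tau,
    # then simulation of dependent ejection-fraction pairs.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    pre  = rng.gamma(shape=30, scale=2.0, size=20)      # stand-in pre-op EF (%)
    post = 0.85 * pre + rng.normal(0, 4, size=20)       # stand-in post-op EF (%)

    # 1) marginal gamma fits
    a1, loc1, s1 = stats.gamma.fit(pre)
    a2, loc2, s2 = stats.gamma.fit(post)

    # 2) Clayton dependence parameter from Kendall's tau: theta = 2*tau / (1 - tau)
    tau, _ = stats.kendalltau(pre, post)
    theta = 2 * tau / (1 - tau)

    # 3) simulate from the Clayton copula (Marshall-Olkin gamma-frailty algorithm)
    n = 1000
    V = rng.gamma(shape=1 / theta, scale=1.0, size=n)
    E = rng.exponential(size=(n, 2))
    U = (1 + E / V[:, None]) ** (-1 / theta)

    # 4) map the uniforms back through the fitted marginals
    sim_pre  = stats.gamma.ppf(U[:, 0], a1, loc=loc1, scale=s1)
    sim_post = stats.gamma.ppf(U[:, 1], a2, loc=loc2, scale=s2)
    print(np.corrcoef(sim_pre, sim_post)[0, 1])
    ```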

  20. [Methodological approach to the use of artificial neural networks for predicting results in medicine].

    PubMed

    Trujillano, Javier; March, Jaume; Sorribas, Albert

    2004-01-01

    In clinical practice, there is increasing interest in obtaining adequate prediction models. Among the available alternatives, artificial neural networks (ANN) are used progressively more often. In this review we first introduce the ANN methodology, describing the most common type of ANN, the multilayer perceptron trained with the backpropagation algorithm (MLP). We then compare the MLP with logistic regression (LR). Finally, we show a practical scheme for building an ANN-based application by means of an example with actual data. The main advantage of the ANN is its capacity to incorporate nonlinear effects and interactions between the variables of the model without the need to include them a priori. Their greatest disadvantages are the difficult interpretation of their parameters and the large degree of empiricism in their construction and training process. ANN are useful for computing the probability of a given outcome based on a set of predicting variables. Furthermore, in some cases, they obtain better results than LR. Both methodologies, ANN and LR, are complementary and help us to obtain more valid models.

  1. New methodology for fast prediction of wheel wear evolution

    NASA Astrophysics Data System (ADS)

    Apezetxea, I. S.; Perez, X.; Casanueva, C.; Alonso, A.

    2017-07-01

    In railway applications, wear prediction at the wheel-rail interface is a fundamental matter for studying problems such as wheel lifespan and the evolution of vehicle dynamic characteristics with time. However, one of the principal drawbacks of the existing methodologies for calculating wear evolution is the computational cost. This paper proposes a new wear prediction methodology with a reduced computational cost. The methodology is based on two main steps: the first is the substitution of calculations over the whole network by the calculation of the contact conditions at certain characteristic points, from whose results the wheel wear evolution can be inferred. The second is the substitution of the dynamic calculation (time integration calculations) by a quasi-static calculation (solving the quasi-static situation of the vehicle at a certain point, which is equivalent to neglecting the acceleration terms in the dynamic equations). These simplifications allow a significant reduction in computational cost to be obtained while maintaining an acceptable level of accuracy (errors on the order of 5-10%). Several case studies are analysed in the paper with the objective of assessing the proposed methodology. The results obtained in the case studies allow the conclusion that the proposed methodology is valid for an arbitrary vehicle running through an arbitrary track layout.

  2. An evaluation of NASA's program in human factors research: Aircrew-vehicle system interaction

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Research in human factors in the aircraft cockpit and a proposed program augmentation were reviewed. The dramatic growth of microprocessor technology makes it entirely feasible to automate increasingly more functions in the aircraft cockpit; the promise of improved vehicle performance, efficiency, and safety through automation makes highly automated flight inevitable. However, an organized database and a validated methodology for predicting the effects of automation on human performance, and thus on safety, are lacking; without them, increased automation may introduce new risks. Efforts should be concentrated on developing methods and techniques for analyzing man-machine interactions, including human workload and the prediction of performance.

  3. Fatigue Life Methodology for Bonded Composite Skin/Stringer Configurations

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Paris, Isabelle L.; OBrien, T. Kevin; Minguet, Pierre J.

    2001-01-01

    A methodology is presented for determining the fatigue life of composite structures based on fatigue characterization data and geometric nonlinear finite element (FE) analyses. To demonstrate the approach, predicted results were compared to fatigue tests performed on specimens which represented a tapered composite flange bonded onto a composite skin. In a first step, tension tests were performed to evaluate the debonding mechanisms between the flange and the skin. In a second step, a 2D FE model was developed to analyze the tests. To predict matrix cracking onset, the relationship between the tension load and the maximum principal stresses transverse to the fiber direction was determined through FE analysis. Transverse tension fatigue life data were used to generate an onset fatigue life P-N curve for matrix cracking. The resulting prediction was in good agreement with data from the fatigue tests. In a third step, a fracture mechanics approach based on FE analysis was used to determine the relationship between the tension load and the critical energy release rate. Mixed-mode energy release rate fatigue life data were used to create a fatigue life onset G-N curve for delamination. The resulting prediction was in good agreement with data from the fatigue tests. Further, the prediction curve for cumulative life to failure was generated from the previous onset fatigue life curves. The results showed that the methodology offers significant potential to predict the cumulative fatigue life of composite structures.

  4. On multi-site damage identification using single-site training data

    NASA Astrophysics Data System (ADS)

    Barthorpe, R. J.; Manson, G.; Worden, K.

    2017-11-01

    This paper proposes a methodology for developing multi-site damage location systems for engineering structures that can be trained using single-site damaged-state data only. The methodology involves training a sequence of binary classifiers based upon single-site damage data and combining the developed classifiers into a robust multi-class damage locator. In this way, the multi-site damage identification problem may be decomposed into a sequence of binary decisions. In this paper Support Vector Classifiers are adopted as the means of making these binary decisions. The proposed methodology represents an advancement on the state of the art in multi-site damage identification, in which existing approaches require either: (1) full damaged-state data from single- and multi-site damage cases or (2) the development of a physics-based model to make multi-site model predictions. The potential benefit of the proposed methodology is that a significantly reduced number of recorded damage states may be required in order to train a multi-site damage locator without recourse to physics-based model predictions. In this paper it is first demonstrated that Support Vector Classification represents an appropriate approach to the multi-site damage location problem, with methods for combining binary classifiers discussed. Next, the proposed methodology is demonstrated and evaluated through application to a real engineering structure - a Piper Tomahawk trainer aircraft wing - with its performance compared to classifiers trained using the full damaged-state dataset.
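
    A minimal sketch of the combination idea under assumed synthetic features: one binary Support Vector Classifier per damage site, trained only on undamaged and single-site data, with positive decision scores pooled into a multi-site verdict. The feature construction and the crude multi-site test vector are illustrative assumptions, not the paper's Piper Tomahawk data or its specific classifier-combination scheme.

    ```python
    # Minimal sketch: one binary SVC per damage site, combined into a multi-site locator.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    n_sites, n_features = 3, 6
    undamaged = rng.normal(0.0, 1.0, size=(100, n_features))
    single_site = [rng.normal(0.0, 1.0, size=(100, n_features)) + 3.0 * np.eye(n_sites, n_features)[i]
                   for i in range(n_sites)]

    classifiers = []                       # one binary classifier per site
    for i in range(n_sites):
        X = np.vstack([undamaged, single_site[i]])
        y = np.r_[np.zeros(100), np.ones(100)]
        classifiers.append(SVC(kernel="linear").fit(X, y))

    def locate(x):
        """Return decision scores and the sites whose classifiers flag damage."""
        scores = np.array([clf.decision_function(x.reshape(1, -1))[0] for clf in classifiers])
        return scores, np.where(scores > 0)[0]

    # a crude synthetic stand-in for a multi-site damage measurement (sites 0 and 2)
    test_vector = single_site[0][0] + single_site[2][0]
    print(locate(test_vector))
    ```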

  5. A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Piascik, Robert S.; Newman, James C., Jr.

    1999-01-01

    An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.

  6. A Practical Engineering Approach to Predicting Fatigue Crack Growth in Riveted Lap Joints

    NASA Technical Reports Server (NTRS)

    Harris, C. E.; Piascik, R. S.; Newman, J. C., Jr.

    2000-01-01

    An extensive experimental database has been assembled from very detailed teardown examinations of fatigue cracks found in rivet holes of fuselage structural components. Based on this experimental database, a comprehensive analysis methodology was developed to predict the onset of widespread fatigue damage in lap joints of fuselage structure. Several computer codes were developed with specialized capabilities to conduct the various analyses that make up the comprehensive methodology. Over the past several years, the authors have interrogated various aspects of the analysis methods to determine the degree of computational rigor required to produce numerical predictions with acceptable engineering accuracy. This study led to the formulation of a practical engineering approach to predicting fatigue crack growth in riveted lap joints. This paper describes the practical engineering approach and compares predictions with the results from several experimental studies.

  7. Unsupervised user similarity mining in GSM sensor networks.

    PubMed

    Shad, Shafqat Ali; Chen, Enhong

    2013-01-01

    Mobility data has attracted researchers for the past few years because of its rich context and spatiotemporal nature, and this information can be used for potential applications like early warning systems, route prediction, traffic management, advertisement, social networking, and community finding. All of the mentioned applications are based on mobility profile building and user trend analysis, where mobility profile building is done through significant-place extraction, prediction of the user's actual movement, and context awareness. However, significant-place extraction and prediction of the user's actual movement for mobility profile building are not trivial tasks. In this paper, we present a user similarity mining methodology based on user mobility profile building that uses the semantic tagging information provided by the user and the basic properties of the GSM network architecture, following an unsupervised clustering approach. As the mobility information is in low-level raw form, our proposed methodology successfully converts it into high-level, meaningful information by using cell-ID location information, rather than the previously used location-capturing methods such as GPS, infrared, and Wi-Fi, for profile mining and user similarity mining.

  8. Mining data from hemodynamic simulations for generating prediction and explanation models.

    PubMed

    Bosnić, Zoran; Vračar, Petar; Radović, Milos D; Devedžić, Goran; Filipović, Nenad D; Kononenko, Igor

    2012-03-01

    One of the most common causes of human death is stroke, which can be caused by carotid bifurcation stenosis. In our work, we aim at proposing a prototype of a medical expert system that could significantly aid medical experts in detecting hemodynamic abnormalities (increased artery wall shear stress). Based on the acquired simulated data, we apply several methodologies for (1) predicting the magnitudes and locations of maximum wall shear stress in the artery, (2) estimating the reliability of the computed predictions, and (3) providing a user-friendly explanation of the model's decision. The obtained results indicate that the evaluated methodologies can provide a useful tool for the given problem domain.

  9. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues in recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were therefore performed, where a single puff was released in each case. The present method is validated against the data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented a better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in the code ADREA-HF is also able to predict the ensemble-average dosage, but the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was under-estimated by slightly more than the acceptance criteria allow. The ensemble-average peak concentration was systematically underpredicted by the model, to a degree greater than allowed by the acceptance criteria, in 1 of the 2 wind-tunnel experiments. The model performance depended on the positions of the examined sensors in relation to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
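
    For reference, the sketch below implements the three named validation metrics under one common sign convention from the model-evaluation literature; the observed/predicted arrays are invented for illustration.

    ```python
    # Hedged sketch of fractional bias, normalized mean square error and the
    # factor-of-two fraction, computed on assumed observation/prediction pairs.
    import numpy as np

    def fractional_bias(obs, pred):
        return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

    def nmse(obs, pred):
        return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

    def fac2(obs, pred):
        ratio = pred / obs
        return np.mean((ratio >= 0.5) & (ratio <= 2.0))

    obs  = np.array([1.2, 0.8, 2.5, 3.1, 0.4])    # e.g. observed dosages
    pred = np.array([1.0, 1.1, 2.0, 2.6, 0.9])    # corresponding model predictions
    print(fractional_bias(obs, pred), nmse(obs, pred), fac2(obs, pred))
    ```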

  10. A decomposition model and voxel selection framework for fMRI analysis to predict neural response of visual stimuli.

    PubMed

    Raut, Savita V; Yadav, Dinkar M

    2018-03-28

    This paper presents an fMRI signal analysis methodology using geometric mean curve decomposition (GMCD) and a mutual information-based voxel selection framework. Previously, fMRI signal analysis has been conducted using the empirical mean curve decomposition (EMCD) model and voxel selection on the raw fMRI signal. The former methodology loses frequency content, while the latter suffers from signal redundancy. Both challenges are addressed by our methodology, in which the frequency content is retained by decomposing the raw fMRI signal using the geometric mean rather than the arithmetic mean, and the voxels are selected from the EMCD signal using GMCD components rather than the raw fMRI signal. The proposed methodologies are adopted for predicting the neural response. Experiments are conducted on the openly available fMRI data of six subjects, and comparisons are made with existing decomposition models and voxel selection frameworks. Subsequently, the effect of the number of selected voxels and the selection constraints is analyzed. The comparative results and the analysis demonstrate the superiority and reliability of the proposed methodology.

  11. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting in real time the residual strength of flight structures with discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. A residual strength test of a metallic, integrally-stiffened panel is simulated to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data would, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high-fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  12. Surrogate Modeling of High-Fidelity Fracture Simulations for Real-Time Residual Strength Predictions

    NASA Technical Reports Server (NTRS)

    Spear, Ashley D.; Priest, Amanda R.; Veilleux, Michael G.; Ingraffea, Anthony R.; Hochhalter, Jacob D.

    2011-01-01

    A surrogate model methodology is described for predicting, during flight, the residual strength of aircraft structures that sustain discrete-source damage. Starting with design of experiment, an artificial neural network is developed that takes as input discrete-source damage parameters and outputs a prediction of the structural residual strength. Target residual strength values used to train the artificial neural network are derived from 3D finite element-based fracture simulations. Two ductile fracture simulations are presented to show that crack growth and residual strength are determined more accurately in discrete-source damage cases by using an elastic-plastic fracture framework rather than a linear-elastic fracture mechanics-based method. Improving accuracy of the residual strength training data does, in turn, improve accuracy of the surrogate model. When combined, the surrogate model methodology and high fidelity fracture simulation framework provide useful tools for adaptive flight technology.

  13. A methodology to enhance electromagnetic compatibility in joint military operations

    NASA Astrophysics Data System (ADS)

    Buckellew, William R.

    The development and validation of an improved methodology to identify, characterize, and prioritize potential joint EMI (electromagnetic interference) interactions and identify and develop solutions to reduce the effects of the interference are discussed. The methodology identifies potential EMI problems using results from field operations, historical data bases, and analytical modeling. Operational expertise, engineering analysis, and testing are used to characterize and prioritize the potential EMI problems. Results can be used to resolve potential EMI during the development and acquisition of new systems and to develop engineering fixes and operational workarounds for systems already employed. The analytic modeling portion of the methodology is a predictive process that uses progressive refinement of the analysis and the operational electronic environment to eliminate noninterfering equipment pairs, defer further analysis on pairs lacking operational significance, and resolve the remaining EMI problems. Tests are conducted on equipment pairs to ensure that the analytical models provide a realistic description of the predicted interference.

  14. Developing a methodology to predict PM10 concentrations in urban areas using generalized linear models.

    PubMed

    Garcia, J M; Teodoro, F; Cerdeira, R; Coelho, L M R; Kumar, Prashant; Carvalho, M G

    2016-09-01

    A methodology to predict PM10 concentrations in urban outdoor environments is developed based on generalized linear models (GLMs). The methodology is based on the relationships developed between the atmospheric concentrations of air pollutants (i.e. CO, NO2, NOx, VOCs, SO2) and meteorological variables (i.e. ambient temperature, relative humidity (RH) and wind speed) for the city of Barreiro, Portugal. The model uses air pollution and meteorological data from the Portuguese air quality monitoring station networks. The developed GLM considers PM10 concentrations as the dependent variable, and both the gaseous pollutants and the meteorological variables as explanatory independent variables. A logarithmic link function was considered with a Poisson probability distribution. Particular attention was given to cases with air temperatures both below and above 25°C. The best performance of the modelled results against the measured data was achieved by the model restricted to air temperatures above 25°C, compared with the model considering all ranges of air temperature and the model considering only temperatures below 25°C. The model was also tested with similar data from another Portuguese city, Oporto, and the results were found to behave similarly. It is concluded that this model and methodology could be adopted for other cities to predict PM10 concentrations when such data are not available from measurements at air quality monitoring stations or by other acquisition means.
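
    A minimal sketch of the model form described above, on synthetic data (the pollutant and meteorology values and the coefficients are assumptions, not the Barreiro measurements): a Poisson GLM with statsmodels, whose default link for the Poisson family is logarithmic.

    ```python
    # Illustrative sketch: Poisson GLM with a log link relating PM10 to
    # co-pollutants and meteorology, on invented daily data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n = 365
    no2, co, temp, rh, wind = (rng.uniform(10, 60, n), rng.uniform(0.2, 1.5, n),
                               rng.uniform(5, 35, n), rng.uniform(30, 95, n),
                               rng.uniform(0.5, 8, n))
    mu = np.exp(1.5 + 0.02 * no2 + 0.3 * co + 0.01 * temp - 0.05 * wind)
    pm10 = rng.poisson(mu)                                  # synthetic daily PM10

    X = sm.add_constant(np.column_stack([no2, co, temp, rh, wind]))
    glm = sm.GLM(pm10, X, family=sm.families.Poisson()).fit()   # log link by default
    print(glm.params)
    ```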

  15. Predictive aging results in radiation environments

    NASA Astrophysics Data System (ADS)

    Gillen, Kenneth T.; Clough, Roger L.

    1993-06-01

    We have previously derived a time-temperature-dose rate superposition methodology which, when applicable, can be used to predict polymer degradation versus dose rate, temperature and exposure time. This methodology provides predictive capabilities at the low dose rates and long time periods appropriate, for instance, to ambient nuclear power plant environments. The methodology was successfully applied to several polymeric cable materials and then verified for two of the materials by comparing the model predictions with 12-year, low-dose-rate aging data on these materials from a nuclear environment. In this paper, we provide a more detailed discussion of the methodology and apply it to data obtained on a number of additional nuclear power plant cable insulation (a hypalon, a silicone rubber and two ethylene-tetrafluoroethylenes) and jacket (a hypalon) materials. We then show that the predicted low-dose-rate results for our materials are in excellent agreement with long-term (7-9 year) low-dose-rate results recently obtained for the same material types actually aged under nuclear power plant conditions. Based on a combination of the modelling and long-term results, we find indications of reasonably similar degradation responses among several different commercial formulations for each of the following "generic" materials: hypalon, ethylene-tetrafluoroethylene, silicone rubber and PVC. If such "generic" behavior can be further substantiated through modelling and long-term results on additional formulations, predictions of cable life for other commercial materials of the same generic types would be greatly facilitated.

  16. Dynamic determination of kinetic parameters and computer simulation of growth of Clostridium perfringens in cooked beef

    USDA-ARS?s Scientific Manuscript database

    The objective of this research was to develop a new one-step methodology that uses a dynamic approach to directly construct a tertiary model for prediction of the growth of C. perfringens in cooked beef. This methodology was based on numerical analysis and optimization of both primary and secondary...

  17. Methodology Developed for Modeling the Fatigue Crack Growth Behavior of Single-Crystal, Nickel-Base Superalloys

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Because of their superior high-temperature properties, gas generator turbine airfoils made of single-crystal, nickel-base superalloys are fast becoming the standard equipment on today's advanced, high-performance aerospace engines. The increased temperature capabilities of these airfoils has allowed for a significant increase in the operating temperatures in turbine sections, resulting in superior propulsion performance and greater efficiencies. However, the previously developed methodologies for life-prediction models are based on experience with polycrystalline alloys and may not be applicable to single-crystal alloys under certain operating conditions. One of the main areas where behavior differences between single-crystal and polycrystalline alloys are readily apparent is subcritical fatigue crack growth (FCG). The NASA Lewis Research Center's work in this area enables accurate prediction of the subcritical fatigue crack growth behavior in single-crystal, nickel-based superalloys at elevated temperatures.

  18. Predicting boundary shear stress and sediment transport over bed forms

    USGS Publications Warehouse

    McLean, S.R.; Wolfe, S.R.; Nelson, J.M.

    1999-01-01

    To estimate bed-load sediment transport rates in flows over bed forms such as ripples and dunes, spatially averaged velocity profiles are frequently used to predict mean boundary shear stress. However, such averaging obscures the complex, nonlinear interaction of wake decay, boundary-layer development, and topographically induced acceleration downstream of flow separation and often leads to inaccurate estimates of boundary stress, particularly skin friction, which is critically important in predicting bed-load transport rates. This paper presents an alternative methodology for predicting skin friction over 2D bed forms. The approach is based on combining the equations describing the mechanics of the internal boundary layer with semiempirical structure functions to predict the velocity at the crest of a bedform, where the flow is most similar to a uniform boundary layer. Significantly, the methodology is directed toward making specific predictions only at the bed-form crest, and as a result it avoids the difficulty and questionable validity of spatial averaging. The model provides an accurate estimate of the skin friction at the crest where transport rates are highest. Simple geometric constraints can be used to derive the mean transport rates as long as bed load is dominant.

  19. The Energetic Cost of Walking: A Comparison of Predictive Methods

    PubMed Central

    Kramer, Patricia Ann; Sylvester, Adam D.

    2011-01-01

    Background The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is “best”, but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. Methodology/Principal Findings We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Conclusion Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species. PMID:21731693

  20. Regional health care planning: a methodology to cluster facilities using community utilization patterns

    PubMed Central

    2013-01-01

    Background Community-based health care planning and regulation necessitates grouping facilities and areal units into regions of similar health care use. Limited research has explored the methodologies used in creating these regions. We offer a new methodology that clusters facilities based on similarities in patient utilization patterns and geographic location. Our case study focused on Hospital Groups in Michigan, the allocation units used for predicting future inpatient hospital bed demand in the state’s Bed Need Methodology. The scientific, practical, and political concerns that were considered throughout the formulation and development of the methodology are detailed. Methods The clustering methodology employs a 2-step K-means + Ward’s clustering algorithm to group hospitals. The final number of clusters is selected using a heuristic that integrates both a statistical-based measure of cluster fit and characteristics of the resulting Hospital Groups. Results Using recent hospital utilization data, the clustering methodology identified 33 Hospital Groups in Michigan. Conclusions Despite being developed within the politically charged climate of Certificate of Need regulation, we have provided an objective, replicable, and sustainable methodology to create Hospital Groups. Because the methodology is built upon theoretically sound principles of clustering analysis and health care service utilization, it is highly transferable across applications and suitable for grouping facilities or areal units. PMID:23964905
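
    The sketch below illustrates one plausible reading of the two-step grouping on an assumed utilization matrix (rows are hospitals, columns are patient-origin shares); the exact linkage of the two steps in the published methodology may differ, so this is a sketch of the general K-means + Ward's idea rather than the authors' algorithm.

    ```python
    # Illustrative two-step grouping: K-means compresses hospitals into many small
    # prototypes, then Ward's hierarchical clustering merges the prototypes into
    # the final Hospital Groups.
    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering

    rng = np.random.default_rng(7)
    # rows: hospitals; columns: share of patients drawn from each ZIP code area
    utilization = rng.dirichlet(alpha=np.ones(12), size=150)

    # step 1: K-means to many prototypes
    km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(utilization)

    # step 2: Ward's clustering of the prototype centroids
    ward = AgglomerativeClustering(n_clusters=8, linkage="ward").fit(km.cluster_centers_)

    # map each hospital to its final group via its prototype
    hospital_group = ward.labels_[km.labels_]
    print(np.bincount(hospital_group))       # hospitals per Hospital Group
    ```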

  1. Regional health care planning: a methodology to cluster facilities using community utilization patterns.

    PubMed

    Delamater, Paul L; Shortridge, Ashton M; Messina, Joseph P

    2013-08-22

    Community-based health care planning and regulation necessitates grouping facilities and areal units into regions of similar health care use. Limited research has explored the methodologies used in creating these regions. We offer a new methodology that clusters facilities based on similarities in patient utilization patterns and geographic location. Our case study focused on Hospital Groups in Michigan, the allocation units used for predicting future inpatient hospital bed demand in the state's Bed Need Methodology. The scientific, practical, and political concerns that were considered throughout the formulation and development of the methodology are detailed. The clustering methodology employs a 2-step K-means + Ward's clustering algorithm to group hospitals. The final number of clusters is selected using a heuristic that integrates both a statistical-based measure of cluster fit and characteristics of the resulting Hospital Groups. Using recent hospital utilization data, the clustering methodology identified 33 Hospital Groups in Michigan. Despite being developed within the politically charged climate of Certificate of Need regulation, we have provided an objective, replicable, and sustainable methodology to create Hospital Groups. Because the methodology is built upon theoretically sound principles of clustering analysis and health care service utilization, it is highly transferable across applications and suitable for grouping facilities or areal units.

  2. Conjoint analysis: using a market-based research model for healthcare decision making.

    PubMed

    Mele, Nancy L

    2008-01-01

    Conjoint analysis is a market-based research model that has been used by businesses for more than 35 years to predict consumer preferences in product design and purchasing. Researchers in medicine, healthcare economics, and health policy have discovered the value of this methodology in determining treatment preferences, resource allocation, and willingness to pay. To describe the conjoint analysis methodology and explore value-added applications in nursing research. Conjoint analysis methodology is described, using examples from the healthcare and business literature, and personal experience with the method. Nurses are called upon to increase interdisciplinary research, provide an evidence base for nursing practice, create patient-centered treatments, and revise nursing education. Other disciplines have met challenges like these using conjoint analysis and discrete choice modeling.

  3. A methodology for long range prediction of air transportation

    NASA Technical Reports Server (NTRS)

    Ayati, M. B.; English, J. M.

    1980-01-01

    The paper describes a methodology for the long-term projection of aircraft fuel requirements. A new treatment of the social and economic factors shaping the future aviation industry, which provides an estimate of predicted fuel usage, is presented; it includes air traffic forecasts and lead times for producing new engines and aircraft types. An air transportation model is then developed in terms of an abstracted set of variables which represent the entire aircraft industry on a macroscale. This model was evaluated by testing the required output variables against a model based on historical data from the past decades.

  4. Bhageerath-H: A homology/ab initio hybrid server for predicting tertiary structures of monomeric soluble proteins

    PubMed Central

    2014-01-01

    Background The advent of the human genome sequencing project has led to a spurt in the number of protein sequences in the databanks. The success of structure-based drug discovery hinges on the availability of structures. Despite significant progress in experimental protein structure determination, the sequence-structure gap is continually widening. Data-driven, homology-based computational methods have proved successful in predicting tertiary structures for sequences sharing medium to high sequence similarity. With dwindling similarity of query sequences, advanced homology/ab initio hybrid approaches are being explored to solve the structure prediction problem. Here we describe Bhageerath-H, a homology/ab initio hybrid software/server for predicting protein tertiary structures, with advancing drug design attempts as one of its goals. Results The Bhageerath-H web server was validated on 75 CASP10 targets, showing TM-scores ≥0.5 in 91% of the cases and Cα RMSDs ≤5 Å from the native structure in 58% of the targets, which is well above the CASP10 water mark. Comparison with some leading servers demonstrated the uniqueness of the hybrid methodology in effectively sampling conformational space, scoring the best decoys, and refining low-resolution models to medium and high resolution. Conclusion The Bhageerath-H methodology is web enabled for the scientific community as a freely accessible web server. The methodology is fielded in the ongoing CASP11 experiment. PMID:25521245

  5. MDRC's Approach to Using Predictive Analytics to Improve and Target Social Services Based on Risk. Reflections on Methodology

    ERIC Educational Resources Information Center

    Porter, Kristin E.; Balu, Rekha; Hendra, Richard

    2017-01-01

    This post is one in a series highlighting MDRC's methodological work. Contributors discuss the refinement and practical use of research methods being employed across the organization. Across policy domains, practitioners and researchers are benefiting from a trend of greater access to both more detailed and frequent data and the increased…

  6. Using physics-based pose predictions and free energy perturbation calculations to predict binding poses and relative binding affinities for FXR ligands in the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Athanasiou, Christina; Vasilakaki, Sofia; Dellis, Dimitris; Cournia, Zoe

    2018-01-01

    Computer-aided drug design has become an integral part of drug discovery and development in the pharmaceutical and biotechnology industry, and is nowadays extensively used in the lead identification and lead optimization phases. The Drug Design Data Resource (D3R) organizes challenges against blinded experimental data to prospectively test computational methodologies as an opportunity for improved methods and algorithms to emerge. We participated in Grand Challenge 2 to predict the crystallographic poses of 36 Farnesoid X Receptor (FXR)-bound ligands and the relative binding affinities for two designated subsets of 18 and 15 FXR-bound ligands. Here, we present our methodology for pose and affinity predictions and its evaluation after the release of the experimental data. For predicting the crystallographic poses, we used docking and physics-based pose prediction methods guided by the binding poses of native ligands. For FXR ligands with known chemotypes in the PDB, we accurately predicted their binding modes, while for those with unknown chemotypes the predictions were more challenging. Our group ranked first (based on the median RMSD) among the 46 groups that submitted complete entries for the binding pose prediction challenge. For the relative binding affinity prediction challenge, we performed free energy perturbation (FEP) calculations coupled with molecular dynamics (MD) simulations. FEP/MD calculations displayed a high success rate in identifying compounds with better or worse binding affinity than the reference (parent) compound. Our studies suggest that when ligands with chemical precedent are available in the literature, binding pose predictions using docking and physics-based methods are reliable; however, predictions are challenging for ligands with completely unknown chemotypes. We also show that FEP/MD calculations hold predictive value and can nowadays be used in a high-throughput mode in a lead optimization project, provided that crystal structures of sufficiently high quality are available.

  7. DEVELOPMENT OF QUANTITATIVE STRUCTURE ACTIVITY RELATIONSHIPS (QSARS) TO PREDICT TOXICITY FOR A VARIETY OF HUMAN AND ECOLOGICAL ENDPOINTS

    EPA Science Inventory

    In general, the accuracy of a predicted toxicity value increases with increasing similarity between the query chemical and the chemicals used to develop a QSAR model. A toxicity estimation methodology employing this finding has been developed. A hierarchical based clustering t...

  8. Model-based redesign of global transcription regulation

    PubMed Central

    Carrera, Javier; Rodrigo, Guillermo; Jaramillo, Alfonso

    2009-01-01

    Synthetic biology aims at the design or redesign of biological systems. In particular, one possible goal is the rewiring of the transcription regulation network by exchanging the endogenous promoters. To achieve this objective, we have adapted current methods to infer a model based on ordinary differential equations that is able to predict the network response after a major change in its topology. Our procedure uses microarray data for training. We have experimentally validated our inferred global regulatory model in Escherichia coli by predicting transcriptomic profiles under new perturbations. We have also tested our methodology in silico by providing accurate predictions of the underlying networks from expression data generated with artificial genomes. In addition, we have shown the predictive power of our methodology by obtaining the gene expression profile in experimental redesigns of the E. coli genome, in which the transcriptional network was rewired by knocking out master regulators or by upregulating transcription factors controlled by different promoters. Our approach is compatible with most network inference methods, allowing computational exploration of future genome-wide redesign experiments in synthetic biology. PMID:19188257

  9. Prediction of Software Reliability using Bio Inspired Soft Computing Techniques.

    PubMed

    Diwaker, Chander; Tomar, Pradeep; Poonia, Ramesh C; Singh, Vijander

    2018-04-10

    Many models have been proposed for predicting software reliability, but each is restricted to particular methodologies and a limited number of parameters. A range of techniques and methodologies may be used for reliability prediction, so careful attention to parameter selection is needed when estimating reliability: the reliability of a system may increase or decrease depending on the parameters chosen, and the factors that most heavily affect system reliability must be identified. Reusability is now widely used across research areas and is the basis of Component-Based Systems (CBS); Component-Based Software Engineering (CBSE) concepts can save cost, time, and human effort, and CBSE metrics may be used to assess which techniques are most suitable for estimating system reliability. Soft computing is applied to both small- and large-scale problems where uncertainty or randomness makes accurate results difficult to obtain. Soft computing techniques are also widely applied to medical problems: clinical medicine relies heavily on fuzzy-logic and neural-network methodologies, while basic medical science most frequently uses neural-network-genetic-algorithm hybrids, and medical scientists show strong interest in applying soft computing methodologies in genetics, physiology, radiology, cardiology, and neurology. CBSE encourages users to reuse existing software when building new products, providing quality while saving time, memory space, and money. This paper assesses commonly used soft computing techniques, including Genetic Algorithms (GA), Neural Networks (NN), Fuzzy Logic, Support Vector Machines (SVM), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), and Artificial Bee Colony (ABC). It presents how these soft computing techniques work and assesses their use for reliability prediction, and it discusses the parameters considered when estimating and predicting reliability. The study can be used to estimate and predict the reliability of instruments in medical systems as well as in software engineering, computer engineering, and mechanical engineering, and the concepts can be applied to both software and hardware reliability prediction using CBSE.

  10. Unsupervised User Similarity Mining in GSM Sensor Networks

    PubMed Central

    Shad, Shafqat Ali; Chen, Enhong

    2013-01-01

    Mobility data has attracted researchers in recent years because of its rich context and spatiotemporal nature; this information can be used for potential applications such as early warning systems, route prediction, traffic management, advertisement, social networking, and community finding. All of these applications rely on mobility profile building and user trend analysis, where mobility profile building is done through significant-place extraction, prediction of users' actual movements, and context awareness. However, extracting significant places and predicting users' actual movements for mobility profile building is not a trivial task. In this paper, we present a user similarity mining methodology based on an unsupervised clustering approach that builds user mobility profiles from the semantic tagging information provided by users and basic GSM network architecture properties. Because the mobility information is in low-level raw form, the proposed methodology converts it into high-level, meaningful information by using cell-Id location information, rather than previously used location-capturing methods such as GPS, infrared, and WiFi, for profile mining and user similarity mining. PMID:23576905

  11. Multiphysics Analysis of a Solid-Core Nuclear Thermal Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a hypothetical solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics methodology. Formulations for heat transfer in solids and porous media were implemented and anchored. A two-pronged approach was employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of hydrogen dissociation and recombination on heat transfer and thrust performance. The formulations and preliminary results on both aspects are presented.

  12. Enhanced methods for determining operational capabilities and support costs of proposed space systems

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles

    1993-01-01

    This report documents the work accomplished during the first two years of research to provide support to NASA in predicting operational and support parameters and costs of proposed space systems. The first year's research developed a methodology for deriving reliability and maintainability (R & M) parameters based upon the use of regression analysis to establish empirical relationships between performance and design specifications and corresponding mean times of failure and repair. The second year focused on enhancements to the methodology, increased scope of the model, and software improvements. This follow-on effort expands the prediction of R & M parameters and their effect on the operations and support of space transportation vehicles to include other system components such as booster rockets and external fuel tanks. It also increases the scope of the methodology and the capabilities of the model as implemented by the software. The focus is on the failure and repair of major subsystems and their impact on vehicle reliability, turn times, maintenance manpower, and repairable spares requirements. The report documents the data utilized in this study, outlines the general methodology for estimating and relating R&M parameters, presents the analyses and results of application to the initial data base, and describes the implementation of the methodology through the use of a computer model. The report concludes with a discussion on validation and a summary of the research findings and results.

  13. Survival prediction of trauma patients: a study on US National Trauma Data Bank.

    PubMed

    Sefrioui, I; Amadini, R; Mauro, J; El Fallahi, A; Gabbrielli, M

    2017-12-01

    Exceptional circumstances such as major incidents or natural disasters may produce a huge number of victims who cannot all be saved immediately and simultaneously. In these cases it is important to define priorities and avoid wasting time and resources on victims who cannot be saved. The Trauma and Injury Severity Score (TRISS) methodology is the well-known standard system usually used by practitioners to predict the survival probability of trauma patients. However, practitioners have noted that the accuracy of TRISS predictions is unacceptable, especially for severely injured patients. Thus, alternative methods should be proposed. In this work we evaluate different approaches for predicting whether a patient will survive or not according to simple and easily measurable observations. We conducted a rigorous, comparative study based on the most important prediction techniques using real clinical data from the US National Trauma Data Bank. Empirical results show that well-known machine learning classifiers can outperform the TRISS methodology. Based on our findings, the best approach we evaluated is Random Forest: it has the best accuracy, area under the curve, and kappa statistic, as well as the second-best sensitivity and specificity, and a good calibration curve. Furthermore, its performance increases monotonically as the dataset size grows, meaning that it can effectively exploit incoming knowledge. Considering the whole dataset, it is always better than TRISS. Finally, we implemented a new tool to compute the survival of victims, which will help medical practitioners obtain better accuracy than the TRISS tools. Random Forest may be a good candidate solution for improving survival predictions over the standard TRISS methodology.
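
    The comparison the abstract reports can be sketched as follows: fit a Random Forest and a logistic-regression stand-in (TRISS itself is a specific logistic model with published coefficients, which is not reproduced here), then compare AUC and kappa. All data, feature names, and coefficients below are synthetic placeholders, not National Trauma Data Bank records.

```python
# Illustrative comparison of a Random Forest survival classifier against a
# simple logistic baseline standing in for TRISS; data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(3, 16, n),        # GCS-like score
    rng.normal(120, 25, n),        # systolic blood pressure
    rng.normal(18, 6, n),          # respiratory rate
    rng.integers(16, 90, n),       # age
])
logit = -4 + 0.35 * X[:, 0] - 0.01 * np.abs(X[:, 1] - 120) - 0.03 * (X[:, 3] - 16) / 10
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = survived

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # TRISS-like stand-in
forest = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_tr, y_tr)

for name, model in [("logistic baseline", baseline), ("random forest", forest)]:
    prob = model.predict_proba(X_te)[:, 1]
    print(name,
          "AUC=%.3f" % roc_auc_score(y_te, prob),
          "kappa=%.3f" % cohen_kappa_score(y_te, prob > 0.5))
```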

  14. A flexible data-driven comorbidity feature extraction framework.

    PubMed

    Sideris, Costas; Pourhomayoun, Mohammad; Kalantarian, Haik; Sarrafzadeh, Majid

    2016-06-01

    Disease and symptom diagnostic codes are a valuable resource for classifying and predicting patient outcomes. In this paper, we propose a novel methodology for utilizing disease diagnostic information in a predictive machine learning framework. Our methodology relies on a novel, clustering-based feature extraction framework using disease diagnostic information. To reduce the data dimensionality, we identify disease clusters using co-occurrence statistics. We optimize the number of generated clusters in the training set and then utilize these clusters as features to predict patient severity of condition and patient readmission risk. We build our clustering and feature extraction algorithm using the 2012 National Inpatient Sample (NIS), Healthcare Cost and Utilization Project (HCUP) which contains 7 million hospital discharge records and ICD-9-CM codes. The proposed framework is tested on Ronald Reagan UCLA Medical Center Electronic Health Records (EHR) from 3041 Congestive Heart Failure (CHF) patients and the UCI 130-US diabetes dataset that includes admissions from 69,980 diabetic patients. We compare our cluster-based feature set with the commonly used comorbidity frameworks including Charlson's index, Elixhauser's comorbidities and their variations. The proposed approach was shown to have significant gains between 10.7-22.1% in predictive accuracy for CHF severity of condition prediction and 4.65-5.75% in diabetes readmission prediction. Copyright © 2016 Elsevier Ltd. All rights reserved.
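
    A minimal sketch of the clustering-based feature idea described above: cluster diagnosis codes by their co-occurrence profiles, then count, for each patient, how many of their codes fall in each cluster and use those counts as model features. The codes, patient records, and cluster count are invented for illustration; the paper tunes the number of clusters on a training set.

```python
import numpy as np
from sklearn.cluster import KMeans

codes = ["428.0", "250.00", "401.9", "585.9", "414.01", "272.4", "496", "427.31"]
code_idx = {c: i for i, c in enumerate(codes)}

# Each patient record is a set of diagnosis codes (synthetic examples).
patients = [
    {"428.0", "401.9", "427.31"},
    {"250.00", "272.4", "401.9"},
    {"428.0", "585.9", "427.31"},
    {"496", "414.01", "401.9"},
]

# Binary patient-by-code matrix and the code-by-code co-occurrence matrix.
M = np.zeros((len(patients), len(codes)))
for p, record in enumerate(patients):
    for c in record:
        M[p, code_idx[c]] = 1
cooccurrence = M.T @ M

# Cluster codes by their co-occurrence profiles.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cooccurrence)

# Per-patient feature vector: number of diagnoses in each code cluster.
features = np.array([[np.sum(clusters[M[p] == 1] == k) for k in range(3)]
                     for p in range(len(patients))])
print(features)
```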

  15. Accuracy and Calibration of Computational Approaches for Inpatient Mortality Predictive Modeling.

    PubMed

    Nakas, Christos T; Schütz, Narayan; Werners, Marcus; Leichtle, Alexander B

    2016-01-01

    Electronic Health Record (EHR) data can be a key resource for decision-making support in clinical practice in the "big data" era. The complete database from early 2012 to late 2015 involving hospital admissions to Inselspital Bern, the largest Swiss University Hospital, was used in this study, involving over 100,000 admissions. Age, sex, and initial laboratory test results were the features/variables of interest for each admission, the outcome being inpatient mortality. Computational decision support systems were utilized for the calculation of the risk of inpatient mortality. We assessed the recently proposed Acute Laboratory Risk of Mortality Score (ALaRMS) model, and further built generalized linear models, generalized estimating equations, artificial neural networks, and decision tree systems for the predictive modeling of the risk of inpatient mortality. The Area Under the ROC Curve (AUC) for ALaRMS marginally corresponded to the anticipated accuracy (AUC = 0.858). Penalized logistic regression methodology provided a better result (AUC = 0.872). Decision tree and neural network-based methodology provided even higher predictive performance (up to AUC = 0.912 and 0.906, respectively). Additionally, decision tree-based methods can efficiently handle Electronic Health Record (EHR) data that have a significant amount of missing records (in up to >50% of the studied features) eliminating the need for imputation in order to have complete data. In conclusion, we show that statistical learning methodology can provide superior predictive performance in comparison to existing methods and can also be production ready. Statistical modeling procedures provided unbiased, well-calibrated models that can be efficient decision support tools for predicting inpatient mortality and assigning preventive measures.
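
    The kind of comparison reported above can be sketched with a penalized logistic regression versus a tree ensemble, scored by cross-validated AUC; tree-based learners can also handle missing laboratory values natively, as the abstract notes. The features and outcome below are synthetic stand-ins, not Inselspital data, and the specific estimators are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 20000
X = rng.normal(size=(n, 10))                       # age, sex, lab results (stand-ins)
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 3.5)))
y = (rng.random(n) < risk).astype(int)             # inpatient mortality indicator

# Simulate missing laboratory values; gradient-boosted trees accept NaNs directly.
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.4] = np.nan

penalized = LogisticRegression(penalty="l2", C=0.1, max_iter=1000)
print("penalized LR AUC:",
      cross_val_score(penalized, X, y, scoring="roc_auc", cv=5).mean())

trees = HistGradientBoostingClassifier(random_state=0)
print("boosted trees AUC (with missing values):",
      cross_val_score(trees, X_missing, y, scoring="roc_auc", cv=5).mean())
```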

  16. Statistical Anomalies of Bitflips in SRAMs to Discriminate SBUs From MCUs

    NASA Astrophysics Data System (ADS)

    Clemente, Juan Antonio; Franco, Francisco J.; Villa, Francesca; Baylac, Maud; Rey, Solenne; Mecha, Hortensia; Agapito, Juan A.; Puchner, Helmut; Hubert, Guillaume; Velazco, Raoul

    2016-08-01

    Recently, the occurrence of multiple events in static tests has been investigated by checking the statistical distribution of the difference between the addresses of the words containing bitflips. That method has been successfully applied to Field Programmable Gate Arrays (FPGAs) and the original authors indicate that it is also valid for SRAMs. This paper presents a modified methodology that is based on checking the XORed addresses with bitflips, rather than on the difference. Irradiation tests on CMOS 130 & 90 nm SRAMs with 14-MeV neutrons have been performed to validate this methodology. Results in high-altitude environments are also presented and cross-checked with theoretical predictions. In addition, this methodology has also been used to detect modifications in the organization of said memories. Theoretical predictions have been validated with actual data provided by the manufacturer.
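
    The XOR idea can be illustrated with a small numerical sketch: for every pair of word addresses that contained bitflips, compute address_i XOR address_j; XOR values that occur far more often than chance would allow point to multiple-cell upsets (MCUs) rather than independent single-bit upsets (SBUs). The addresses and memory size below are randomly generated assumptions, not irradiation data.

```python
import numpy as np
from collections import Counter
from itertools import combinations

rng = np.random.default_rng(3)
memory_words = 2**20                                   # 1 Mword SRAM (assumption)
flipped = rng.choice(memory_words, size=200, replace=False)

# Inject a few artificial MCU pairs separated by a fixed address offset.
flipped = np.concatenate([flipped, flipped[:5] ^ 0x4])

xor_counts = Counter(int(a ^ b) for a, b in combinations(flipped, 2))

# Under a pure single-bit-upset hypothesis every XOR value is roughly equally
# likely, so repeated XOR values are statistically anomalous.
n_pairs = len(flipped) * (len(flipped) - 1) // 2
expected = n_pairs / memory_words                       # expected count per XOR value
for value, count in xor_counts.most_common(3):
    print(f"XOR=0x{value:x}: observed {count}, expected about {expected:.3f}")
```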

  17. Use of a GIS-based hybrid artificial neural network to prioritize the order of pipe replacement in a water distribution network.

    PubMed

    Ho, Cheng-I; Lin, Min-Der; Lo, Shang-Lien

    2010-07-01

    A methodology based on the integration of a seismic-based artificial neural network (ANN) model and a geographic information system (GIS) to assess water leakage and to prioritize pipeline replacement is developed in this work. Qualified pipeline break-event data derived from the Taiwan Water Corporation Pipeline Leakage Repair Management System were analyzed. "Pipe diameter," "pipe material," and "the number of magnitude-3+ earthquakes" were employed as the input factors of ANN, while "the number of monthly breaks" was used for the prediction output. This study is the first attempt to manipulate earthquake data in the break-event ANN prediction model. Spatial distribution of the pipeline break-event data was analyzed and visualized by GIS. Through this, the users can swiftly figure out the hotspots of the leakage areas. A northeastern township in Taiwan, frequently affected by earthquakes, is chosen as the case study. Compared to the traditional processes for determining the priorities of pipeline replacement, the methodology developed is more effective and efficient. Likewise, the methodology can overcome the difficulty of prioritizing pipeline replacement even in situations where the break-event records are unavailable.
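
    The ANN portion of the abstract can be sketched as a small regression network that maps pipe diameter, pipe material, and the monthly count of magnitude-3+ earthquakes to predicted monthly breaks, then ranks candidate pipes for replacement. The data, material coding, and network size are invented for illustration; the GIS visualization step is not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 800
diameter_mm = rng.choice([100, 150, 200, 300, 500], n)
material = rng.integers(0, 3, n)                 # 0=PVC, 1=ductile iron, 2=steel (assumed coding)
quakes_m3 = rng.poisson(1.5, n)                  # magnitude-3+ earthquakes in the month

X = np.column_stack([diameter_mm, material, quakes_m3])
monthly_breaks = (0.002 * (500 - diameter_mm) / 100
                  + 0.3 * quakes_m3 + rng.normal(0, 0.2, n)).clip(min=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                                   random_state=0))
model.fit(X, monthly_breaks)

# Rank hypothetical pipes by predicted breaks to prioritise replacement.
candidates = np.array([[100, 1, 4], [300, 0, 4], [500, 2, 0]])
print(model.predict(candidates))
```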

  18. Assessment of the Unstructured Grid Software TetrUSS for Drag Prediction of the DLR-F4 Configuration

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Frink, Neal T.

    2002-01-01

    An application of the NASA unstructured grid software system TetrUSS is presented for the prediction of aerodynamic drag on a transport configuration. The paper briefly describes the underlying methodology and summarizes the results obtained on the DLR-F4 transport configuration recently presented in the first AIAA computational fluid dynamics (CFD) Drag Prediction Workshop. TetrUSS is a suite of loosely coupled unstructured grid CFD codes developed at the NASA Langley Research Center. The meshing approach is based on the advancing-front and the advancing-layers procedures. The flow solver employs a cell-centered, finite volume scheme for solving the Reynolds Averaged Navier-Stokes equations on tetrahedral grids. For the present computations, flow in the viscous sublayer has been modeled with an analytical wall function. The emphasis of the paper is placed on the practicality of the methodology for accurately predicting aerodynamic drag data.

  19. Discriminant forest classification method and system

    DOEpatents

    Chen, Barry Y.; Hanley, William G.; Lemmond, Tracy D.; Hiller, Lawrence J.; Knapp, David A.; Mugge, Marshall J.

    2012-11-06

    A hybrid machine learning methodology and system for classification that combines classical random forest (RF) methodology with discriminant analysis (DA) techniques to provide enhanced classification capability. A DA technique which uses feature measurements of an object to predict its class membership, such as linear discriminant analysis (LDA) or Andersen-Bahadur linear discriminant technique (AB), is used to split the data at each node in each of its classification trees to train and grow the trees and the forest. When training is finished, a set of n DA-based decision trees of a discriminant forest is produced for use in predicting the classification of new samples of unknown class.

  20. A mechanics framework for a progressive failure methodology for laminated composites

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Lo, David C.

    1989-01-01

    A laminate strength and life prediction methodology has been postulated for laminated composites that accounts for the progressive development of microstructural damage up to structural failure. A damage-dependent constitutive model predicts, in an average sense, the stress redistribution that accompanies damage development in laminates. Each mode of microstructural damage is represented by a second-order tensor-valued internal state variable, which is a strain-like quantity. The mechanics framework, together with the global-local strategy for predicting laminate strength and life, is presented in the paper. The kinematic effects of damage are represented by effective engineering moduli in the global analysis, and the results of the global analysis provide the boundary conditions for the local ply-level stress analysis. Damage evolution laws are based on experimental results.

  1. On the spot damage detection methodology for highway bridges during natural crises : tech transfer summary.

    DOT National Transportation Integrated Search

    2010-07-01

    The objective of this work was to develop a low-cost portable damage detection tool to assess and predict damage areas in highway bridges. The proposed tool was based on standard vibration-based damage identification (VBDI) techniques but...

  2. Development of a 3D numerical methodology for fast prediction of gun blast induced loading

    NASA Astrophysics Data System (ADS)

    Costa, E.; Lagasco, F.

    2014-05-01

    In this paper, the development of a methodology based on semi-empirical models from the literature to carry out 3D prediction of the pressure loading on surfaces adjacent to a weapon system during firing is presented. This loading results from the impact of the blast wave generated by the projectile exiting the muzzle bore. When a pressure threshold level is exceeded, the loading is potentially capable of inducing unwanted damage to nearby hard structures as well as to frangible panels or electronic equipment. The implemented model can quickly predict the distribution of blast wave parameters over three-dimensional complex geometry surfaces when the weapon design and emplacement data as well as the propellant and projectile characteristics are available. Given these capabilities, the proposed methodology is envisaged for use in the preliminary design phase of a combat system to predict adverse effects and thereby identify the most appropriate countermeasures. By providing a preliminary but sensitive estimate of the operative environmental loading, this numerical tool represents a good alternative to more powerful but time-consuming advanced computational fluid dynamics tools, whose use can thus be limited to the final design phase.

  3. On-line capillary electrophoresis-based enzymatic methodology for the study of polymer-drug conjugates.

    PubMed

    Coussot, G; Ladner, Y; Bayart, C; Faye, C; Vigier, V; Perrin, C

    2015-01-09

    This work aims at studying the potential of an on-line capillary electrophoresis (CE)-based digestion methodology for evaluating the degradability of polymer-drug conjugates in the presence of free trypsin (in-solution digestion). A sandwich plug injection scheme with the transverse diffusion of laminar flow profiles (TDLFP) mode was used to achieve on-line digestions. Electrophoretic separation conditions were established using poly-L-lysine (PLL) as the reference substrate. Comparison with off-line digestion was carried out to demonstrate the feasibility of the proposed methodology. The applicability of the on-line CE-based digestion methodology was evaluated for two PLL-drug conjugates and for the first four generations of dendrigrafts of lysine (DGL). Different electrophoretic profiles showing the formation of di-, tri-, and tetralysine were observed for the PLL-drug conjugates and DGL. These findings are in good agreement with the nature of the linker used to attach the drug to the PLL structure and with the predicted degradability of DGL. The applicability of the present on-line methodology was also successfully demonstrated for the hydrolysis of protein conjugates. In summary, the described methodology provides a powerful tool for the rapid study of biodegradable polymers. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Evaluation of a Records-Review Surveillance System Used to Determine the Prevalence of Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Avchen, Rachel Nonkin; Wiggins, Lisa D.; Devine, Owen; Van Naarden Braun, Kim; Rice, Catherine; Hobson, Nancy C.; Schendel, Diana; Yeargin-Allsopp, Marshalyn

    2011-01-01

    We conducted the first study that estimates the sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of a population-based autism spectrum disorders (ASD) surveillance system developed at the Centers for Disease Control and Prevention. The system employs a records-review methodology that yields ASD…

  5. Multiphysics Thrust Chamber Modeling for Nuclear Thermal Propulsion

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Cheng, Gary; Chen, Yen-Sen

    2006-01-01

    The objective of this effort is to develop an efficient and accurate thermo-fluid computational methodology to predict environments for a solid-core, nuclear thermal engine thrust chamber. The computational methodology is based on an unstructured-grid, pressure-based computational fluid dynamics formulation. A two-pronged approach is employed in this effort: A detailed thermo-fluid analysis on a multi-channel flow element for mid-section corrosion investigation; and a global modeling of the thrust chamber to understand the effect of heat transfer on thrust performance. Preliminary results on both aspects are presented.

  6. NIR spectroscopy for the quality control of Moringa oleifera (Lam.) leaf powders: Prediction of minerals, protein and moisture contents.

    PubMed

    Rébufa, Catherine; Pany, Inès; Bombarda, Isabelle

    2018-09-30

    A rapid methodology was developed to simultaneously predict the water content and water activity (aw) of Moringa oleifera leaf powders (MOLP) using near-infrared (NIR) signatures and experimental sorption isotherms. NIR spectra of MOLP samples (n = 181) were recorded. A partial least squares regression model (PLS2) was obtained with low standard errors of prediction (SEP of 1.8% and 0.07 for water content and aw, respectively). Experimental sorption isotherms obtained at 20, 30 and 40 °C showed similar profiles. This result is particularly important for using MOLP in the food industry: a temperature variation in the drying process will not affect the available water content (shelf life). Nutrient contents based on protein and selected minerals (Ca, Fe, K) were also predicted from PLS1 models. Protein contents were well predicted (SEP of 2.3%). This methodology allows for an improvement in MOLP safety, quality control and traceability. Published by Elsevier Ltd.
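
    The PLS2 step can be illustrated with a single partial-least-squares model predicting the two responses (water content and aw) from NIR spectra at once, reporting a standard error of prediction per response. The spectra below are simulated toy data with an invented absorption band; the real model used the 181 MOLP samples described above.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(11)
n_samples, n_wavelengths = 181, 400
water = rng.uniform(3, 12, n_samples)                 # % water content
aw = 0.02 + 0.06 * water + rng.normal(0, 0.02, n_samples)

# Toy spectra: one latent absorption band scales with water content, plus noise.
band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 150) / 20) ** 2)
spectra = (water[:, None] * band[None, :]
           + rng.normal(0, 0.5, (n_samples, n_wavelengths)))

Y = np.column_stack([water, aw])
X_tr, X_te, Y_tr, Y_te = train_test_split(spectra, Y, test_size=0.3, random_state=0)

pls2 = PLSRegression(n_components=5).fit(X_tr, Y_tr)
Y_hat = pls2.predict(X_te)
for i, name in enumerate(["water content (%)", "a_w"]):
    sep = np.sqrt(mean_squared_error(Y_te[:, i], Y_hat[:, i]))
    print(f"SEP for {name}: {sep:.3f}")
```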

  7. Avoiding reification. Heuristic effectiveness of mathematics and the prediction of the Ω⁻ particle

    NASA Astrophysics Data System (ADS)

    Ginammi, Michele

    2016-02-01

    According to Steiner (1998), in contemporary physics new important discoveries are often obtained by means of strategies that rely on purely formal mathematical considerations. In such discoveries, mathematics seems to play a peculiar and controversial role, which apparently cannot be accounted for by standard methodological criteria. M. Gell-Mann and Y. Ne'eman's prediction of the Ω⁻ particle is usually considered a typical example of the application of this kind of strategy. According to Bangu (2008), this prediction is apparently based on the employment of a highly controversial principle, what he calls the "reification principle". Bangu himself takes this principle to be methodologically unjustifiable, but still indispensable to making the prediction logically sound. In the present paper I offer a new reconstruction of the reasoning that led to this prediction. By means of this reconstruction, I show that we do not need to postulate any "reificatory" role of mathematics in contemporary physics, and I contextually clarify the representative and heuristic role of mathematics in science.

  8. Application of Observing System Simulation Experiments (OSSEs) to determining science and user requirements for space-based missions

    NASA Astrophysics Data System (ADS)

    Atlas, R. M.

    2016-12-01

    Observing System Simulation Experiments (OSSEs) provide an effective method for evaluating the potential impact of proposed new observing systems, as well as for evaluating trade-offs in observing system design, and in developing and assessing improved methodology for assimilating new observations. As such, OSSEs can be an important tool for determining science and user requirements, and for incorporating these requirements into the planning for future missions. Detailed OSSEs have been conducted at NASA/GSFC and NOAA/AOML in collaboration with Simpson Weather Associates and operational data assimilation centers over the last three decades. These OSSEs determined correctly the quantitative potential for several proposed satellite observing systems to improve weather analysis and prediction prior to their launch, evaluated trade-offs in orbits, coverage and accuracy for space-based wind lidars, and were used in the development of the methodology that led to the first beneficial impacts of satellite surface winds on numerical weather prediction. In this talk, the speaker will summarize the development of OSSE methodology, early and current applications of OSSEs and how OSSEs will evolve in order to enhance mission planning.

  9. Creep Life Prediction of Ceramic Components Using the Finite Element Based Integrated Design Program (CARES/Creep)

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama M.; Powers, Lynn M.; Gyekenyesi, John P.

    1997-01-01

    The desirable properties of ceramics at high temperatures have generated interest in their use for structural applications such as in advanced turbine systems. Design lives for such systems can exceed 10,000 hours. Such long life requirements necessitate subjecting the components to relatively low stresses. The combination of high temperatures and low stresses typically places failure for monolithic ceramics in the creep regime. The objective of this work is to present a design methodology for predicting the lifetimes of structural components subjected to multiaxial creep loading. This methodology utilizes commercially available finite element packages and takes into account the time varying creep stress distributions (stress relaxation). In this methodology, the creep life of a component is divided into short time steps, during which, the stress and strain distributions are assumed constant. The damage, D, is calculated for each time step based on a modified Monkman-Grant creep rupture criterion. For components subjected to predominantly tensile loading, failure is assumed to occur when the normalized accumulated damage at any point in the component is greater than or equal to unity.
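
    The time-stepping damage bookkeeping described above can be sketched numerically: the stress history at a material point is discretised, a rupture time is evaluated for the stress in each step, and damage fractions dt / t_rupture are summed until D reaches unity. The rupture-time law, constants, and stress-relaxation curve below are placeholders, not the CARES/Creep model or its modified Monkman-Grant criterion.

```python
import numpy as np

def rupture_time_hours(stress_mpa):
    """Placeholder power-law rupture criterion standing in for the modified
    Monkman-Grant relation used by the actual methodology."""
    return 1.0e11 * stress_mpa ** -3.5

# Relaxing stress history at one material point (stress relaxation over time).
time_h = np.linspace(0.0, 20000.0, 2001)            # 10-hour steps
stress = 120.0 * np.exp(-time_h / 8000.0) + 40.0    # MPa, made-up relaxation curve

damage = 0.0
for i in range(1, len(time_h)):
    dt = time_h[i] - time_h[i - 1]
    damage += dt / rupture_time_hours(stress[i])
    if damage >= 1.0:
        print(f"predicted creep life ~ {time_h[i]:.0f} h (D reached 1)")
        break
else:
    print(f"no failure within {time_h[-1]:.0f} h, accumulated D = {damage:.2f}")
```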

  10. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  11. Methodology for estimating human perception to tremors in high-rise buildings

    NASA Astrophysics Data System (ADS)

    Du, Wenqi; Goh, Key Seng; Pan, Tso-Chien

    2017-07-01

    Human perception to tremors during earthquakes in high-rise buildings is usually associated with psychological discomfort such as fear and anxiety. This paper presents a methodology for estimating the level of perception to tremors for occupants living in high-rise buildings subjected to ground motion excitations. Unlike other approaches based on empirical or historical data, the proposed methodology performs a regression analysis using the analytical results of two generic models of 15 and 30 stories. The recorded ground motions in Singapore are collected and modified for structural response analyses. Simple predictive models are then developed to estimate the perception level to tremors based on a proposed ground motion intensity parameter—the average response spectrum intensity in the period range between 0.1 and 2.0 s. These models can be used to predict the percentage of occupants in high-rise buildings who may perceive the tremors at a given ground motion intensity. Furthermore, the models are validated with two recent tremor events reportedly felt in Singapore. It is found that the estimated results match reasonably well with the reports in the local newspapers and from the authorities. The proposed methodology is applicable to urban regions where people living in high-rise buildings might feel tremors during earthquakes.

  12. Accounting for Uncertainties in Strengths of SiC MEMS Parts

    NASA Technical Reports Server (NTRS)

    Nemeth, Noel; Evans, Laura; Beheim, Glen; Trapp, Mark; Jadaan, Osama; Sharpe, William N., Jr.

    2007-01-01

    A methodology has been devised for accounting for uncertainties in the strengths of silicon carbide structural components of microelectromechanical systems (MEMS). The methodology enables prediction of the probabilistic strengths of complexly shaped MEMS parts using data from tests of simple specimens. This methodology is intended to serve as a part of a rational basis for designing SiC MEMS, supplementing methodologies that have been borrowed from the art of designing macroscopic brittle material structures. The need for this or a similar methodology arises as a consequence of the fundamental nature of MEMS and the brittle silicon-based materials of which they are typically fabricated. When tested to fracture, MEMS and structural components thereof show wide part-to-part scatter in strength. The methodology involves the use of the Ceramics Analysis and Reliability Evaluation of Structures Life (CARES/Life) software in conjunction with the ANSYS Probabilistic Design System (PDS) software to simulate or predict the strength responses of brittle material components while simultaneously accounting for the effects of variability of geometrical features on the strength responses. As such, the methodology involves the use of an extended version of the ANSYS/CARES/PDS software system described in Probabilistic Prediction of Lifetimes of Ceramic Parts (LEW-17682-1/4-1), Software Tech Briefs supplement to NASA Tech Briefs, Vol. 30, No. 9 (September 2006), page 10. The ANSYS PDS software enables the ANSYS finite-element-analysis program to account for uncertainty in the design-and analysis process. The ANSYS PDS software accounts for uncertainty in material properties, dimensions, and loading by assigning probabilistic distributions to user-specified model parameters and performing simulations using various sampling techniques.

  13. Improving Computational Efficiency of Prediction in Model-Based Prognostics Using the Unscented Transform

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew John; Goebel, Kai Frank

    2010-01-01

    Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
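
    A compact sketch of the unscented-transform idea for EOL prediction: sigma points drawn from the current state estimate are each propagated through an end-of-life simulation, and the weighted results approximate the EOL mean and variance with only 2n+1 simulations. The degradation model and state values below are invented for illustration; the paper's solenoid-valve model is not reproduced.

```python
import numpy as np

def simulate_eol(state, dt=1.0, threshold=1.0):
    """Toy nonlinear degradation simulation: returns time until the damage
    variable crosses a failure threshold."""
    damage, rate = state
    t = 0.0
    while damage < threshold:
        damage += rate * (1.0 + 0.5 * damage) * dt   # damage grows faster as it accumulates
        t += dt
    return t

# Current state estimate (mean and covariance) from the prognostics filter.
mean = np.array([0.20, 0.004])                 # [damage, degradation rate]
cov = np.diag([0.01**2, 0.0008**2])

# Standard unscented-transform sigma points and weights (kappa formulation).
n = len(mean)
kappa = 3.0 - n
L = np.linalg.cholesky((n + kappa) * cov)
sigma_points = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
weights = np.array([kappa / (n + kappa)] + [0.5 / (n + kappa)] * (2 * n))

eol = np.array([simulate_eol(sp) for sp in sigma_points])
eol_mean = weights @ eol
eol_var = weights @ (eol - eol_mean) ** 2
print(f"EOL mean ~ {eol_mean:.1f}, std ~ {np.sqrt(eol_var):.1f} ({2*n+1} simulations)")
```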

  14. An experimental system for flood risk forecasting at global scale

    NASA Astrophysics Data System (ADS)

    Alfieri, L.; Dottori, F.; Kalas, M.; Lorini, V.; Bianchi, A.; Hirpa, F. A.; Feyen, L.; Salamon, P.

    2016-12-01

    Global flood forecasting and monitoring systems are nowadays a reality and are being applied by an increasing range of users and practitioners in disaster risk management. Furthermore, there is an increasing demand from users to integrate flood early warning systems with risk based forecasts, combining streamflow estimations with expected inundated areas and flood impacts. To this end, we have developed an experimental procedure for near-real time flood mapping and impact assessment based on the daily forecasts issued by the Global Flood Awareness System (GloFAS). The methodology translates GloFAS streamflow forecasts into event-based flood hazard maps based on the predicted flow magnitude and the forecast lead time and a database of flood hazard maps with global coverage. Flood hazard maps are then combined with exposure and vulnerability information to derive flood risk. Impacts of the forecasted flood events are evaluated in terms of flood prone areas, potential economic damage, and affected population, infrastructures and cities. To further increase the reliability of the proposed methodology we integrated model-based estimations with an innovative methodology for social media monitoring, which allows for real-time verification of impact forecasts. The preliminary tests provided good results and showed the potential of the developed real-time operational procedure in helping emergency response and management. In particular, the link with social media is crucial for improving the accuracy of impact predictions.

  15. Evaluation of relative response factor methodology for demonstrating attainment of ozone in Houston, Texas.

    PubMed

    Vizuete, William; Biton, Leiran; Jeffries, Harvey E; Couzo, Evan

    2010-07-01

    In 2007, the U.S. Environmental Protection Agency (EPA) released guidance on demonstrating attainment of the federal ozone (O3) standard. This guidance recommended that air quality model (AQM) predictions be used in a relative rather than an absolute sense. This is accomplished by using the ratio, rather than the absolute difference, of AQM O3 predictions from a historical year to an attainment year. This ratio of O3 concentrations, labeled the relative response factor (RRF), is multiplied by an average of observed concentrations at every monitor. In this analysis, we investigated whether the methodology used to calculate RRFs severs the source-receptor relationship for a given monitor. Model predictions were generated with a regulatory AQM system used to support the 2004 Houston-Galveston-Brazoria State Implementation Plan. Following the procedures in the EPA guidance, an attainment demonstration was completed using regulatory AQM predictions and measurements from the Houston ground-monitoring network. Results show that the model predictions used for the RRF calculation were often based on model conditions that were geographically remote from observations and counter to the wind flow. Many of the monitors used the same model predictions for an RRF, even if that O3 plume did not impact them. The RRF methodology thus severed the true source-receptor relationship for a monitor. This analysis also showed that model performance can influence RRF values, and values at monitoring sites appear to be sensitive to model bias. Results indicate an inverse linear correlation of RRFs with model bias at each monitor (R2 = 0.47), resulting in changes in future O3 design values of up to 5 parts per billion (ppb). These results suggest that the application of the RRF methodology in Houston, TX, should be changed from using all model predictions above 85 ppb to a method that removes any predictions that are not relevant to the observed source-receptor relationship.
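
    The arithmetic behind the attainment test the abstract critiques is simple: the future design value is the observed design value multiplied by the RRF, where the RRF is the ratio of mean modeled future to mean modeled baseline O3 on the selected high-O3 days near a monitor. The numbers below are illustrative only; the 85 ppb screening threshold is the one mentioned in the abstract.

```python
import numpy as np

observed_design_value_ppb = 92.0

# Modeled 8-h O3 near one monitor on the days used for the RRF (ppb).
model_baseline = np.array([96.0, 101.0, 93.0, 99.0, 105.0])
model_future = np.array([88.0, 92.0, 85.0, 90.0, 97.0])

# Guidance-style screening: keep days whose baseline prediction exceeds 85 ppb.
keep = model_baseline >= 85.0
rrf = model_future[keep].mean() / model_baseline[keep].mean()

future_design_value = observed_design_value_ppb * rrf
print(f"RRF = {rrf:.3f}, projected design value = {future_design_value:.1f} ppb")
```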

  16. Investigation of Super Learner Methodology on HIV-1 Small Sample: Application on Jaguar Trial Data.

    PubMed

    Houssaïni, Allal; Assoumou, Lambert; Marcelin, Anne Geneviève; Molina, Jean Michel; Calvez, Vincent; Flandre, Philippe

    2012-01-01

    Background. Many statistical models have been tested to predict phenotypic or virological response from genotypic data. A statistical framework called Super Learner has been introduced either to compare different methods/learners (discrete Super Learner) or to combine them into a Super Learner prediction method. Methods. The Jaguar trial is used to apply the Super Learner framework. The Jaguar study is an "add-on" trial comparing the efficacy of adding didanosine to an ongoing failing regimen. Our aim was also to investigate the impact of using different cross-validation strategies and different loss functions. Four different partitions between training and validation sets were tested through two loss functions. Six statistical methods were compared. We assess performance by evaluating R² values and accuracy by calculating the rates of patients being correctly classified. Results. Our results indicated that the more recent Super Learner methodology of building a new predictor based on a weighted combination of different methods/learners provided good performance. A simple linear model provided results similar to those of this new predictor. Slight discrepancies arose between the two loss functions investigated, and slight differences also arose between results based on cross-validated risks and results from the full dataset. The Super Learner methodology and the linear model correctly classified around 80% of patients. The difference between the lowest and highest rates is around 10 percent. The number of mutations retained in different learners also varies from one to 41. Conclusions. The more recent Super Learner methodology, which combines the predictions of many learners, provided good performance on our small dataset.
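
    A hedged sketch of the Super Learner idea: obtain cross-validated predictions from several learners, find non-negative weights that best combine them, and use the weighted ensemble as the final predictor; the discrete Super Learner instead picks the single learner with the lowest cross-validated risk. The data, learners, and weighting scheme below are synthetic stand-ins for the genotype-response setting, not the Jaguar analysis itself.

```python
import numpy as np
from scipy.optimize import nnls
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n, p = 120, 30                                    # patients x mutation indicators
X = rng.integers(0, 2, (n, p)).astype(float)
y = X[:, :5] @ np.array([0.6, 0.4, 0.3, 0.2, 0.2]) + rng.normal(0, 0.3, n)

learners = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "forest": RandomForestRegressor(n_estimators=200, random_state=0),
}

# Level-one matrix of cross-validated predictions (one column per learner).
Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in learners.values()])

# Non-negative least-squares weights, normalised to sum to one.
weights, _ = nnls(Z, y)
weights = weights / weights.sum()
print("Super Learner weights:", dict(zip(learners, np.round(weights, 3))))

# Discrete Super Learner: rank learners by cross-validated risk instead.
cv_risk = ((Z - y[:, None]) ** 2).mean(axis=0)
print("cross-validated MSE per learner:", dict(zip(learners, np.round(cv_risk, 3))))
```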

  17. The multi-copy simultaneous search methodology: a fundamental tool for structure-based drug design.

    PubMed

    Schubert, Christian R; Stultz, Collin M

    2009-08-01

    Fragment-based ligand design approaches, such as the multi-copy simultaneous search (MCSS) methodology, have proven to be useful tools in the search for novel therapeutic compounds that bind pre-specified targets of known structure. MCSS offers a variety of advantages over more traditional high-throughput screening methods, and has been applied successfully to challenging targets. The methodology is quite general and can be used to construct functionality maps for proteins, DNA, and RNA. In this review, we describe the main aspects of the MCSS method and outline the general use of the methodology as a fundamental tool to guide the design of de novo lead compounds. We focus our discussion on the evaluation of MCSS results and the incorporation of protein flexibility into the methodology. In addition, we demonstrate on several specific examples how the information arising from the MCSS functionality maps has been successfully used to predict ligand binding to protein targets and RNA.

  18. Application of new methodologies based on design of experiments, independent component analysis and design space for robust optimization in liquid chromatography.

    PubMed

    Debrus, Benjamin; Lebrun, Pierre; Ceccato, Attilio; Caliaro, Gabriel; Rozet, Eric; Nistor, Iolanda; Oprean, Radu; Rupérez, Francisco J; Barbas, Coral; Boulanger, Bruno; Hubert, Philippe

    2011-04-08

    HPLC separations of an unknown sample mixture and a pharmaceutical formulation have been optimized using a recently developed chemometric methodology proposed by W. Dewé et al. in 2004 and improved by P. Lebrun et al. in 2008. This methodology is based on experimental designs which are used to model retention times of compounds of interest. Then, the prediction accuracy and the optimal separation robustness, including the uncertainty study, were evaluated. Finally, the design space (ICH Q8(R1) guideline) was computed as the probability for a criterion to lie in a selected range of acceptance. Furthermore, the chromatograms were automatically read. Peak detection and peak matching were carried out with a previously developed methodology using independent component analysis published by B. Debrus et al. in 2009. The present successful applications strengthen the high potential of these methodologies for the automated development of chromatographic methods. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Predicting trends of invasive plants richness using local socio-economic data: An application in North Portugal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Mario, E-mail: mgsantoss@gmail.com; Freitas, Raul, E-mail: raulfreitas@portugalmail.com; Crespi, Antonio L., E-mail: aluis.crespi@gmail.com

    2011-10-15

    This study assesses the potential of an integrated methodology for predicting local trends in invasive exotic plant species (invasive richness) using indirect, regional information on human disturbance. The distribution of invasive plants was assessed in North Portugal using herbarium collections and local environmental, geophysical and socio-economic characteristics. Invasive richness response to anthropogenic disturbance was predicted using a dynamic model based on a sequential modeling process (stochastic dynamic methodology, StDM). Derived scenarios showed that invasive richness trends were clearly associated with ongoing socio-economic change. Simulations including scenarios of growing urbanization showed an increase in invasive richness, while simulations in municipalities with decreasing populations showed stable or decreasing levels of invasive richness. The model simulations demonstrate the interest and feasibility of using this methodology in disturbance ecology. Highlights: socio-economic data indicate human-induced disturbances; socio-economic development increases disturbance in ecosystems; disturbance creates opportunities for invasive plants; increased opportunities promote the richness of invasive plants; increases in invasive plant richness change natural ecosystems.

  20. Can weekly noise levels of urban road traffic, as predominant noise source, estimate annual ones?

    PubMed

    Prieto Gajardo, Carlos; Barrigón Morillas, Juan Miguel; Rey Gozalo, Guillermo; Vílchez-Gómez, Rosendo

    2016-11-01

    The effects of noise pollution on human quality of life and health were recognised by the World Health Organisation a long time ago. A crucial dilemma in the study of urban noise is finding proven methodologies that can, on the one hand, increase the quality of predictions and, on the other hand, save resources in spatial and temporal sampling. In this work the temporal structure of urban noise is studied from a different point of view. The methodology, based on Fourier analysis, is applied to several week-long measurements of urban noise, mainly from road traffic, carried out in two cities located on different continents and with different sociological lifestyles (Cáceres, Spain and Talca, Chile). Its capacity to predict annual noise levels from weekly measurements is studied.
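
    A rough sketch of the ingredients mentioned above: an hourly L_Aeq series for a measured week is decomposed with the FFT to expose its daily periodicity, and the energetic (logarithmic) average of the week is compared against the annual level it is meant to estimate. The synthetic data and the simple week-average estimator are illustrative assumptions, not the paper's Fourier-based methodology.

```python
import numpy as np

rng = np.random.default_rng(8)
hours_year = 24 * 365
t = np.arange(hours_year)

# Synthetic hourly road-traffic noise: daily cycle + weekend dip + random noise.
daily = 6.0 * np.cos(2 * np.pi * (t % 24 - 15) / 24)
weekly = -2.0 * ((t // 24) % 7 >= 5)
laeq_year = 65.0 + daily + weekly + rng.normal(0, 1.5, hours_year)

def energetic_average(levels_db):
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

week = laeq_year[:24 * 7]                         # one measured week
spectrum = np.abs(np.fft.rfft(week - week.mean()))
freqs = np.fft.rfftfreq(len(week), d=1.0)         # cycles per hour
dominant_period_h = 1.0 / freqs[np.argmax(spectrum)]

print(f"dominant period in the weekly series: {dominant_period_h:.1f} h")
print(f"annual level   : {energetic_average(laeq_year):.1f} dB")
print(f"weekly estimate: {energetic_average(week):.1f} dB")
```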

  1. Multivariate Formation Pressure Prediction with Seismic-derived Petrophysical Properties from Prestack AVO inversion and Poststack Seismic Motion Inversion

    NASA Astrophysics Data System (ADS)

    Yu, H.; Gu, H.

    2017-12-01

    A novel multivariate seismic formation pressure prediction methodology is presented, which incorporates high-resolution seismic velocity data from prestack AVO inversion and petrophysical data (porosity and shale volume) derived from poststack seismic motion inversion. In contrast to traditional seismic formation pressure prediction methods, the proposed methodology is based on a multivariate pressure prediction model and uses a trace-by-trace multivariate regression analysis on seismic-derived petrophysical properties to calibrate model parameters, making accurate predictions with higher resolution in both the vertical and lateral directions. With the prestack time migration velocity as the initial velocity model, AVO inversion was first applied to the prestack dataset to obtain a higher-frequency, high-resolution seismic velocity to be used as the velocity input for seismic pressure prediction, and a density dataset to calculate an accurate overburden pressure (OBP). Seismic Motion Inversion (SMI) is an inversion technique based on Markov Chain Monte Carlo simulation; both structural variability and similarity of seismic waveform are used to incorporate well log data to characterize the variability of the property to be obtained. In this research, porosity and shale volume are first interpreted on well logs and then combined with poststack seismic data using SMI to build porosity and shale volume datasets for seismic pressure prediction. A multivariate effective stress model is used to convert the velocity, porosity and shale volume datasets to effective stress. After a thorough study of the regional stratigraphic and sedimentary characteristics, a regional normally compacted interval model is built, and the coefficients in the multivariate prediction model are determined by a trace-by-trace multivariate regression analysis on the petrophysical data. The coefficients are used to convert the velocity, porosity and shale volume datasets to effective stress and then to calculate formation pressure with the OBP. Application of the proposed methodology to a research area in the East China Sea has shown that the method can bridge the gap between seismic and well-log pressure prediction and give predicted pressure values close to pressure measurements from well testing.
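
    The calibration-then-prediction workflow can be sketched in a heavily simplified form: in an assumed normally compacted interval the effective stress is taken as overburden minus hydrostatic pressure, a multivariate model linking it to velocity, porosity and shale volume is regressed, and the fitted coefficients are then applied along the trace to obtain pore pressure as OBP minus effective stress. The linear functional form, gradients, and synthetic trace below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
depth_m = np.linspace(1500, 3500, n)

# Seismic-derived properties along one trace (synthetic placeholders).
porosity = 0.35 * np.exp(-depth_m / 3000.0) + rng.normal(0, 0.01, n)
vshale = np.clip(0.4 + rng.normal(0, 0.05, n), 0, 1)
velocity = 1800 + 0.9 * depth_m - 2500 * porosity + rng.normal(0, 30, n)

overburden_mpa = 0.022 * depth_m            # ~22 MPa/km lithostatic gradient (assumed)
hydrostatic_mpa = 0.0098 * depth_m

# Calibration on the assumed normally compacted interval (< 2500 m here).
normal = depth_m < 2500
sigma_eff_normal = overburden_mpa[normal] - hydrostatic_mpa[normal]
A = np.column_stack([np.ones(normal.sum()), velocity[normal],
                     porosity[normal], vshale[normal]])
coeffs, *_ = np.linalg.lstsq(A, sigma_eff_normal, rcond=None)

# Apply the regression to the whole trace and convert to pore pressure.
A_all = np.column_stack([np.ones(n), velocity, porosity, vshale])
sigma_eff = A_all @ coeffs
pore_pressure = overburden_mpa - sigma_eff
print("predicted pore pressure at", int(depth_m[-1]), "m:",
      round(pore_pressure[-1], 1), "MPa")
```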

  2. Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors

    NASA Technical Reports Server (NTRS)

    Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele

    2010-01-01

    This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.

  3. A methodology to predict damage initiation, damage growth and residual strength in titanium matrix composites

    NASA Technical Reports Server (NTRS)

    Bakuckas, J. G., Jr.; Johnson, W. S.

    1994-01-01

    In this research, a methodology to predict damage initiation, damage growth, fatigue life, and residual strength in titanium matrix composites (TMC) is outlined. Emphasis was placed on micromechanics-based engineering approaches. Damage initiation was predicted using a local effective strain approach. A finite element analysis verified the prevailing assumptions made in the formulation of this model. Damage growth, namely, fiber-bridged matrix crack growth, was evaluated using a fiber bridging (FB) model which accounts for thermal residual stresses. This model combines continuum fracture mechanics and micromechanics analyses yielding stress-intensity factor solutions for fiber-bridged matrix cracks. It is assumed in the FB model that fibers in the wake of the matrix crack are idealized as a closure pressure, and an unknown constant frictional shear stress is assumed to act along the debond length of the bridging fibers. This frictional shear stress was used as a curve fitting parameter to the available experimental data. Fatigue life and post-fatigue residual strength were predicted based on the axial stress in the first intact 0 degree fiber calculated using the FB model and a three-dimensional finite element analysis.

  4. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    PubMed

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
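
    As a minimal illustration of the multicollinearity issue described above (not an analysis from the paper), the snippet below builds a reward-outcome series and a reward-prediction-error series that are computed close in time and checks how strongly they correlate before they would enter a GLM; the trial counts and signal model are hypothetical.

    ```python
    # Illustrative check of multicollinearity between two model-based regressors,
    # e.g. a reward prediction error (RPE) and a reward outcome series generated
    # very close in time. Variable names and data are hypothetical placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    outcome = rng.choice([0.0, 1.0], size=200)           # delivered reward on each trial
    expected = 0.5 + 0.3 * rng.standard_normal(200)       # model-derived expected value
    prediction_error = outcome - expected                 # RPE shares variance with outcome

    r = np.corrcoef(outcome, prediction_error)[0, 1]
    vif = 1.0 / (1.0 - r**2)                              # variance inflation factor for two regressors
    print(f"correlation = {r:.2f}, VIF = {vif:.2f}")      # high values flag unstable GLM estimates
    ```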

  5. Characterization of Damage Accumulation in a C/SiC Composite at Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Telesman, Jack; Verrilli, Mike; Ghosn, Louis; Kantzos, Pete

    1997-01-01

    This research is part of a program aimed at evaluating and demonstrating the ability of candidate CMC materials to serve in a variety of applications in reusable launch vehicles. The life and durability of these materials in rocket and engine applications are of major concern, and there is a need to develop and validate a life prediction methodology. In this study, material characterization and mechanical testing were performed to identify the failure modes, degradation mechanisms, and progression of damage in a C/SiC composite at elevated temperatures. The motivation for this work is to provide the relevant damage information that will form the basis for the development of a physically based life prediction methodology.

  6. An approach to software cost estimation

    NASA Technical Reports Server (NTRS)

    Mcgarry, F.; Page, J.; Card, D.; Rohleder, M.; Church, V.

    1984-01-01

    A general procedure for software cost estimation in any environment is outlined. The basic concepts of work and effort estimation are explained, some popular resource estimation models are reviewed, and the accuracy of resource estimates is discussed. A software cost prediction procedure based on the experiences of the Software Engineering Laboratory in the flight dynamics area and incorporating management expertise, cost models, and historical data is described. The sources of information and relevant parameters available during each phase of the software life cycle are identified. The methodology suggested incorporates these elements into a customized management tool for software cost prediction. Detailed guidelines for estimation in the flight dynamics environment, developed using this methodology, are presented.

  7. A review of propeller noise prediction methodology: 1919-1994

    NASA Technical Reports Server (NTRS)

    Metzger, F. Bruce

    1995-01-01

    This report summarizes a review of the literature regarding propeller noise prediction methods. The review is divided into six sections: (1) early methods; (2) more recent methods based on earlier theory; (3) more recent methods based on the Acoustic Analogy; (4) more recent methods based on Computational Acoustics; (5) empirical methods; and (6) broadband methods. The report concludes that there are a large number of noise prediction procedures available that vary markedly in complexity. Deficiencies in the accuracy of the methods may in many cases be related not to the methods themselves but to the accuracy and detail of the aerodynamic inputs used to calculate noise. The steps recommended in the report to provide accurate and easy-to-use prediction methods are: (1) identify reliable test data; (2) define and conduct test programs to fill gaps in the existing data base; (3) identify the most promising prediction methods; (4) evaluate promising prediction methods relative to the data base; (5) identify and correct the weaknesses in the prediction methods, including lack of user friendliness, and include features now available only in research codes; (6) confirm the accuracy of improved prediction methods against the data base; and (7) make the methods widely available and provide training in their use.

  8. Selection of organisms for the co-evolution-based study of protein interactions.

    PubMed

    Herman, Dorota; Ochoa, David; Juan, David; Lopez, Daniel; Valencia, Alfonso; Pazos, Florencio

    2011-09-12

    The prediction and study of protein interactions and functional relationships based on the similarity of phylogenetic trees, exemplified by the mirrortree and related methodologies, is widely used. Although a dependence between the performance of these methods and the set of organisms used to build the trees was suspected, it had not been assessed in an exhaustive way, and previous works generally used as many organisms as possible. In this work we assess the effect of using different sets of organisms (chosen according to various phylogenetic criteria) on the performance of this methodology in detecting protein interactions of different nature. We show that the performance of three mirrortree-related methodologies depends on the set of organisms used for building the trees, and that it is not always directly related to the number of organisms in a simple way. Certain subsets of organisms seem to be more suitable for the prediction of certain types of interactions. This relationship between the type of interaction and the optimal set of organisms for detecting it makes sense in the light of the phylogenetic distribution of the organisms and the nature of the interactions. To obtain optimal performance when predicting protein interactions, it is recommended to use different sets of organisms depending on the available computational resources and data, as well as the type of interactions of interest.
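
    For readers unfamiliar with the underlying mirrortree idea, the sketch below is a generic illustration (not the authors' implementation): the co-evolution score of two protein families is the correlation between their inter-organism distance matrices, and restricting the calculation to different organism subsets changes the score; the distance matrices here are synthetic placeholders.

    ```python
    # Minimal mirrortree-style sketch: score co-evolution between two protein families
    # as the Pearson correlation of their inter-organism distance matrices, restricted
    # to a chosen subset of organisms. Distance matrices below are random placeholders.
    import numpy as np

    def mirrortree_score(dist_a, dist_b, organisms):
        """Correlate the upper triangles of two distance matrices over selected organisms."""
        idx = np.asarray(list(organisms))
        sub_a = dist_a[np.ix_(idx, idx)]
        sub_b = dist_b[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), k=1)
        return np.corrcoef(sub_a[iu], sub_b[iu])[0, 1]

    rng = np.random.default_rng(1)
    n = 20                                    # number of organisms with both proteins
    base = rng.random((n, n)); base = (base + base.T) / 2; np.fill_diagonal(base, 0.0)
    noise = rng.random((n, n)); noise = (noise + noise.T) / 2; np.fill_diagonal(noise, 0.0)
    dist_a = base
    dist_b = 0.8 * base + 0.2 * noise         # partially co-evolving partner

    print(mirrortree_score(dist_a, dist_b, range(n)))         # all organisms
    print(mirrortree_score(dist_a, dist_b, range(0, n, 2)))   # a phylogenetically chosen subset
    ```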

  9. Selection of organisms for the co-evolution-based study of protein interactions

    PubMed Central

    2011-01-01

    Background The prediction and study of protein interactions and functional relationships based on the similarity of phylogenetic trees, exemplified by the mirrortree and related methodologies, is widely used. Although a dependence between the performance of these methods and the set of organisms used to build the trees was suspected, it had not been assessed in an exhaustive way, and previous works generally used as many organisms as possible. In this work we assess the effect of using different sets of organisms (chosen according to various phylogenetic criteria) on the performance of this methodology in detecting protein interactions of different nature. Results We show that the performance of three mirrortree-related methodologies depends on the set of organisms used for building the trees, and that it is not always directly related to the number of organisms in a simple way. Certain subsets of organisms seem to be more suitable for the prediction of certain types of interactions. This relationship between the type of interaction and the optimal set of organisms for detecting it makes sense in the light of the phylogenetic distribution of the organisms and the nature of the interactions. Conclusions To obtain optimal performance when predicting protein interactions, it is recommended to use different sets of organisms depending on the available computational resources and data, as well as the type of interactions of interest. PMID:21910884

  10. Roughness Based Crossflow Transition Control: A Computational Assessment

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.

    2009-01-01

    A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness-based crossflow transition control has been demonstrated under flight conditions and (ii) analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness-based transition control involves controlled seeding of suitable, subdominant crossflow modes so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high-frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher-fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data are currently unavailable.

  11. Delamination onset in polymeric composite laminates under thermal and mechanical loads

    NASA Technical Reports Server (NTRS)

    Martin, Roderick H.

    1991-01-01

    A fracture mechanics damage methodology to predict edge delamination is described. The methodology accounts for residual thermal stresses, cyclic thermal stresses, and cyclic mechanical stresses. The modeling is based on classical lamination theory and a sublaminate theory. The prediction methodology determines the strain energy release rate, G, at the edge of a laminate and compares it with the fatigue and fracture toughness of the composite. To verify the methodology, isothermal static tests at 23, 125, and 175 C and tension-tension fatigue tests at 23 and 175 C were conducted on laminates. The material system used was a carbon/bismaleimide, IM7/5260. Two quasi-isotropic layups were used. Also, 24-ply unidirectional double cantilever beam specimens were tested to determine the fatigue and fracture toughness of the composite at different temperatures. Raising the temperature had the effect of increasing the value of G at the edge for these layups and of lowering the fatigue and fracture toughness of the composite. The static stress to edge delamination was not affected by temperature, but the number of cycles to edge delamination decreased.

  12. Corridor-based forecasts of work-zone impacts for freeways.

    DOT National Transportation Integrated Search

    2011-08-09

    This project developed an analysis methodology and associated software implementation for the evaluation of significant work zone impacts on freeways in North Carolina. The FREEVAL-WZ software tool allows the analyst to predict the operational im...

  13. Prediction of Scar Size in Rats Six Months after Burns Based on Early Post-injury Polarization-Sensitive Optical Frequency Domain Imaging

    PubMed Central

    Kravez, Eli; Villiger, Martin; Bouma, Brett; Yarmush, Martin; Yakhini, Zohar; Golberg, Alexander

    2017-01-01

    Hypertrophic scars remain a major clinical problem in the rehabilitation of burn survivors and lead to physical, aesthetic, functional, psychological, and social stresses. Prediction of healing outcome and scar formation is critical for deciding on the best treatment plan. Both subjective and objective scales have been devised to assess scar severity. Whereas scales of the first type preclude cross-comparison between observers, those of the second type are based on imaging modalities that either lack the ability to image individual layers of the scar or only provide very limited fields of view. To overcome these deficiencies, this work aimed at developing a predictive model of scar formation based on polarization sensitive optical frequency domain imaging (PS-OFDI), which offers comprehensive subsurface imaging. We report on a linear regression model that predicts the size of a scar 6 months after third-degree burn injuries in rats based on early post-injury PS-OFDI and measurements of scar area. When predicting the scar area at month 6 based on the homogeneity and the degree of polarization (DOP), which are signatures derived from the PS-OFDI signal, together with the scar area measured at months 2 and 3, we achieved predictions with a Pearson coefficient of 0.57 (p < 10⁻⁴) and a Spearman coefficient of 0.66 (p < 10⁻⁵), which were significant in comparison to prediction models trained on randomly shuffled data. As the model in this study was developed on the rat burn model, the methodology can be used in larger studies that are more relevant to humans; however, the actual model inferred herein is not translatable. Nevertheless, our analysis and modeling methodology can be extended to perform larger wound healing studies in different contexts. This study opens new possibilities for quantitative and objective assessment of scar severity that could help to determine the optimal course of therapy. PMID:29249978
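
    The snippet below is a hedged sketch of this kind of linear model: it predicts month-6 scar area from homogeneity, DOP and the areas measured at months 2 and 3, reports Pearson and Spearman correlations, and compares against a shuffled-label baseline; all data are synthetic and the feature construction is assumed, not taken from the study.

    ```python
    # Hedged sketch of a linear model predicting month-6 scar area from PS-OFDI-derived
    # homogeneity, degree of polarization (DOP) and earlier scar areas, with a
    # shuffled-label baseline. All data below are synthetic placeholders.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 40
    homogeneity = rng.random(n)
    dop = rng.random(n)
    area_m2 = rng.random(n) * 5.0
    area_m3 = 0.8 * area_m2 + 0.2 * rng.random(n)
    area_m6 = 0.5 * area_m3 + 0.3 * dop + 0.2 * rng.standard_normal(n)  # synthetic target

    X = np.column_stack([homogeneity, dop, area_m2, area_m3, np.ones(n)])
    beta, *_ = np.linalg.lstsq(X, area_m6, rcond=None)
    pred = X @ beta

    print("Pearson:", stats.pearsonr(pred, area_m6))
    print("Spearman:", stats.spearmanr(pred, area_m6))

    # Shuffled-label baseline: refit on permuted targets to gauge chance-level correlation.
    shuffled = rng.permutation(area_m6)
    beta_s, *_ = np.linalg.lstsq(X, shuffled, rcond=None)
    print("Shuffled Pearson:", stats.pearsonr(X @ beta_s, shuffled))
    ```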

  14. Analytical and simulator study of advanced transport

    NASA Technical Reports Server (NTRS)

    Levison, W. H.; Rickard, W. W.

    1982-01-01

    An analytic methodology, based on the optimal-control pilot model, was demonstrated for assessing longitudinal-axis handling qualities of transport aircraft in final approach. Calibration of the methodology is largely in terms of closed-loop performance requirements, rather than specific vehicle response characteristics, and is based on a combination of published criteria, pilot preferences, physical limitations, and engineering judgment. Six longitudinal-axis approach configurations were studied covering a range of handling qualities problems, including the presence of flexible aircraft modes. The analytical procedure was used to obtain predictions of Cooper-Harper ratings, a scalar quadratic performance index, and rms excursions of important system variables.

  15. Probabilistic fatigue life prediction of metallic and composite materials

    NASA Astrophysics Data System (ADS)

    Xiang, Yibing

    Fatigue is one of the most common failure modes for engineering structures such as aircraft, rotorcraft and other aviation transports. Both metallic materials and composite materials are widely used and affected by fatigue damage. Large uncertainties arise from material properties, measurement noise, imperfect models, future anticipated loads and environmental conditions. These uncertainties are critical issues for accurate remaining useful life (RUL) prediction for engineering structures in service, and probabilistic fatigue prognosis that accounts for them is of great importance for structural safety. The objective of this study is to develop probabilistic fatigue life prediction models for metallic materials and composite materials. A fatigue model based on crack growth analysis and the equivalent initial flaw size concept is proposed for metallic materials. The developed model is then extended to include structural geometry effects (notch effect), environmental effects (corroded specimens) and manufacturing effects (shot peening). Because of their inhomogeneity and anisotropy, the fatigue model suitable for metallic materials cannot be directly applied to composite materials; a composite fatigue life prediction model is therefore proposed based on a mixed-mode delamination growth model and a stiffness degradation law. After the development of these deterministic fatigue models for metallic and composite materials, a general probabilistic life prediction methodology is developed. The proposed methodology employs an efficient Inverse First-Order Reliability Method (IFORM) for uncertainty propagation in fatigue life prediction, and an equivalent stress transformation has been developed to enhance the computational efficiency under realistic random amplitude loading. A systematic reliability-based maintenance optimization framework is proposed for fatigue risk management and mitigation of engineering structures.
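
    As an illustration of the probabilistic, crack-growth-based part of this approach (not the author's code), the sketch below samples an equivalent initial flaw size distribution and integrates a Paris-type growth law to produce a distribution of fatigue lives; the Paris constants, flaw-size distribution and geometry factor are hypothetical placeholders.

    ```python
    # Illustrative Monte Carlo sketch of probabilistic fatigue life based on crack growth
    # with an equivalent initial flaw size (EIFS). Constants and distributions are assumed.
    import numpy as np

    def cycles_to_failure(a0, a_crit, C, m, dsigma, Y=1.0, da=1e-5):
        """Integrate da/dN = C*(dK)^m numerically from the initial flaw to the critical size."""
        a, n = a0, 0.0
        while a < a_crit:
            dK = Y * dsigma * np.sqrt(np.pi * a)   # stress intensity factor range (MPa*sqrt(m))
            n += da / (C * dK**m)                  # cycles spent growing the crack by da
            a += da
        return n

    rng = np.random.default_rng(0)
    eifs = rng.lognormal(mean=np.log(2e-4), sigma=0.3, size=1000)   # EIFS samples (m)
    lives = np.array([cycles_to_failure(a0, 5e-3, C=1e-11, m=3.0, dsigma=120.0) for a0 in eifs])

    print("median life:", np.median(lives), "cycles")
    print("10th percentile life:", np.percentile(lives, 10), "cycles")
    ```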

  16. Predictions of CD4 lymphocytes’ count in HIV patients from complete blood count

    PubMed Central

    2013-01-01

    Background HIV diagnosis, prognosis and treatment require the T CD4 lymphocyte count from flow cytometry, an expensive technique often not available to people in developing countries. The aim of this work is to apply a previously developed methodology that predicts the T CD4 lymphocyte value from the total white blood cell (WBC) count and lymphocyte count, applying set theory to information taken from the Complete Blood Count (CBC). Methods Set theory was used to classify into groups named A, B, C and D the number of leucocytes/mm3, lymphocytes/mm3, and the CD4/μL subpopulation from flow cytometry of 800 HIV-diagnosed patients. The unions between sets A and C, and B and D, were assessed, and the intersection between both unions was described in order to establish the belonging percentage to these sets. Results were classified into eight ranges of 1000 leucocytes/mm3, calculating the belonging percentage of each range with respect to the whole sample. Results The intersection (A ∪ C) ∩ (B ∪ D) showed a prediction effectiveness of 81.44% for the range between 4000 and 4999 leukocytes, 91.89% for the range between 3000 and 3999, and 100% for the range below 3000. Conclusions The usefulness and clinical applicability of a methodology based on set theory were confirmed for predicting the T CD4 lymphocyte value, beginning with the WBC and lymphocyte counts from the CBC. This methodology is new, objective, and has lower costs than flow cytometry, which is currently considered the gold standard. PMID:24034560

  17. A modelling framework to predict bat activity patterns on wind farms: An outline of possible applications on mountain ridges of North Portugal.

    PubMed

    Silva, Carmen; Cabral, João Alexandre; Hughes, Samantha Jane; Santos, Mário

    2017-03-01

    Worldwide ecological impact assessments of wind farms have gathered relevant information on bat activity patterns. Since conventional bat study methods require intensive field work, the prediction of bat activity might prove useful by anticipating activity patterns and estimating attractiveness concomitant with the wind farm location. A novel framework was developed, based on the stochastic dynamic methodology (StDM) principles, to predict bat activity on mountain ridges with wind farms. We illustrate the framework application using regional data from North Portugal by merging information from several environmental monitoring programmes associated with diverse wind energy facilities that enable integrating the multifactorial influences of meteorological conditions, land cover and geographical variables on bat activity patterns. Output from this innovative methodology can anticipate episodes of exceptional bat activity, which, if correlated with collision probability, can be used to guide wind farm management strategy such as halting wind turbines during hazardous periods. If properly calibrated with regional gradients of environmental variables from mountain ridges with windfarms, the proposed methodology can be used as a complementary tool in environmental impact assessments and ecological monitoring, using predicted bat activity to assist decision making concerning the future location of wind farms and the implementation of effective mitigation measures. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. A Comparison of Hyperelastic Warping of PET Images with Tagged MRI for the Analysis of Cardiac Deformation

    DOE PAGES

    Veress, Alexander I.; Klein, Gregory; Gullberg, Grant T.

    2013-01-01

    The objectives of the following research were to evaluate the utility of a deformable image registration technique known as hyperelastic warping for the measurement of local strains in the left ventricle through the analysis of clinical, gated PET image datasets. Two normal human male subjects were sequentially imaged with PET and tagged MRI imaging. Strain predictions were made for systolic contraction using warping analyses of the PET images and HARP-based strain analyses of the MRI images. Coefficient of determination R² values were computed for the comparison of circumferential and radial strain predictions produced by each methodology. There was good correspondence between the methodologies, with R² values of 0.78 for the radial strains of both hearts and R² = 0.81 and R² = 0.83 for the circumferential strains. The strain predictions were not statistically different (P ≤ 0.01). A series of sensitivity results indicated that the methodology was relatively insensitive to alterations in image intensity, random image noise, and alterations in fiber structure. This study demonstrated that warping was able to provide strain predictions of systolic contraction of the LV consistent with those provided by tagged MRI.

  19. Predicting pedestrian flow: a methodology and a proof of concept based on real-life data.

    PubMed

    Davidich, Maria; Köster, Gerta

    2013-01-01

    Building a reliable predictive model of pedestrian motion is very challenging: Ideally, such models should be based on observations made in both controlled experiments and in real-world environments. De facto, models are rarely based on real-world observations due to the lack of available data; instead, they are largely based on intuition and, at best, literature values and laboratory experiments. Such an approach is insufficient for reliable simulations of complex real-life scenarios: For instance, our analysis of pedestrian motion under natural conditions at a major German railway station reveals that the values for free-flow velocities and the flow-density relationship differ significantly from widely used literature values. It is thus necessary to calibrate and validate the model against relevant real-life data to make it capable of reproducing and predicting real-life scenarios. In this work we aim at constructing such realistic pedestrian stream simulation. Based on the analysis of real-life data, we present a methodology that identifies key parameters and interdependencies that enable us to properly calibrate the model. The success of the approach is demonstrated for a benchmark model, a cellular automaton. We show that the proposed approach significantly improves the reliability of the simulation and hence the potential prediction accuracy. The simulation is validated by comparing the local density evolution of the measured data to that of the simulated data. We find that for our model the most sensitive parameters are: the source-target distribution of the pedestrian trajectories, the schedule of pedestrian appearances in the scenario and the mean free-flow velocity. Our results emphasize the need for real-life data extraction and analysis to enable predictive simulations.

  20. A Long Short-Term Memory deep learning network for the prediction of epileptic seizures using EEG signals.

    PubMed

    Tsiouris, Κostas Μ; Pezoulas, Vasileios C; Zervakis, Michalis; Konitsiotis, Spiros; Koutsouris, Dimitrios D; Fotiadis, Dimitrios I

    2018-05-17

    The electroencephalogram (EEG) is the most prominent means to study epilepsy and capture changes in electrical brain activity that could declare an imminent seizure. In this work, Long Short-Term Memory (LSTM) networks are introduced in epileptic seizure prediction using EEG signals, expanding the use of deep learning algorithms beyond convolutional neural networks (CNN). A pre-analysis is initially performed to find the optimal architecture of the LSTM network by testing several modules and layers of memory units. Based on these results, a two-layer LSTM network is selected to evaluate seizure prediction performance using four different lengths of preictal windows, ranging from 15 min to 2 h. The LSTM model exploits a wide range of features extracted prior to classification, including time and frequency domain features, between-channel EEG cross-correlation and graph theoretic features. The evaluation is performed using long-term EEG recordings from the open CHB-MIT Scalp EEG database. The results suggest that the proposed methodology is able to predict all 185 seizures, providing high rates of seizure prediction sensitivity and low false prediction rates (FPR) of 0.11-0.02 false alarms per hour, depending on the duration of the preictal window. The proposed LSTM-based methodology delivers a significant increase in seizure prediction performance compared to both traditional machine learning techniques and convolutional neural networks that have been previously evaluated in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
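
    A minimal sketch of a two-layer LSTM classifier of the kind described, operating on windows of precomputed EEG features, is shown below; the layer sizes, window length, feature count and random data are illustrative assumptions, not the paper's configuration.

    ```python
    # Hedged sketch of a two-layer LSTM seizure-prediction classifier on feature windows.
    # Shapes, layer sizes and data are hypothetical placeholders.
    import numpy as np
    import tensorflow as tf

    n_windows, timesteps, n_features = 512, 30, 64           # hypothetical dataset shape
    X = np.random.rand(n_windows, timesteps, n_features).astype("float32")
    y = np.random.randint(0, 2, size=n_windows)               # 1 = preictal, 0 = interictal

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(timesteps, n_features)),
        tf.keras.layers.LSTM(128, return_sequences=True),     # first LSTM layer
        tf.keras.layers.LSTM(128),                             # second LSTM layer
        tf.keras.layers.Dense(1, activation="sigmoid"),        # preictal probability
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
    ```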

  1. Optical and laser spectroscopic diagnostics for energy applications

    NASA Astrophysics Data System (ADS)

    Tripathi, Markandey Mani

    The continuing need for greater energy security and energy independence has motivated researchers to develop new energy technologies for better energy resource management and efficient energy usage. The focus of this dissertation is the development of optical (spectroscopic) sensing methodologies for various fuels and energy applications. A fiber-optic NIR sensing methodology was developed for predicting water content in bio-oil. The feasibility of using the designed near infrared (NIR) system for estimating water content in bio-oil was tested by applying multivariate analysis to NIR spectral data. The calibration results demonstrated that the spectral information can successfully predict the bio-oil water content (from 16% to 36%). The effect of ultraviolet (UV) light on the chemical stability of bio-oil was studied by employing laser-induced fluorescence (LIF) spectroscopy. To simulate the UV light exposure, a laser in the UV region (325 nm) was employed for bio-oil excitation. The LIF, as a signature of chemical change, was recorded from the bio-oil. From this study, it was concluded that phenols present in the bio-oil show chemical instability when exposed to UV light. A laser-induced breakdown spectroscopy (LIBS)-based optical sensor was designed, developed, and tested for detection of four important trace impurities in rocket fuel (hydrogen). The sensor can simultaneously measure the concentrations of nitrogen, argon, oxygen, and helium in hydrogen from storage tanks and supply lines. The sensor had estimated lower detection limits of 80 ppm for nitrogen, 97 ppm for argon, 10 ppm for oxygen, and 25 ppm for helium. Chemiluminescence-based spectroscopic diagnostics were performed to measure equivalence ratios in methane-air premixed flames. A partial least-squares regression (PLS-R)-based multivariate sensing methodology was investigated. It was found that the equivalence ratios predicted with the PLS-R-based multivariate calibration model matched the experimentally measured equivalence ratios within 7%. A comparative study was performed for equivalence ratio measurement in atmospheric premixed methane-air flames with ungated LIBS and chemiluminescence spectroscopy. It was reported that LIBS-based calibration, which carries spectroscopic information from a "point-like volume," provides better predictions of equivalence ratios compared to chemiluminescence-based calibration, which is essentially a "line-of-sight" measurement.
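
    The following sketch illustrates the PLS-R calibration step described above with synthetic spectra: a partial least-squares model maps spectral intensities to equivalence ratio and is evaluated on held-out flames; the data, wavelength count and component count are placeholders, not the dissertation's measurements.

    ```python
    # Illustrative PLS regression calibration: map synthetic spectra to equivalence ratio.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_flames, n_wavelengths = 120, 300
    spectra = rng.random((n_flames, n_wavelengths))
    phi = 0.7 + 0.6 * rng.random(n_flames)                  # equivalence ratios 0.7-1.3
    spectra[:, 50] += 2.0 * phi                              # embed a phi-dependent band

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, phi, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    pred = pls.predict(X_te).ravel()
    print("max relative error: %.1f%%" % (100 * np.max(np.abs(pred - y_te) / y_te)))
    ```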

  2. Report on an Assessment of the Application of EPP Results from the Strain Limit Evaluation Procedure to the Prediction of Cyclic Life Based on the SMT Methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jetter, R. I.; Messner, M. C.; Sham, T. -L.

    The goal of the proposed integrated Elastic Perfectly-Plastic (EPP) and Simplified Model Test (SMT) methodology is to incorporate an SMT data-based approach for creep-fatigue damage evaluation into the EPP methodology, avoiding the separate evaluation of creep and fatigue damage and eliminating the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated-temperature cyclic service. This methodology should minimize over-conservatism while properly accounting for localized defects and stress risers. To support the implementation of the proposed methodology and to verify the applicability of the code rules, analytical studies and evaluation of thermomechanical test results continued in FY17. This report presents the results of those studies. An EPP strain limits methodology assessment was based on recent two-bar thermal ratcheting test results on 316H stainless steel in the temperature range of 405 to 705°C. Strain range predictions from the EPP evaluation of the two-bar tests were also evaluated and compared with the experimental results. The role of sustained primary loading on cyclic life was assessed using the results of pressurized SMT data from tests on Alloy 617 at 950°C. A viscoplastic material model was used in an analytic simulation of two-bar tests to compare with EPP strain limits assessments using isochronous stress-strain curves consistent with the viscoplastic material model. A finite element model of a prior 304H stainless steel Oak Ridge National Laboratory (ORNL) nozzle-to-sphere test was developed and used for EPP strain limits and creep-fatigue code case damage evaluations. A theoretical treatment of a recurring issue with convergence criteria for plastic shakedown illustrated the role of computer machine precision in EPP calculations.

  3. Conventionalism and Methodological Standards in Contending with Skepticism about Uncertainty

    NASA Astrophysics Data System (ADS)

    Brumble, K. C.

    2012-12-01

    What it means to measure and interpret confidence and uncertainty in a result is often particular to a specific scientific community and its methodology of verification. Additionally, methodology in the sciences varies greatly across disciplines and scientific communities. Understanding the accuracy of predictions of a particular science thus depends largely upon having an intimate working knowledge of the methods, standards, and conventions utilized and underpinning discoveries in that scientific field. Thus, valid criticism of scientific predictions and discoveries must be conducted by those who are literate in the field in question: they must have intimate working knowledge of the methods of the particular community and of the particular research under question. The interpretation and acceptance of uncertainty is one such shared, community-based convention. In the philosophy of science, this methodological and community-based way of understanding scientific work is referred to as conventionalism. By applying the conventionalism of historian and philosopher of science Thomas Kuhn to recent attacks upon methods of multi-proxy mean temperature reconstructions, I hope to illuminate how climate skeptics and their adherents fail to appreciate the need for community-based fluency in the methodological standards for understanding uncertainty shared by the wider climate science community. Further, I will flesh out a picture of climate science community standards of evidence and statistical argument following the work of philosopher of science Helen Longino. I will describe how failure to appreciate the conventions of professionalism and standards of evidence accepted in the climate science community results in the application of naïve falsification criteria. Appeal to naïve falsification in turn has allowed scientists outside the standards and conventions of the mainstream climate science community to consider themselves and to be judged by climate skeptics as valid critics of particular statistical reconstructions with naïve and misapplied methodological criticism. Examples will include the skeptical responses to multi-proxy mean temperature reconstructions and congressional hearings criticizing the work of Michael Mann et al.'s Hockey Stick.

  4. Application of a GIS-/remote sensing-based approach for predicting groundwater potential zones using a multi-criteria data mining methodology.

    PubMed

    Mogaji, Kehinde Anthony; Lim, Hwee San

    2017-07-01

    This study integrates the application of Dempster-Shafer-driven evidential belief function (DS-EBF) methodology with remote sensing and geographic information system techniques to analyze surface and subsurface data sets for the spatial prediction of groundwater potential in Perak Province, Malaysia. The study used additional data obtained from the records of the groundwater yield rate of approximately 28 bore well locations. The processed surface and subsurface data produced sets of groundwater potential conditioning factors (GPCFs), from which multiple surface hydrologic and subsurface hydrogeologic parameter thematic maps were generated. The bore well location inventories were partitioned randomly in a ratio of 70% (19 wells) for model training to 30% (9 wells) for model testing. Applying the DS-EBF relationship model algorithms to the surface- and subsurface-based GPCF thematic maps and the bore well locations produced two groundwater potential prediction (GPP) maps, based on surface hydrologic and subsurface hydrogeologic characteristics, which established that more than 60% of the study area falls within the moderate-high groundwater potential zones and less than 35% falls within the low potential zones. The estimated uncertainty values, within the range of 0 to 17% for the predicted potential zones, were quantified using the uncertainty algorithm of the model. Validation of the GPP maps using the relative operating characteristic (ROC) curve method yielded success rates of 80 and 68% and prediction rates of 89 and 53% for the subsurface hydrogeologic factor (SUHF)- and surface hydrologic factor (SHF)-based GPP maps, respectively. The study results revealed that the SUHF-based GPP map delineated groundwater potential zones more accurately than the SHF-based GPP map. However, the low degree of uncertainty of the predicted potential zones established the suitability of both GPP maps for future development of groundwater resources in the area. The overall results proved the efficacy of the data mining model and the geospatial technology in groundwater potential mapping.

  5. Predictive information processing in music cognition. A critical review.

    PubMed

    Rohrmeier, Martin A; Koelsch, Stefan

    2012-02-01

    Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straight-forward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, or connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations on different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, the combinations of neural and computational modelling methodologies are at early stages and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Molecular Dynamics Simulations and Kinetic Measurements to Estimate and Predict Protein-Ligand Residence Times.

    PubMed

    Mollica, Luca; Theret, Isabelle; Antoine, Mathias; Perron-Sierra, Françoise; Charton, Yves; Fourquez, Jean-Marie; Wierzbicki, Michel; Boutin, Jean A; Ferry, Gilles; Decherchi, Sergio; Bottegoni, Giovanni; Ducrot, Pierre; Cavalli, Andrea

    2016-08-11

    Ligand-target residence time is emerging as a key drug discovery parameter because it can reliably predict drug efficacy in vivo. Experimental approaches to binding and unbinding kinetics are nowadays available, but we still lack reliable computational tools for predicting kinetics and residence time. Most attempts have been based on brute-force molecular dynamics (MD) simulations, which are CPU-demanding and not yet particularly accurate. We recently reported a new scaled-MD-based protocol, which showed potential for residence time prediction in drug discovery. Here, we further challenged our procedure's predictive ability by applying our methodology to a series of glucokinase activators that could be useful for treating type 2 diabetes mellitus. We combined scaled MD with experimental kinetics measurements and X-ray crystallography, promptly checking the protocol's reliability by directly comparing computational predictions and experimental measures. The good agreement highlights the potential of our scaled-MD-based approach as an innovative method for computationally estimating and predicting drug residence times.

  7. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent comparison between test data and model predictions were observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  8. The energetic cost of walking: a comparison of predictive methods.

    PubMed

    Kramer, Patricia Ann; Sylvester, Adam D

    2011-01-01

    The energy that animals devote to locomotion has been of intense interest to biologists for decades and two basic methodologies have emerged to predict locomotor energy expenditure: those based on metabolic and those based on mechanical energy. Metabolic energy approaches share the perspective that prediction of locomotor energy expenditure should be based on statistically significant proxies of metabolic function, while mechanical energy approaches, which derive from many different perspectives, focus on quantifying the energy of movement. Some controversy exists as to which mechanical perspective is "best", but from first principles all mechanical methods should be equivalent if the inputs to the simulation are of similar quality. Our goals in this paper are 1) to establish the degree to which the various methods of calculating mechanical energy are correlated, and 2) to investigate to what degree the prediction methods explain the variation in energy expenditure. We use modern humans as the model organism in this experiment because their data are readily attainable, but the methodology is appropriate for use in other species. Volumetric oxygen consumption and kinematic and kinetic data were collected on 8 adults while walking at their self-selected slow, normal and fast velocities. Using hierarchical statistical modeling via ordinary least squares and maximum likelihood techniques, the predictive ability of several metabolic and mechanical approaches were assessed. We found that all approaches are correlated and that the mechanical approaches explain similar amounts of the variation in metabolic energy expenditure. Most methods predict the variation within an individual well, but are poor at accounting for variation between individuals. Our results indicate that the choice of predictive method is dependent on the question(s) of interest and the data available for use as inputs. Although we used modern humans as our model organism, these results can be extended to other species.
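
    The sketch below (synthetic data, hypothetical column names) illustrates the hierarchical comparison described: measured metabolic cost is regressed on a mechanical-energy prediction with pooled OLS and with a mixed model that groups repeated trials by subject, so within- and between-individual variation can be separated.

    ```python
    # Sketch of a pooled OLS fit versus a random-intercept mixed model for comparing
    # a mechanical-energy predictor against measured metabolic cost. Data are synthetic.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    subjects = np.repeat(np.arange(8), 3)                        # 8 adults, 3 walking speeds
    mech = rng.random(24) * 100 + 150                            # mechanical-energy predictor (W)
    subj_offset = rng.standard_normal(8)[subjects] * 15          # between-individual variation
    vo2 = 1.2 * mech + subj_offset + rng.standard_normal(24) * 5 # measured metabolic cost (W)

    df = pd.DataFrame({"subject": subjects, "mech": mech, "vo2": vo2})
    ols_fit = smf.ols("vo2 ~ mech", data=df).fit()                              # pooled OLS
    mixed_fit = smf.mixedlm("vo2 ~ mech", data=df, groups=df["subject"]).fit()  # random intercepts
    print("pooled R^2:", ols_fit.rsquared)
    print(mixed_fit.summary())
    ```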

  9. Management of health care expenditure by soft computing methodology

    NASA Astrophysics Data System (ADS)

    Maksimović, Goran; Jović, Srđan; Jovanović, Radomir; Aničić, Obrad

    2017-01-01

    In this study, health care expenditure was managed using a soft computing methodology. The main goal was to predict gross domestic product (GDP) from several factors of health care expenditure. Soft computing methodologies were applied because GDP prediction is a very complex task. The performance of the proposed predictors was confirmed by the simulation results. According to the results, support vector regression (SVR) has better prediction accuracy than the other soft computing methodologies considered. These methods benefit from global optimization capabilities that help avoid local-minimum issues.
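
    A minimal sketch of this comparison, with synthetic data and illustrative feature names, is given below: GDP is predicted from a few health-expenditure factors with support vector regression and with a linear baseline, and the two are compared by cross-validated R².

    ```python
    # Minimal sketch: predict GDP from health care expenditure factors with SVR versus
    # a linear baseline. Features and data are synthetic placeholders.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.random((60, 4))         # e.g. public, private, per-capita, out-of-pocket spending
    gdp = 2.0 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] + 0.1 * rng.standard_normal(60)

    for name, model in [("SVR", SVR(kernel="rbf", C=10.0)), ("Linear", LinearRegression())]:
        scores = cross_val_score(model, X, gdp, cv=5, scoring="r2")
        print(name, round(scores.mean(), 3))
    ```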

  10. Modeling Students' Problem Solving Performance in the Computer-Based Mathematics Learning Environment

    ERIC Educational Resources Information Center

    Lee, Young-Jin

    2017-01-01

    Purpose: The purpose of this paper is to develop a quantitative model of problem solving performance of students in the computer-based mathematics learning environment. Design/methodology/approach: Regularized logistic regression was used to create a quantitative model of problem solving performance of students that predicts whether students can…
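
    As a generic illustration of the design described (not the paper's data or features), the sketch below fits an L2-regularized logistic regression that predicts whether a student solves a problem from hypothetical log-derived features.

    ```python
    # Hedged sketch of a regularized logistic regression model of problem-solving
    # performance; the features (attempts, hints, time on task) and data are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 300
    attempts = rng.integers(1, 6, size=n)
    hints = rng.integers(0, 4, size=n)
    time_on_task = rng.random(n) * 10
    logit = 1.5 - 0.4 * attempts - 0.6 * hints + 0.1 * time_on_task
    solved = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # 1 = problem solved

    X = np.column_stack([attempts, hints, time_on_task])
    X_tr, X_te, y_tr, y_te = train_test_split(X, solved, random_state=0)
    clf = LogisticRegression(penalty="l2", C=1.0).fit(X_tr, y_tr)     # L2-regularized
    print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
    ```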

  11. Logic-based models in systems biology: a predictive and parameter-free network analysis method†

    PubMed Central

    Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.

    2012-01-01

    Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820
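
    The toy example below illustrates what a parameter-free logic-based model looks like in practice: node states are Boolean and are updated synchronously by logic rules until the trajectory revisits a state; the small signalling circuit and its rules are hypothetical.

    ```python
    # Small illustration of a parameter-free Boolean (logic-based) network model:
    # ON/OFF node states updated synchronously by logic rules. The circuit is hypothetical.
    def step(state):
        """One synchronous update of a toy signalling circuit."""
        return {
            "ligand":    state["ligand"],                              # fixed external input
            "receptor":  state["ligand"],                              # active iff ligand present
            "output":    state["receptor"] and not state["inhibitor"],
            "inhibitor": state["output"],                              # negative feedback
        }

    state = {"ligand": True, "receptor": False, "output": False, "inhibitor": False}
    seen = []
    while state not in seen:              # iterate until a previously seen state recurs
        seen.append(state)
        state = step(state)
    print("attractor (steady state or cycle) reached after", len(seen), "steps:", state)
    ```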

  12. Minimum number of measurements for evaluating Bertholletia excelsa.

    PubMed

    Baldoni, A B; Tonini, H; Tardin, F D; Botelho, S C C; Teodoro, P E

    2017-09-27

    Repeatability studies on fruit species are of great importance to identify the minimum number of measurements necessary to accurately select superior genotypes. This study aimed to identify the most efficient method to estimate the repeatability coefficient (r) and predict the minimum number of measurements needed for a more accurate evaluation of Brazil nut tree (Bertholletia excelsa) genotypes based on fruit yield. For this, we assessed the number of fruits and dry mass of seeds of 75 Brazil nut genotypes, from native forest, located in the municipality of Itaúba, MT, for 5 years. To better estimate r, four procedures were used: analysis of variance (ANOVA), principal component analysis based on the correlation matrix (CPCOR), principal component analysis based on the phenotypic variance and covariance matrix (CPCOV), and structural analysis based on the correlation matrix (mean r - AECOR). There was a significant effect of genotypes and measurements, which reveals the need to study the minimum number of measurements for selecting superior Brazil nut genotypes for a production increase. Estimates of r by ANOVA were lower than those observed with the principal component methodology and close to AECOR. The CPCOV methodology provided the highest estimate of r, which resulted in a lower number of measurements needed to identify superior Brazil nut genotypes for the number of fruits and dry mass of seeds. Based on this methodology, three measurements are necessary to predict the true value of the Brazil nut genotypes with a minimum accuracy of 85%.
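
    The sketch below shows an ANOVA-based variant of the repeatability estimate and the standard formula relating r to the number of measurements needed for a target determination coefficient; both the one-way ANOVA form and the formula are assumptions for illustration and do not reproduce the principal-component variants used in the study, and the data are synthetic.

    ```python
    # Hedged sketch: one-way ANOVA repeatability coefficient and the commonly used
    # minimum-number-of-measurements formula (assumed here). Data are synthetic.
    import numpy as np

    def repeatability_anova(data):
        """data: genotypes x measurements matrix. Returns the repeatability coefficient r."""
        g, m = data.shape
        grand = data.mean()
        ms_g = m * ((data.mean(axis=1) - grand) ** 2).sum() / (g - 1)                   # between genotypes
        ms_e = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (m - 1))   # residual
        var_g = (ms_g - ms_e) / m
        return var_g / (var_g + ms_e)

    def min_measurements(r, accuracy=0.85):
        """Measurements needed to reach a target coefficient of determination (assumed formula)."""
        return accuracy * (1 - r) / ((1 - accuracy) * r)

    rng = np.random.default_rng(0)
    genotype_effect = rng.standard_normal(75)[:, None] * 3.0           # 75 genotypes
    data = 20 + genotype_effect + rng.standard_normal((75, 5))         # 5 yearly measurements
    r = repeatability_anova(data)
    print(f"r = {r:.2f}, measurements for 85% accuracy: {min_measurements(r):.1f}")
    ```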

  13. Methodology for Uncertainty Analysis of Dynamic Computational Toxicology Models

    EPA Science Inventory

    The task of quantifying the uncertainty in both parameter estimates and model predictions has become more important with the increased use of dynamic computational toxicology models by the EPA. Dynamic toxicological models include physiologically-based pharmacokinetic (PBPK) mode...

  14. Neural network based short-term load forecasting using weather compensation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, T.W.S.; Leung, C.T.

    This paper presents a novel technique for electric load forecasting based on neural weather compensation. The proposed method is a nonlinear generalization of the Box-Jenkins approach for nonstationary time-series prediction. A weather compensation neural network is implemented for one-day-ahead electric load forecasting. The weather compensation neural network can accurately predict the change of actual electric load consumption from the previous day. The results, based on Hong Kong Island historical load demand, indicate that this methodology is capable of providing a more accurate load forecast, with a 0.9% reduction in forecast error.
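
    A minimal sketch of the weather-compensation idea (not the authors' network) is shown below: a small neural network learns the mapping from day-to-day weather change to load change, and the forecast is the previous day's load plus the predicted change; the data and network size are placeholders.

    ```python
    # Illustrative sketch of weather-compensated one-day-ahead load forecasting.
    # Data and network configuration are synthetic placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    days = 400
    temp = 25 + 5 * np.sin(np.arange(days) / 20) + rng.standard_normal(days)
    load = 500 + 20 * temp + 10 * rng.standard_normal(days)        # synthetic daily peak load

    d_temp = np.diff(temp).reshape(-1, 1)                           # weather change from previous day
    d_load = np.diff(load)                                          # load change to be compensated

    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    net.fit(d_temp[:-30], d_load[:-30])                             # train on all but the last 30 days

    forecast = load[-31:-1] + net.predict(d_temp[-30:])             # previous day's load + compensation
    mape = np.mean(np.abs(forecast - load[-30:]) / load[-30:]) * 100
    print(f"MAPE on held-out days: {mape:.2f}%")
    ```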

  15. The Biopharmaceutics Classification System: subclasses for in vivo predictive dissolution (IPD) methodology and IVIVC.

    PubMed

    Tsume, Yasuhiro; Mudie, Deanna M; Langguth, Peter; Amidon, Greg E; Amidon, Gordon L

    2014-06-16

    The Biopharmaceutics Classification System (BCS) has found widespread utility in drug discovery, product development and drug product regulatory sciences. The classification scheme captures the two most significant factors influencing oral drug absorption, solubility and intestinal permeability, and it has proven to be a very useful and widely accepted starting point for drug product development and drug product regulation. The mechanistic basis of the BCS approach has, no doubt, contributed to its widespread acceptance and utility. Nevertheless, underneath the simplicity of BCS are many detailed complexities, both in vitro and in vivo, which must be evaluated and investigated for any given drug and drug product. In this manuscript we propose a simple extension of the BCS classes to include sub-specification of acid (a), base (b) and neutral (c) for classes II and IV. Sub-classification for Classes I and III (high solubility drugs as currently defined) is generally not needed except perhaps in borderline solubility cases. It is well known that the pKa physical property of a drug (API) has a significant impact on the aqueous solubility and dissolution of drug from the drug product, both in vitro and in vivo, for BCS Class II and IV acids and bases, and this is the basis, we propose, for a sub-classification extension of the original BCS classification. This BCS sub-classification is particularly important for in vivo predictive dissolution methodology development due to the complex and variable in vivo environment in the gastrointestinal tract, with its changing pH, buffer capacity, luminal volume, surfactant luminal conditions, permeability profile along the gastrointestinal tract and variable transit and fasted and fed states. We believe this sub-classification is a step toward developing a more science-based mechanistic in vivo predictive dissolution (IPD) methodology. Such a dissolution methodology can be used by development scientists to assess the likelihood of a formulation and dosage form functioning as desired in humans, can be optimized along with parallel human pharmacokinetic studies to set a dissolution methodology for Quality by Design (QbD) and in vitro-in vivo correlations (IVIVC), and ultimately can be used as a basis for a dissolution standard that will ensure continued in vivo product performance. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. The Biopharmaceutics Classification System: Subclasses for in vivo predictive dissolution (IPD) methodology and IVIVC

    PubMed Central

    Tsume, Yasuhiro; Mudie, Deanna M.; Langguth, Peter; Amidon, Greg E.; Amidon, Gordon L.

    2014-01-01

    The Biopharmaceutics Classification System (BCS) has found widespread utility in drug discovery, product development and drug product regulatory sciences. The classification scheme captures the two most significant factors influencing oral drug absorption, solubility and intestinal permeability, and it has proven to be a very useful and widely accepted starting point for drug product development and drug product regulation. The mechanistic basis of the BCS approach has, no doubt, contributed to its widespread acceptance and utility. Nevertheless, underneath the simplicity of BCS are many detailed complexities, both in vitro and in vivo, which must be evaluated and investigated for any given drug and drug product. In this manuscript we propose a simple extension of the BCS classes to include sub-specification of acid (a), base (b) and neutral (c) for classes II and IV. Sub-classification for Classes I and III (high solubility drugs as currently defined) is generally not needed except perhaps in border line solubility cases. It is well known that the pKa physical property of a drug (API) has a significant impact on the aqueous solubility and dissolution of drug from the drug product, both in vitro and in vivo, for BCS Class II and IV acids and bases, and this is the basis, we propose, for a sub-classification extension of the original BCS classification. This BCS sub-classification is particularly important for in vivo predictive dissolution methodology development due to the complex and variable in vivo environment in the gastrointestinal tract, with its changing pH, buffer capacity, luminal volume, surfactant luminal conditions, permeability profile along the gastrointestinal tract and variable transit and fasted and fed states. We believe this sub-classification is a step toward developing a more science-based mechanistic in vivo predictive dissolution (IPD) methodology. Such a dissolution methodology can be used by development scientists to assess the likelihood of a formulation and dosage form functioning as desired in humans, can be optimized along with parallel human pharmacokinetic studies to set a dissolution methodology for Quality by Design (QbD) and in vitro–in vivo correlations (IVIVC), and ultimately can be used as a basis for a dissolution standard that will ensure continued in vivo product performance. PMID:24486482

  17. Seismo-induced effects in the near-earth space: Combined ground and space investigations as a contribution to earthquake prediction

    NASA Astrophysics Data System (ADS)

    Sgrigna, V.; Buzzi, A.; Conti, L.; Picozza, P.; Stagni, C.; Zilpimiani, D.

    2007-02-01

    The paper offers a few methodological suggestions for deterministic earthquake prediction studies based on combined ground-based and space observations of earthquake precursors. What is still lacking is the demonstration of a causal relationship, with explained physical processes, between data gathered simultaneously and continuously by space observations and ground-based measurements. Coordinated space and ground-based observations require available test sites on the Earth's surface to correlate ground data, collected by appropriate networks of instruments, with space data detected on board LEO satellites. For this purpose, a new result reported in the paper is an original and specific space mission project (ESPERIA) and two instruments of its payload. The ESPERIA space project was performed for the Italian Space Agency, and three ESPERIA instruments (the ARINA and LAZIO particle detectors and the EGLE search-coil magnetometer) have been built and tested in space. The EGLE experiment started on April 15, 2005 on board the ISS, within the ENEIDE mission; the launch of ARINA occurred on June 15, 2006, on board the RESURS DK-1 Russian LEO satellite. As an introduction to and justification for these experiments, the paper clarifies some basic concepts and critical methodological aspects concerning deterministic and statistical approaches and their use in earthquake prediction. We also offer the scientific community a few critical hints based on our personal experience in the field and propose a joint study devoted to earthquake prediction and warning.

  18. Entropy Filtered Density Function for Large Eddy Simulation of Turbulent Reacting Flows

    NASA Astrophysics Data System (ADS)

    Safari, Mehdi

    Analysis of local entropy generation is an effective means to optimize the performance of energy and combustion systems by minimizing the irreversibilities in transport processes. Large eddy simulation (LES) is employed to describe entropy transport and generation in turbulent reacting flows. The entropy transport equation in LES contains several unclosed terms. These are the subgrid scale (SGS) entropy flux and entropy generation caused by irreversible processes: heat conduction, mass diffusion, chemical reaction and viscous dissipation. The SGS effects are taken into account using a novel methodology based on the filtered density function (FDF). This methodology, entitled entropy FDF (En-FDF), is developed and utilized in the form of joint entropy-velocity-scalar-turbulent frequency FDF and the marginal scalar-entropy FDF, both of which contain the chemical reaction effects in a closed form. The former constitutes the most comprehensive form of the En-FDF and provides closure for all the unclosed filtered moments. This methodology is applied for LES of a turbulent shear layer involving transport of passive scalars. Predictions show favorable agreement with the data generated by direct numerical simulation (DNS) of the same layer. The marginal En-FDF accounts for entropy generation effects as well as scalar and entropy statistics. This methodology is applied to a turbulent nonpremixed jet flame (Sandia Flame D) and predictions are validated against experimental data. In both flows, sources of irreversibility are predicted and analyzed.

  19. Predictive Methodology for Delamination Growth in Laminated Composites Part 1: Theoretical Development and Preliminary Experimental Results

    DOT National Transportation Integrated Search

    1998-04-01

    A methodology is presented for the prediction of delamination growth in laminated structures. The methodology is aimed at overcoming computational difficulties in the determination of energy release rate and mode mix. It also addresses the issue that...

  20. Computational Simulation of Continuous Fiber-Reinforced Ceramic Matrix Composites Behavior

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.; Chamis, Christos C.; Mital, Subodh K.

    1996-01-01

    This report describes a methodology which predicts the behavior of ceramic matrix composites and has been incorporated in the computational tool CEMCAN (CEramic Matrix Composite ANalyzer). The approach combines micromechanics with a unique fiber substructuring concept. In this new concept, the conventional unit cell (the smallest representative volume element of the composite) of the micromechanics approach is modified by substructuring it into several slices and developing the micromechanics-based equations at the slice level. The methodology also takes into account nonlinear ceramic matrix composite (CMC) behavior due to temperature and the fracture initiation and progression. Important features of the approach and its effectiveness are described by using selected examples. Comparisons of predictions and limited experimental data are also provided.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foley, M.G.; Zimmerman, D.A.; Doesburg, J.M.

    A literature review was conducted to identify methodologies that could be used to interpret paleohydrologic environments. Paleohydrology is the study of past hydrologic systems or of the past behavior of an existing hydrologic system. The purpose of the review was to evaluate how well these methodologies could be applied to the siting of low-level radioactive waste facilities. The computer literature search queried five bibliographic data bases containing over five million citations of technical journals, books, conference papers, and reports. Two data-base searches (United States Geological Survey - USGS) and a manual search were also conducted. The methodologies were examined for data requirements and sensitivity limits. Paleohydrologic interpretations are uncertain because of the effects of time on hydrologic and geologic systems and because of the complexity of fluvial systems. Paleoflow determinations appear in many cases to be order-of-magnitude estimates. However, the methodologies identified in this report mitigate this uncertainty when used collectively as well as independently. That is, the data from individual methodologies can be compared or combined to corroborate hydrologic predictions. In this manner, paleohydrologic methodologies are viable tools to assist in evaluating the likely future hydrology of low-level radioactive waste sites.

  2. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    PubMed

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility into three methods for predicting protein interactions based on similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it neither improves these methods, nor the original mirrortree method, in predicting other types of interactions. That improvement comes at no cost in terms of applicability since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.
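
    As a minimal sketch of the underlying computation (hypothetical inputs and helper names, not the authors' code), the snippet below restricts two ortholog alignments to columns whose predicted relative solvent accessibility exceeds a cutoff, builds inter-ortholog distance matrices, and scores the candidate interaction by the Pearson correlation of the two matrices, in the spirit of mirrortree-style methods.

    ```python
    import numpy as np
    from itertools import combinations

    def distance_matrix(alignment, keep_columns):
        """Fraction of mismatches between every pair of aligned orthologs,
        computed only over the retained (e.g. predicted-surface) columns."""
        sub = alignment[:, keep_columns]
        n = sub.shape[0]
        d = np.zeros((n, n))
        for i, j in combinations(range(n), 2):
            d[i, j] = d[j, i] = np.mean(sub[i] != sub[j])
        return d

    def mirrortree_score(aln_a, aln_b, rsa_a, rsa_b, rsa_cutoff=0.25):
        """Correlation of evolutionary distance matrices of two protein families
        (rows = same set of species), using only columns with predicted relative
        solvent accessibility above rsa_cutoff."""
        da = distance_matrix(aln_a, np.where(rsa_a >= rsa_cutoff)[0])
        db = distance_matrix(aln_b, np.where(rsa_b >= rsa_cutoff)[0])
        iu = np.triu_indices(da.shape[0], k=1)   # upper triangle, no diagonal
        return np.corrcoef(da[iu], db[iu])[0, 1]

    # toy example: 5 species, 8 alignment columns per family, random data
    rng = np.random.default_rng(0)
    aln1 = rng.integers(0, 4, size=(5, 8))
    aln2 = rng.integers(0, 4, size=(5, 8))
    rsa1 = rng.random(8)
    rsa2 = rng.random(8)
    print(mirrortree_score(aln1, aln2, rsa1, rsa2))
    ```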

  3. First principles crystal engineering of nonlinear optical materials. I. Prototypical case of urea

    NASA Astrophysics Data System (ADS)

    Masunov, Artëm E.; Tannu, Arman; Dyakov, Alexander A.; Matveeva, Anastasia D.; Freidzon, Alexandra Ya.; Odinokov, Alexey V.; Bagaturyants, Alexander A.

    2017-06-01

    Crystalline materials with nonlinear optical (NLO) properties are critically important for several technological applications, including nanophotonic and second harmonic generation devices. Urea is often considered to be a standard NLO material, due to the combination of non-centrosymmetric crystal packing and capacity for intramolecular charge transfer. Various approaches to crystal engineering of non-centrosymmetric molecular materials have been reported in the literature. Here we propose using global lattice energy minimization to predict the crystal packing from first principles. We developed a methodology that includes the following: (1) parameter derivation for the polarizable force field AMOEBA; (2) local minimizations of crystal structures with these parameters, combined with the evolutionary algorithm for a global minimum search, implemented in the program USPEX; (3) filtering out duplicate polymorphs produced; (4) reoptimization and final ranking based on density functional theory (DFT) with many-body dispersion (MBD) correction; and (5) prediction of the second-order susceptibility tensor by the finite field approach. This methodology was applied to predict virtual urea polymorphs. After filtering based on packing similarity, only two distinct packing modes were predicted: one experimental and one hypothetical. DFT + MBD ranking established the non-centrosymmetric crystal packing as the global minimum, in agreement with the experiment. The finite field approach was used to predict the nonlinear susceptibility, and H-bonding was found to account for a 2.5-fold increase from the molecular hyperpolarizability to the bulk value.
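
    The finite-field step can be illustrated with a central-difference estimate of a first-hyperpolarizability component from dipole moments computed at a few applied field strengths; the dipole function below is a toy stand-in for an electronic-structure call, so the whole example is hypothetical.

    ```python
    import numpy as np

    def dipole(field):
        """Placeholder for an electronic-structure dipole evaluation mu(F);
        a toy polynomial response is used here so the script runs standalone."""
        mu0, alpha, beta = 1.5, 4.0, 12.0
        return mu0 + alpha * field + 0.5 * beta * field**2

    def beta_finite_field(f=1e-3):
        """Central-difference estimate of a diagonal beta component from mu(F):
        mu(F) ~ mu0 + alpha*F + (1/2)*beta*F^2  =>  beta ~ (mu(+F)+mu(-F)-2*mu(0))/F^2."""
        return (dipole(+f) + dipole(-f) - 2.0 * dipole(0.0)) / f**2

    print(beta_finite_field())   # recovers ~12.0 for the toy response
    ```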

  4. Effects of surface chemistry on hot corrosion life

    NASA Technical Reports Server (NTRS)

    Fryxell, R. E.; Gupta, B. K.

    1984-01-01

    A hot corrosion life prediction methodology based on a combination of laboratory test data and field-service turbine components showing evidence of hot corrosion was examined. Components were evaluated by optical metallography, scanning electron microscopy (SEM), and electron microprobe (EMP) examination.

  5. Damage tolerance and arrest characteristics of pressurized graphite/epoxy tape cylinders

    NASA Technical Reports Server (NTRS)

    Ranniger, Claudia U.; Lagace, Paul A.; Graves, Michael J.

    1993-01-01

    An investigation of the damage tolerance and damage arrest characteristics of internally pressurized graphite/epoxy tape cylinders with axial notches was conducted. An existing failure prediction methodology, developed and verified for quasi-isotropic graphite/epoxy fabric cylinders, was investigated for applicability to general tape layups. In addition, the effect of external circumferential stiffening bands on the direction of fracture path propagation and possible damage arrest was examined. Quasi-isotropic (90/0/±45)s and structurally anisotropic (±45/0)s and (±45/90)s coupons and cylinders were constructed from AS4/3501-6 graphite/epoxy tape. Notched and unnotched coupons were tested in tension and the data correlated using the equation of Mar and Lin. Cylinders with through-thickness axial slits were pressurized to failure, achieving a far-field two-to-one biaxial stress state. Experimental failure pressures of the (90/0/±45)s cylinders agreed with predicted values for all cases except the specimen with the smallest slit. However, the failure pressures of the structurally anisotropic cylinders, (±45/0)s and (±45/90)s, were above the values predicted by the methodology in all cases. Factors possibly neglected by the predictive methodology include structural coupling in the laminates and axial loading of the cylindrical specimens. Furthermore, applicability of the predictive methodology depends on the similarity of the initial fracture modes in coupon and cylinder specimens of the same laminate type. The existence of splitting, which may be exacerbated by the axial loading in the cylinders, shows that this condition is not always met. The circumferential stiffeners were generally able to redirect fracture propagation from longitudinal to circumferential. A quantitative assessment of stiffener effectiveness in containing the fracture, based on cylinder radius, slit size, and bending stiffnesses of the laminates, is proposed.
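
    For context, the Mar-Lin correlation referred to above relates the far-field failure stress of a notched laminate to the notch length 2a through a composite fracture parameter H_c and a singularity exponent m; written in its usual form (the fitted constants of this study are not reproduced here):

    $$ \sigma_{\infty}^{f} \;=\; H_{c}\,(2a)^{-m}. $$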

  6. Assessment of precursory information in seismo-electromagnetic phenomena

    NASA Astrophysics Data System (ADS)

    Han, P.; Hattori, K.; Zhuang, J.

    2017-12-01

    Previous statistical studies showed that there were correlations between seismo-electromagnetic phenomena and sizeable earthquakes in Japan. In this study, utilizing Molchan's error diagram, we evaluate whether these phenomena contain precursory information and discuss how they can be used in short-term forecasting of large earthquake events. In practice, for a given series of precursory signals and related earthquake events, each prediction strategy is characterized by the leading time of alarms, the length of the alarm window, and the alarm radius (area) and magnitude. The leading time is the time length between a detected anomaly and its following alarm, and the alarm window is the duration that an alarm lasts. The alarm radius and magnitude are the maximum predictable distance and minimum predictable magnitude of earthquake events, respectively. We introduce the modified probability gain (PG') and the probability difference (D') to quantify the forecasting performance and to explore the optimal prediction parameters for a given electromagnetic observation. The above methodology is first applied to ULF magnetic data and GPS-TEC data. The results show that earthquake predictions based on electromagnetic anomalies are significantly better than random guesses, indicating that the data contain potentially useful precursory information. Meanwhile, we reveal the optimal prediction parameters for both observations. The methodology proposed in this study could also be applied to other pre-earthquake phenomena to find out whether they carry precursory information, and then, on this basis, explore the optimal alarm parameters for practical short-term forecasting.
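
    A minimal sketch of the kind of evaluation described (with invented alarm and event series; the study's alarm-construction details are not reproduced) computes, for a binary alarm time series and an earthquake catalogue, the miss rate, the fraction of time occupied by alarms, and a probability gain defined as hit rate divided by alarm-time fraction, which are the basic ingredients of a Molchan error diagram.

    ```python
    import numpy as np

    def molchan_point(alarm, events):
        """alarm: boolean array, True where an alarm is on (one entry per time bin).
        events: boolean array, True in bins containing a target earthquake.
        Returns (miss_rate, alarm_fraction, probability_gain)."""
        alarm = np.asarray(alarm, bool)
        events = np.asarray(events, bool)
        n_events = events.sum()
        hits = (alarm & events).sum()
        miss_rate = 1.0 - hits / n_events
        tau = alarm.mean()                      # fraction of time covered by alarms
        gain = (hits / n_events) / tau if tau > 0 else np.nan
        return miss_rate, tau, gain

    # toy example: 1000 daily bins, alarms on about 10% of days, 5 events
    rng = np.random.default_rng(1)
    alarm = rng.random(1000) < 0.10
    events = np.zeros(1000, bool)
    events[rng.choice(1000, 5, replace=False)] = True
    print(molchan_point(alarm, events))
    ```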

  7. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  8. A Damage-Dependent Finite Element Analysis for Fiber-Reinforced Composite Laminates

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.

    1998-01-01

    A progressive damage methodology has been developed to predict damage growth and residual strength of fiber-reinforced composite structure with through penetrations such as a slit. The methodology consists of a damage-dependent constitutive relationship based on continuum damage mechanics. Damage is modeled using volume averaged strain-like quantities known as internal state variables and is represented in the equilibrium equations as damage induced force vectors instead of the usual degradation and modification of the global stiffness matrix.

  9. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
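
    The 'internal-external cross-validation' idea mentioned above can be sketched as a leave-one-study-out loop: the model is developed on all but one study and validated on the omitted study, rotating through the studies. The data layout and predictor columns below are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def internal_external_cv(X, y, study_id):
        """Leave one study out at a time: develop the risk model on the remaining
        studies, validate on the omitted study, and report per-study AUC."""
        aucs = {}
        for s in np.unique(study_id):
            train, test = study_id != s, study_id == s
            model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
            aucs[s] = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
        return aucs

    # toy IPD: 4 studies, 2 predictors
    rng = np.random.default_rng(2)
    X = rng.normal(size=(800, 2))
    study_id = np.repeat(np.arange(4), 200)
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=800) > 0).astype(int)
    print(internal_external_cv(X, y, study_id))
    ```

    Separate study intercepts, as recommended above, could be added by one-hot encoding study_id into the training design matrix.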

  10. Quantifying data worth toward reducing predictive uncertainty

    USGS Publications Warehouse

    Dausman, A.M.; Doherty, J.; Langevin, C.D.; Sukop, M.C.

    2010-01-01

    The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement. Journal compilation © 2010 National Ground Water Association.
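
    In linear (first-order) analyses of this kind, the worth of a candidate dataset is often quantified through the drop in predictive variance given by a Schur-complement expression of the following form (a generic statement of the approach, not necessarily the exact formulation used in this study), where y is the sensitivity of the prediction to the parameters k, X is the observation sensitivity matrix, C(k) the prior parameter covariance and C(epsilon) the measurement-noise covariance:

    $$ \sigma^{2}_{s} \;=\; \mathbf{y}^{T}\mathbf{C}(\mathbf{k})\,\mathbf{y} \;-\; \mathbf{y}^{T}\mathbf{C}(\mathbf{k})\mathbf{X}^{T}\left[\mathbf{X}\,\mathbf{C}(\mathbf{k})\,\mathbf{X}^{T}+\mathbf{C}(\boldsymbol{\epsilon})\right]^{-1}\mathbf{X}\,\mathbf{C}(\mathbf{k})\,\mathbf{y}. $$

    The worth of an observation set is then the reduction in this predictive variance when its rows are added to (or removed from) X.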

  11. Integrating uniform design and response surface methodology to optimize thiacloprid suspension

    PubMed Central

    Li, Bei-xing; Wang, Wei-chang; Zhang, Xian-peng; Zhang, Da-xia; Mu, Wei; Liu, Feng

    2017-01-01

    A model 25% suspension concentrate (SC) of thiacloprid was adopted to evaluate an integrative approach of uniform design and response surface methodology. Tersperse2700, PE1601, xanthan gum and veegum were the four experimental factors, and the aqueous separation ratio and viscosity were the two dependent variables. Linear and quadratic polynomial models of stepwise regression and partial least squares were adopted to test the fit of the experimental data. Verification tests revealed satisfactory agreement between the experimental and predicted data. The measured values for the aqueous separation ratio and viscosity were 3.45% and 278.8 mPa·s, respectively, and the relative errors of the predicted values were 9.57% and 2.65%, respectively (prepared under the proposed conditions). Comprehensive benefits could also be obtained by appropriately adjusting the amount of certain adjuvants based on practical requirements. Integrating uniform design and response surface methodology is an effective strategy for optimizing SC formulas. PMID:28383036

  12. A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data.

    PubMed

    Vanegas, Fernando; Bratanov, Dmitry; Powell, Kevin; Weiss, John; Gonzalez, Felipe

    2018-01-17

    Recent advances in remote sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a UAV remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used (the sensors, the UAV, and the flight operations), the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne with ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remotely sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such a methodology would provide researchers, agronomists, and UAV practitioners with reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications.

  13. Thermal Hydraulics Design and Analysis Methodology for a Solid-Core Nuclear Thermal Rocket Engine Thrust Chamber

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Canabal, Francisco; Chen, Yen-Sen; Cheng, Gary; Ito, Yasushi

    2013-01-01

    Nuclear thermal propulsion is a leading candidate for in-space propulsion for human Mars missions. This chapter describes a thermal hydraulics design and analysis methodology developed at the NASA Marshall Space Flight Center in support of the nuclear thermal propulsion development effort. The objective of this campaign is to bridge the design methods of the Rover/NERVA era with a modern computational fluid dynamics and heat transfer methodology, in order to predict the thermal, fluid, and hydrogen environments of a hypothetical solid-core nuclear thermal engine, the Small Engine, designed in the 1960s. The computational methodology is based on an unstructured-grid, pressure-based, all-speeds, chemically reacting, computational fluid dynamics and heat transfer platform, while formulations of flow and heat transfer through porous and solid media were implemented to describe those of the hydrogen flow channels inside the solid core. Design analyses of a single flow element and the entire solid-core thrust chamber of the Small Engine were performed, and the results are presented herein.

  14. CBT Specific Process in Exposure-Based Treatments: Initial Examination in a Pediatric OCD Sample

    PubMed Central

    Benito, Kristen Grabill; Conelea, Christine; Garcia, Abbe M.; Freeman, Jennifer B.

    2012-01-01

    Cognitive-Behavioral theory and empirical support suggest that optimal activation of fear is a critical component for successful exposure treatment. Using this theory, we developed coding methodology for measuring CBT-specific process during exposure. We piloted this methodology in a sample of young children (N = 18) who previously received CBT as part of a randomized controlled trial. Results supported the preliminary reliability and predictive validity of coding variables with 12 week and 3 month treatment outcome data, generally showing results consistent with CBT theory. However, given our limited and restricted sample, additional testing is warranted. Measurement of CBT-specific process using this methodology may have implications for understanding mechanism of change in exposure-based treatments and for improving dissemination efforts through identification of therapist behaviors associated with improved outcome. PMID:22523609

  15. Set-membership fault detection under noisy environment with application to the detection of abnormal aircraft control surface positions

    NASA Astrophysics Data System (ADS)

    El Houda Thabet, Rihab; Combastel, Christophe; Raïssi, Tarek; Zolghadri, Ali

    2015-09-01

    The paper develops a set membership detection methodology which is applied to the detection of abnormal positions of aircraft control surfaces. Robust and early detection of such abnormal positions is an important issue for early system reconfiguration and overall optimisation of aircraft design. In order to improve fault sensitivity while ensuring a high level of robustness, the method combines a data-driven characterisation of noise and a model-driven approach based on interval prediction. The efficiency of the proposed methodology is illustrated through simulation results obtained based on data recorded in several flight scenarios of a highly representative aircraft benchmark.

  16. Streakline-based closed-loop control of a bluff body flow

    NASA Astrophysics Data System (ADS)

    Roca, Pablo; Cammilleri, Ada; Duriez, Thomas; Mathelin, Lionel; Artana, Guillermo

    2014-04-01

    A novel closed-loop control methodology is introduced to stabilize a cylinder wake flow based on images of streaklines. Passive scalar tracers are injected upstream of the cylinder and their concentration is monitored downstream at certain image sectors of the wake. An autoregressive with exogenous inputs (ARX) mathematical model is built from these images, and a generalized predictive control (GPC) algorithm is used to compute the actuation required to stabilize the wake by adding momentum tangentially to the cylinder wall through plasma actuators. The methodology is new and has real-world applications. It is demonstrated on a numerical simulation, and the results show that good performance is achieved.
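
    The identification step can be illustrated by a least-squares fit of an ARX model relating the actuation input to a streakline-concentration output; the signal names, model orders and toy data below are hypothetical.

    ```python
    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        """Least-squares ARX fit: y[k] = sum_i a_i*y[k-i] + sum_j b_j*u[k-j]."""
        n = max(na, nb)
        rows, targets = [], []
        for k in range(n, len(y)):
            rows.append(np.r_[y[k-na:k][::-1], u[k-nb:k][::-1]])
            targets.append(y[k])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta[:na], theta[na:]           # (a coefficients, b coefficients)

    # toy data: a known second-order ARX system driven by random actuation
    rng = np.random.default_rng(3)
    u = rng.normal(size=2000)
    y = np.zeros(2000)
    for k in range(2, 2000):
        y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.3*u[k-1] + 0.1*u[k-2] + 0.01*rng.normal()
    a, b = fit_arx(u, y)
    print(a, b)   # should be close to [1.2, -0.5] and [0.3, 0.1]
    ```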

  17. A methodology to ensure and improve accuracy of Ki67 labelling index estimation by automated digital image analysis in breast cancer tissue.

    PubMed

    Laurinavicius, Arvydas; Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Dasevicius, Darius; Elie, Nicolas; Iqbal, Yasir; Bor, Catherine

    2014-01-01

    Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists' VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90) exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2 with overestimation at low, and underestimation at high ends of the scale was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at the cut-off values of 10, 15, and 20%. Misclassification rate of 5-7% was achieved, compared to that of 11-18% for the VE-median-based prediction. Our experiments provide methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers.
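
    The measurement-error correction step can be sketched as follows (simulated calibration data, not the study's measurements): the DIA readout is regressed on the reference stereology count, and the fitted relation is inverted to map a new DIA value to a corrected Ki67 count estimate.

    ```python
    import numpy as np

    def inverse_regression_correction(ki67_count, ki67_dia):
        """Fit DIA = b0 + b1*Count on calibration data, then return a function
        that maps a new DIA reading to a corrected count via the inverse relation."""
        b1, b0 = np.polyfit(ki67_count, ki67_dia, 1)
        return lambda dia_new: (np.asarray(dia_new) - b0) / b1

    # simulated calibration spots: DIA underestimates high values and adds noise
    rng = np.random.default_rng(4)
    count = rng.uniform(2, 60, 164)
    dia = 2.0 + 0.8 * count + rng.normal(0, 2.5, count.size)
    correct = inverse_regression_correction(count, dia)
    print(correct([10.0, 25.0, 40.0]))   # corrected Ki67 LI estimates (%)
    ```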

  18. A methodology to ensure and improve accuracy of Ki67 labelling index estimation by automated digital image analysis in breast cancer tissue

    PubMed Central

    2014-01-01

    Introduction Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Methods Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. Results ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists’ VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90) exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2 with overestimation at low, and underestimation at high ends of the scale was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at the cut-off values of 10, 15, and 20%. Misclassification rate of 5-7% was achieved, compared to that of 11-18% for the VE-median-based prediction. Conclusions Our experiments provide methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers. PMID:24708745

  19. Finite element based model predictive control for active vibration suppression of a one-link flexible manipulator.

    PubMed

    Dubay, Rickey; Hassan, Marwan; Li, Chunying; Charest, Meaghan

    2014-09-01

    This paper presents a unique approach for active vibration control of a one-link flexible manipulator. The method combines a finite element model of the manipulator and an advanced model predictive controller to suppress vibration at its tip. This hybrid methodology improves significantly over the standard application of a predictive controller for vibration control. The finite element model used in place of standard modelling in the control algorithm provides a more accurate prediction of dynamic behavior, resulting in enhanced control. Closed loop control experiments were performed using the flexible manipulator, instrumented with strain gauges and piezoelectric actuators. In all instances, experimental and simulation results demonstrate that the finite element based predictive controller provides improved active vibration suppression in comparison with using a standard predictive control strategy. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
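
    A stripped-down version of the control idea, using a generic discrete state-space model (for instance one obtained by reducing a finite element model to a few modes) and an unconstrained finite-horizon predictive controller, is sketched below; the matrices, weights and horizon are illustrative only, and a single measured output is assumed.

    ```python
    import numpy as np

    def mpc_gain(A, B, C, horizon=20, q=1.0, r=0.01):
        """Unconstrained MPC for a single-output model: build the prediction
        matrices y = F x0 + Phi U, minimise sum q*y^2 + r*u^2 over the horizon,
        and return the first control move as a state-feedback gain (receding horizon)."""
        _, nu = B.shape
        F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(horizon)])
        Phi = np.zeros((horizon, horizon * nu))
        for i in range(horizon):
            for j in range(i + 1):
                Phi[i:i+1, j*nu:(j+1)*nu] = C @ np.linalg.matrix_power(A, i - j) @ B
        H = q * Phi.T @ Phi + r * np.eye(horizon * nu)
        K_all = np.linalg.solve(H, q * Phi.T @ F)   # optimal sequence U* = -K_all x0
        return K_all[:nu, :]                         # keep only the first move

    # toy lightly damped single-mode "tip vibration" model (discrete time)
    A = np.array([[0.995, 0.099], [-0.099, 0.985]])
    B = np.array([[0.0], [0.1]])
    C = np.array([[1.0, 0.0]])
    K = mpc_gain(A, B, C)

    x = np.array([1.0, 0.0])                         # initial tip deflection
    for _ in range(100):
        u = -K @ x
        x = A @ x + B @ u
    print(abs(x[0]))                                  # deflection decays under control
    ```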

  20. Flow unit modeling and fine-scale predicted permeability validation in Atokan sandstones: Norcan East Kansas

    USGS Publications Warehouse

    Bhattacharya, S.; Byrnes, A.P.; Watney, W.L.; Doveton, J.H.

    2008-01-01

    Characterizing the reservoir interval into flow units is an effective way to subdivide the net-pay zone into layers for reservoir simulation. Commonly used flow unit identification techniques require a reliable estimate of permeability in the net pay on a foot-by-foot basis. Most of the wells do not have cores, and the literature is replete with different kinds of correlations, transforms, and prediction methods for profiling permeability in pay. However, for robust flow unit determination, predicted permeability at noncored wells requires validation and, if necessary, refinement. This study outlines the use of a spreadsheet-based permeability validation technique to characterize flow units in wells from the Norcan East field, Clark County, Kansas, that produce from Atokan-aged fine- to very fine-grained quartzarenite sandstones interpreted to have been deposited in brackish-water, tidally dominated restricted tidal-flat, tidal-channel, tidal-bar, and estuary bay environments within a small incised-valley-fill system. The methodology outlined enables the identification of a fieldwide free-water level and validates and refines predicted permeability at 0.5-ft (0.15-m) intervals by iteratively reconciling differences in water saturation calculated from wire-line logs and a capillary-pressure formulation that models fine- to very fine-grained sandstone with diagenetic clay and silt or shale laminae. The effectiveness of this methodology was confirmed by successfully matching primary and secondary production histories using a flow unit-based reservoir model of the Norcan East field without permeability modifications. The methodologies discussed should prove useful for robust flow unit characterization of different kinds of reservoirs. Copyright © 2008. The American Association of Petroleum Geologists. All rights reserved.

  1. Reliable estimates of predictive uncertainty for an Alpine catchment using a non-parametric methodology

    NASA Astrophysics Data System (ADS)

    Matos, José P.; Schaefli, Bettina; Schleiss, Anton J.

    2017-04-01

    Uncertainty affects hydrological modelling efforts from the very measurements (or forecasts) that serve as inputs to the more or less inaccurate predictions that are produced. Uncertainty is truly inescapable in hydrology and yet, due to the theoretical and technical hurdles associated with its quantification, it is at times still neglected or estimated only qualitatively. In recent years the scientific community has made a significant effort towards quantifying this hydrologic prediction uncertainty. Despite this, most of the developed methodologies can be computationally demanding, are complex from a theoretical point of view, require substantial expertise to be employed, and are constrained by a number of assumptions about the model error distribution. These assumptions limit the reliability of many methods in case of errors that show particular cases of non-normality, heteroscedasticity, or autocorrelation. The present contribution builds on a non-parametric data-driven approach that was developed for uncertainty quantification in operational (real-time) forecasting settings. The approach is based on the concept of Pareto optimality and can be used as a standalone forecasting tool or as a postprocessor. By virtue of its non-parametric nature and a general operating principle, it can be applied directly and with ease to predictions of streamflow, water stage, or even accumulated runoff. Also, it is a methodology capable of coping with high heteroscedasticity and seasonal hydrological regimes (e.g. snowmelt and rainfall driven events in the same catchment). Finally, the training and operation of the model are very fast, making it a tool particularly adapted to operational use. To illustrate its practical use, the uncertainty quantification method is coupled with a process-based hydrological model to produce statistically reliable forecasts for an Alpine catchment located in Switzerland. Results are presented and discussed in terms of their reliability and resolution.

  2. A novel registration-based methodology for prediction of trabecular bone fabric from clinical QCT: A comprehensive analysis

    PubMed Central

    Reyes, Mauricio; Zysset, Philippe

    2017-01-01

    Osteoporosis leads to hip fractures in aging populations and is diagnosed by modern medical imaging techniques such as quantitative computed tomography (QCT). Hip fracture sites involve trabecular bone, whose strength is determined by volume fraction and orientation, known as fabric. However, bone fabric cannot be reliably assessed in clinical QCT images of proximal femur. Accordingly, we propose a novel registration-based estimation of bone fabric designed to preserve tensor properties of bone fabric and to map bone fabric by a global and local decomposition of the gradient of a non-rigid image registration transformation. Furthermore, no comprehensive analysis on the critical components of this methodology has been previously conducted. Hence, the aim of this work was to identify the best registration-based strategy to assign bone fabric to the QCT image of a patient’s proximal femur. The normalized correlation coefficient and curvature-based regularization were used for image-based registration and the Frobenius norm of the stretch tensor of the local gradient was selected to quantify the distance among the proximal femora in the population. Based on this distance, closest, farthest and mean femora with a distinction of sex were chosen as alternative atlases to evaluate their influence on bone fabric prediction. Second, we analyzed different tensor mapping schemes for bone fabric prediction: identity, rotation-only, rotation and stretch tensor. Third, we investigated the use of a population average fabric atlas. A leave one out (LOO) evaluation study was performed with a dual QCT and HR-pQCT database of 36 pairs of human femora. The quality of the fabric prediction was assessed with three metrics, the tensor norm (TN) error, the degree of anisotropy (DA) error and the angular deviation of the principal tensor direction (PTD). The closest femur atlas (CTP) with a full rotation (CR) for fabric mapping delivered the best results with a TN error of 7.3 ± 0.9%, a DA error of 6.6 ± 1.3% and a PTD error of 25 ± 2°. The closest to the population mean femur atlas (MTP) using the same mapping scheme yielded only slightly higher errors than CTP for substantially less computing efforts. The population average fabric atlas yielded substantially higher errors than the MTP with the CR mapping scheme. Accounting for sex did not bring any significant improvements. The identified fabric mapping methodology will be exploited in patient-specific QCT-based finite element analysis of the proximal femur to improve the prediction of hip fracture risk. PMID:29176881
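
    The tensor mapping schemes compared above can be illustrated with the polar decomposition of the local registration gradient: F = RU splits the local transformation into a rotation R and a stretch U, and the 'rotation-only' scheme maps a template fabric tensor M as R M R^T. The template tensor and local gradient below are made up for illustration.

    ```python
    import numpy as np
    from scipy.linalg import polar

    def map_fabric(M_template, F_local, scheme="rotation"):
        """Map a template fabric tensor through the local gradient F of a
        non-rigid registration. 'identity' keeps M unchanged; 'rotation' applies
        only the rotational part of the polar decomposition F = R U."""
        if scheme == "identity":
            return M_template.copy()
        R, U = polar(F_local)            # F = R @ U, R orthogonal, U symmetric PD
        if scheme == "rotation":
            return R @ M_template @ R.T
        raise ValueError("unknown scheme")

    # hypothetical template fabric (symmetric, positive definite) and local gradient
    M = np.diag([1.6, 1.0, 0.7])
    F = np.array([[0.95, 0.10, 0.00],
                  [-0.08, 1.05, 0.02],
                  [0.00, -0.03, 0.98]])
    M_mapped = map_fabric(M, F, scheme="rotation")
    print(np.linalg.eigvalsh(M_mapped))   # eigenvalues (hence anisotropy) preserved by rotation
    ```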

  3. Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology

    NASA Astrophysics Data System (ADS)

    Litvay, Robyn Olson

    Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses and, with profit margins for airline operations currently almost nonexistent, potentially negate any possible profit. Although optimization techniques have resulted in many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Research was conducted, using actual U.S. domestic flight data from a major airline, to develop a model to predict airline block times with increased accuracy and smaller variance of the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, and historic block time distributions. The estimation of block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline's network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average actual block times while minimizing the variation. Predictions of block times for the third-quarter months of July and August of 2011 were calculated using this new model. The resulting actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major U.S. domestic airline.

  4. REE radiation fault model: a tool for organizing and communicating radiation test data and constructing COTS-based spaceborne computing systems

    NASA Technical Reports Server (NTRS)

    Ferraro, R.; Some, R.

    2002-01-01

    The growth in data rates of instruments on future NASA spacecraft continues to outstrip the improvement in communications bandwidth and processing capabilities of radiation-hardened computers. Sophisticated autonomous operations strategies will further increase the processing workload. Given the reductions in spacecraft size and available power, standard radiation hardened computing systems alone will not be able to address the requirements of future missions. The REE project was intended to overcome this obstacle by developing a COTS-based supercomputer suitable for use as a science and autonomy data processor in most space environments. This development required a detailed knowledge of system behavior in the presence of Single Event Effect (SEE) induced faults so that mitigation strategies could be designed to recover system level reliability while maintaining the COTS throughput advantage. The REE project has developed a suite of tools and a methodology for predicting SEU induced transient fault rates in a range of natural space environments from ground-based radiation testing of component parts. In this paper we provide an overview of this methodology and tool set with a concentration on the radiation fault model and its use in the REE system development methodology. Using test data reported elsewhere in this and other conferences, we predict upset rates for a particular COTS single board computer configuration in several space environments.

  5. Algorithm for cellular reprogramming.

    PubMed

    Ronquist, Scott; Patterson, Geoff; Muir, Lindsey A; Lindsly, Stephen; Chen, Haiming; Brown, Markus; Wicha, Max S; Bloch, Anthony; Brockett, Roger; Rajapakse, Indika

    2017-11-07

    The day we understand the time evolution of subcellular events at a level of detail comparable to physical systems governed by Newton's laws of motion seems far away. Even so, quantitative approaches to cellular dynamics add to our understanding of cell biology. With data-guided frameworks we can develop better predictions about, and methods for, control over specific biological processes and system-wide cell behavior. Here we describe an approach for optimizing the use of transcription factors (TFs) in cellular reprogramming, based on a device commonly used in optimal control. We construct an approximate model for the natural evolution of a cell-cycle-synchronized population of human fibroblasts, based on data obtained by sampling the expression of 22,083 genes at several time points during the cell cycle. To arrive at a model of moderate complexity, we cluster gene expression based on division of the genome into topologically associating domains (TADs) and then model the dynamics of TAD expression levels. Based on this dynamical model and additional data, such as known TF binding sites and activity, we develop a methodology for identifying the top TF candidates for a specific cellular reprogramming task. Our data-guided methodology identifies a number of TFs previously validated for reprogramming and/or natural differentiation and predicts some potentially useful combinations of TFs. Our findings highlight the immense potential of dynamical models, mathematics, and data-guided methodologies for improving strategies for control over biological processes. Copyright © 2017 the Author(s). Published by PNAS.

  6. Simulation-Based Probabilistic Tsunami Hazard Analysis: Empirical and Robust Hazard Predictions

    NASA Astrophysics Data System (ADS)

    De Risi, Raffaele; Goda, Katsuichiro

    2017-08-01

    Probabilistic tsunami hazard analysis (PTHA) is the prerequisite for rigorous risk assessment and thus for decision-making regarding risk mitigation strategies. This paper proposes a new simulation-based methodology for tsunami hazard assessment for a specific site of an engineering project along the coast, or, more broadly, for a wider tsunami-prone region. The methodology incorporates numerous uncertain parameters that are related to geophysical processes by adopting new scaling relationships for tsunamigenic seismic regions. Through the proposed methodology it is possible to obtain either a tsunami hazard curve for a single location, that is the representation of a tsunami intensity measure (such as inundation depth) versus its mean annual rate of occurrence, or tsunami hazard maps, representing the expected tsunami intensity measures within a geographical area, for a specific probability of occurrence in a given time window. In addition to the conventional tsunami hazard curve that is based on an empirical statistical representation of the simulation-based PTHA results, this study presents a robust tsunami hazard curve, which is based on a Bayesian fitting methodology. The robust approach allows a significant reduction of the number of simulations and, therefore, a reduction of the computational effort. Both methods produce a central estimate of the hazard as well as a confidence interval, facilitating the rigorous quantification of the hazard uncertainties.
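
    The empirical hazard-curve step can be sketched as follows: given stochastic tsunami scenarios with annual occurrence rates and simulated inundation depths at a site, the mean annual rate of exceedance of each intensity level is the sum of the rates of the scenarios exceeding it. The numbers below are synthetic, and the Bayesian robust fitting of the study is not reproduced.

    ```python
    import numpy as np

    def empirical_hazard_curve(depths, annual_rates, levels):
        """Mean annual rate at which simulated inundation depth exceeds each level."""
        depths = np.asarray(depths)
        annual_rates = np.asarray(annual_rates)
        return np.array([annual_rates[depths > h].sum() for h in levels])

    # synthetic scenario set: 5000 stochastic events sharing a total rate of 0.01 /yr
    rng = np.random.default_rng(5)
    depths = rng.lognormal(mean=0.0, sigma=0.8, size=5000)   # inundation depth [m]
    rates = np.full(5000, 0.01 / 5000)                        # per-scenario annual rate
    levels = np.array([0.5, 1.0, 2.0, 5.0])
    rate_of_exceedance = empirical_hazard_curve(depths, rates, levels)
    print(rate_of_exceedance)
    # probability of at least one exceedance in 50 years (Poisson assumption)
    print(1.0 - np.exp(-rate_of_exceedance * 50.0))
    ```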

  7. [Optimization of the process of hydrolyzing icariin to baohuoside I by cellulase based on Plackett-Burman design combined with CCD response surface methodology].

    PubMed

    Song, Chuan-xia; Chen, Hong-mei; Dai, Yu; Kang, Min; Hu, Jia; Deng, Yun

    2014-11-01

    To optimize the process of hydrolyzing icariin to baohuoside I by cellulase using a Plackett-Burman design combined with central composite design (CCD) response surface methodology. The main influencing factors were selected by Plackett-Burman design, and CCD response surface methodology was then used to optimize the hydrolysis process. Taking substrate concentration, buffer pH and reaction time as independent variables, and the conversion rate of icariin as the dependent variable, a full quadratic response surface was fitted by regression between the independent and dependent variables; the optimum hydrolysis conditions were analyzed from 3D surface plots, and verification tests and predictive analysis were carried out. The best enzymatic hydrolysis conditions were as follows: substrate concentration 8.23 mg/mL, buffer pH 5.12, and reaction time 35.34 h. The optimum process for the cellulase-catalyzed hydrolysis of icariin to baohuoside I was thus determined by Plackett-Burman design combined with CCD response surface methodology. The optimized enzymatic hydrolysis process is simple, convenient, accurate, reproducible and predictable.
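
    As a minimal illustration of the CCD/response-surface step (synthetic data; the factor names follow the abstract but all coefficients are invented), the sketch below fits a full quadratic model in substrate concentration, pH and reaction time and locates the stationary point of the fitted surface.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def quadratic_design(X):
        """Full quadratic model matrix: intercept, linear, interaction and squared terms."""
        cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
        cols += [X[:, i] * X[:, j]
                 for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
        return np.column_stack(cols)

    # synthetic CCD-like data around (8 mg/mL, pH 5.1, 35 h) with an invented response
    rng = np.random.default_rng(6)
    X = np.column_stack([rng.uniform(4, 12, 60),      # substrate concentration
                         rng.uniform(4, 6, 60),       # buffer pH
                         rng.uniform(20, 50, 60)])    # reaction time
    true = 90 - 0.8*(X[:, 0]-8)**2 - 6*(X[:, 1]-5.1)**2 - 0.05*(X[:, 2]-35)**2
    y = true + rng.normal(0, 1.0, 60)                 # conversion rate (%)

    beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)

    # stationary point of the fitted surface: solve grad(b.x + x'Bx) = 0
    k = X.shape[1]
    b_lin = beta[1:1+k]
    B = np.zeros((k, k))
    for idx, (i, j) in enumerate(combinations_with_replacement(range(k), 2)):
        c = beta[1 + k + idx]
        B[i, j] += c / (1 if i == j else 2)
        B[j, i] = B[i, j]
    print(np.linalg.solve(-2*B, b_lin))   # estimated optimum (conc, pH, time)
    ```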

  8. User's manual for the ALS base heating prediction code, volume 2

    NASA Technical Reports Server (NTRS)

    Reardon, John E.; Fulton, Michael S.

    1992-01-01

    The Advanced Launch System (ALS) Base Heating Prediction Code is based on a generalization of first principles in the prediction of plume induced base convective heating and plume radiation. It should be considered to be an approximate method for evaluating trends as a function of configuration variables because the processes being modeled are too complex to allow an accurate generalization. The convective methodology is based upon generalizing trends from four nozzle configurations, so an extension to use the code with strap-on boosters, multiple nozzle sizes, and variations in the propellants and chamber pressure histories cannot be precisely treated. The plume radiation is more amenable to precise computer prediction, but simplified assumptions are required to model the various aspects of the candidate configurations. Perhaps the most difficult area to characterize is the variation of radiation with altitude. The theory in the radiation predictions is described in more detail. This report is intended to familiarize a user with the interface operation and options, to summarize the limitations and restrictions of the code, and to provide information to assist in installing the code.

  9. Online intelligent controllers for an enzyme recovery plant: design methodology and performance.

    PubMed

    Leite, M S; Fujiki, T L; Silva, F V; Fileti, A M F

    2010-12-27

    This paper focuses on the development of intelligent controllers for use in a process of enzyme recovery from pineapple rind. The proteolytic enzyme bromelain (EC 3.4.22.4) is precipitated with alcohol at low temperature in a fed-batch jacketed tank. Temperature control is crucial to avoid irreversible protein denaturation. Fuzzy or neural controllers offer a way of implementing solutions that cover dynamic and nonlinear processes. The design methodology and a comparative study on the performance of fuzzy-PI, neurofuzzy, and neural network intelligent controllers are presented. To tune the fuzzy PI Mamdani controller, various universes of discourse, rule bases, and membership function support sets were tested. A neurofuzzy inference system (ANFIS), based on Takagi-Sugeno rules, and a model predictive controller, based on neural modeling, were developed and tested as well. Using a Fieldbus network architecture, a coolant variable speed pump was driven by the controllers. The experimental results show the effectiveness of fuzzy controllers in comparison to the neural predictive control. The fuzzy PI controller exhibited a reduced error parameter (ITAE), lower power consumption, and better recovery of enzyme activity.

  10. Online Intelligent Controllers for an Enzyme Recovery Plant: Design Methodology and Performance

    PubMed Central

    Leite, M. S.; Fujiki, T. L.; Silva, F. V.; Fileti, A. M. F.

    2010-01-01

    This paper focuses on the development of intelligent controllers for use in a process of enzyme recovery from pineapple rind. The proteolytic enzyme bromelain (EC 3.4.22.4) is precipitated with alcohol at low temperature in a fed-batch jacketed tank. Temperature control is crucial to avoid irreversible protein denaturation. Fuzzy or neural controllers offer a way of implementing solutions that cover dynamic and nonlinear processes. The design methodology and a comparative study on the performance of fuzzy-PI, neurofuzzy, and neural network intelligent controllers are presented. To tune the fuzzy PI Mamdani controller, various universes of discourse, rule bases, and membership function support sets were tested. A neurofuzzy inference system (ANFIS), based on Takagi-Sugeno rules, and a model predictive controller, based on neural modeling, were developed and tested as well. Using a Fieldbus network architecture, a coolant variable speed pump was driven by the controllers. The experimental results show the effectiveness of fuzzy controllers in comparison to the neural predictive control. The fuzzy PI controller exhibited a reduced error parameter (ITAE), lower power consumption, and better recovery of enzyme activity. PMID:21234106

  11. Tidal current energy potential of Nalón river estuary assessment using a high precision flow model

    NASA Astrophysics Data System (ADS)

    Badano, Nicolás; Valdés, Rodolfo Espina; Álvarez, Eduardo Álvarez

    2018-05-01

    Obtaining energy from tidal currents at onshore locations is of great interest due to the proximity to the points of consumption. This opens the door to the feasibility of new installations based on hydrokinetic microturbines, even in zones of moderate current speed. In this context, the accuracy of energy predictions based on hydrodynamic models is of paramount importance. This research presents a high-precision methodology based on a multidimensional hydrodynamic model that is used to study the energy potential of estuaries. Moreover, it is able to estimate the flow variations caused by microturbine installations. The paper also shows the results obtained from the application of the methodology in a study of the Nalón river mouth (Asturias, Spain).

  12. Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN

    NASA Astrophysics Data System (ADS)

    Peter, Josephine; Doloi, B.; Bhattacharyya, B.

    2011-01-01

    The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modeling, and with an optimization analysis of the marking characteristics on alumina ceramic. The experiments have been planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The RSM optimal output is validated through experimentation and an ANN predictive model. A good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.

  13. Improving orbit prediction accuracy through supervised machine learning

    NASA Astrophysics Data System (ADS)

    Peng, Hao; Bai, Xiaoli

    2018-05-01

    Due to the lack of information such as the space environment condition and resident space objects' (RSOs') body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs' trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: (1) the ML model can be used to improve the same RSO's orbit information that is not available during the learning process but shares the same time interval as the training data; (2) the ML model can be used to improve predictions of the same RSO at future epochs; and (3) the ML model based on a RSO can be applied to other RSOs that share some common features.
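
    The learning layer can be pictured as a regressor trained on historical differences between physics-based predictions and observed states, then used to correct new physics-based predictions. Everything below (the one-dimensional state, the feature choice, the toy 'physics' propagator) is an illustrative stand-in, not the paper's setup.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(7)

    def physics_predict(state, dt):
        """Toy stand-in for a physics-based propagator with an unmodelled bias."""
        return state * np.exp(-0.001 * dt) + 0.05 * dt

    def observe_truth(state, dt):
        """Toy 'true' dynamics (slightly different physics) plus measurement noise."""
        return state * np.exp(-0.0012 * dt) + 0.048 * dt + rng.normal(0.0, 0.01)

    # training set: features = (initial state, prediction horizon), target = prediction error
    states = rng.uniform(0.5, 2.0, 500)
    horizons = rng.uniform(1.0, 100.0, 500)
    physics = physics_predict(states, horizons)
    observed = np.array([observe_truth(s, dt) for s, dt in zip(states, horizons)])
    error_model = GradientBoostingRegressor().fit(
        np.column_stack([states, horizons]), observed - physics)

    # correct a new physics-based prediction with the learned error model
    s0, dt = 1.3, 60.0
    raw = physics_predict(s0, dt)
    corrected = raw + error_model.predict([[s0, dt]])[0]
    print(raw, corrected, observe_truth(s0, dt))
    ```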

  14. Approaches to developing alternative and predictive toxicology based on PBPK/PD and QSAR modeling.

    PubMed Central

    Yang, R S; Thomas, R S; Gustafson, D L; Campain, J; Benjamin, S A; Verhaar, H J; Mumtaz, M M

    1998-01-01

    Systematic toxicity testing, using conventional toxicology methodologies, of single chemicals and chemical mixtures is highly impractical because of the immense numbers of chemicals and chemical mixtures involved and the limited scientific resources. Therefore, the development of unconventional, efficient, and predictive toxicology methods is imperative. Using carcinogenicity as an end point, we present approaches for developing predictive tools for toxicologic evaluation of chemicals and chemical mixtures relevant to environmental contamination. Central to the approaches presented is the integration of physiologically based pharmacokinetic/pharmacodynamic (PBPK/PD) and quantitative structure-activity relationship (QSAR) modeling with focused mechanistically based experimental toxicology. In this development, molecular and cellular biomarkers critical to the carcinogenesis process are evaluated quantitatively between different chemicals and/or chemical mixtures. Examples presented include the integration of PBPK/PD and QSAR modeling with a time-course medium-term liver foci assay, molecular biology and cell proliferation studies, Fourier transform infrared spectroscopic analyses of DNA changes, and cancer modeling to assess and attempt to predict the carcinogenicity of the series of 12 chlorobenzene isomers. Also presented is an ongoing effort to develop and apply a similar approach to chemical mixtures using in vitro cell culture (Syrian hamster embryo cell transformation assay and human keratinocytes) methodologies and in vivo studies. The promise and pitfalls of these developments are elaborated. When successfully applied, these approaches may greatly reduce animal usage, personnel, resources, and time required to evaluate the carcinogenicity of chemicals and chemical mixtures. PMID:9860897

  15. Reticulo-rumen temperature as a predictor of calving time in primiparous and parous Holstein females.

    PubMed

    Costa, J B G; Ahola, J K; Weller, Z D; Peel, R K; Whittier, J C; Barcellos, J O J

    2016-06-01

    The objective of this research was to define and analyze drops in reticulo-rumen temperature (Trr) as an indicator of calving time in Holstein females. Data were collected from 111 primiparous and 150 parous Holstein females between November 2012 and March 2013. Between -15 and -5 d relative to anticipated calving date, each female received an orally administered temperature sensing reticulo-rumen bolus that collected temperatures hourly. Daily mean Trr was calculated from d -5 to 0 relative to calving, using either all Trr values (A-Trr) or only Trr values ≥37.7°C (W-Trr) not altered by water intake. To identify a Trr drop, 2 methodologies for computing the baseline temperature were used. Generalized linear models (GLM) were used to estimate the probability of calving within the next 12 or 24 h for primiparous, parous, and all females, based on the size of the Trr drop. For all GLM, a large drop in Trr corresponded with a large estimated probability of calving. The predictive power of the GLM was assessed using receiver-operating characteristic (ROC) curves. The ROC curve analyses showed that all models, regardless of methodology in calculation of the baseline or tested category (primiparous or parous), were able to predict calving; however, area under the ROC curve values, an indication of prediction quality, were greater for methods predicting calving within 24 h. Further comparisons between GLM for primiparous and parous, and using baseline 1 and 2, provide insight into the differences in predictive performance. Based on the GLM, Trr drops of 0.2, 0.3, and 0.4°C were identified as useful indicators of parturition and further analyzed using sensitivity, specificity, and diagnostic odds ratios. Based on sensitivity, specificity, and diagnostic odds ratios, the best indicator of calving was an average Trr drop ≥0.2°C, regardless of methodology used to compute the baseline or category of animal evaluated. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
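
    As an illustration of the GLM step described above, the hedged Python sketch below fits a logistic regression of calving-within-24-h on the size of a synthetic Trr drop; the data, coefficients, and thresholds are invented and do not reproduce the study's baselines or covariates.

      # Illustrative only: logistic GLM linking the size of a reticulo-rumen
      # temperature drop to the probability of calving within 24 h (synthetic data).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)

      trr_drop = rng.uniform(0.0, 0.8, size=300).reshape(-1, 1)      # deg C
      p_true = 1.0 / (1.0 + np.exp(-(10.0 * trr_drop[:, 0] - 3.0)))  # assumed link
      calved_24h = rng.binomial(1, p_true)                           # 1 = calved

      glm = LogisticRegression()
      glm.fit(trr_drop, calved_24h)

      # Estimated calving probability for drops of 0.2, 0.3 and 0.4 deg C
      for drop in (0.2, 0.3, 0.4):
          p = glm.predict_proba([[drop]])[0, 1]
          print(f"Trr drop {drop:.1f} deg C -> P(calving within 24 h) = {p:.2f}")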

  16. Identifying Model-Based Reconfiguration Goals through Functional Deficiencies

    NASA Technical Reports Server (NTRS)

    Benazera, Emmanuel; Trave-Massuyes, Louise

    2004-01-01

    Model-based diagnosis is now advanced to the point that autonomous systems can face uncertain and faulty situations with success. The next step toward more autonomy is to have the system recover itself after faults occur, a process known as model-based reconfiguration. After faults occur, given a prediction of the nominal behavior of the system and the result of the diagnosis operation, this paper details how to automatically determine the functional deficiencies of the system. These deficiencies are characterized in the case of uncertain state estimates. A methodology is then presented to determine the reconfiguration goals based on the deficiencies. Finally, a recovery process interleaves planning and model predictive control to restore the functionalities in prioritized order.

  17. Reporting and methodological quality of meta-analyses in urological literature.

    PubMed

    Xia, Leilei; Xu, Jing; Guzzo, Thomas J

    2017-01-01

    To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analyses) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies), and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and an "a priori" design were associated with superior methodological quality. Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol.

  18. Building protein-protein interaction networks for Leishmania species through protein structural information.

    PubMed

    Dos Santos Vasconcelos, Crhisllane Rafaele; de Lima Campos, Túlio; Rezende, Antonio Mauro

    2018-03-06

    Systematic analysis of a parasite interactome is a key approach to understand different biological processes. It makes it possible to elucidate disease mechanisms, to predict protein functions and to select promising targets for drug development. Currently, several approaches for protein interaction prediction for non-model species incorporate only small fractions of the entire proteomes and their interactions. Based on this perspective, this study presents an integration of computational methodologies, protein network predictions and comparative analysis of the protozoan species Leishmania braziliensis and Leishmania infantum. These parasites cause Leishmaniasis, a worldwide distributed and neglected disease, with limited treatment options using currently available drugs. The predicted interactions were obtained from a meta-approach, applying rigid body docking tests and template-based docking on protein structures predicted by different comparative modeling techniques. In addition, we trained a machine-learning algorithm (Gradient Boosting) using docking information performed on a curated set of positive and negative protein interaction data. Our final model obtained an AUC = 0.88, with recall = 0.69, specificity = 0.88 and precision = 0.83. Using this approach, it was possible to confidently predict 681 protein structures and 6198 protein interactions for L. braziliensis, and 708 protein structures and 7391 protein interactions for L. infantum. The predicted networks were integrated with protein interaction data already available, analyzed using several topological features and used to classify proteins as essential for network stability. The present study demonstrates the importance of integrating different interaction prediction methodologies to increase the coverage of the protein interaction networks of the studied proteomes, and it also makes available protein structures and interactions not previously reported.
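
    A hedged sketch of the machine-learning step named above (a Gradient Boosting classifier trained on docking-derived features and scored with AUC, recall and precision) is shown below; the features, labels, and split are synthetic placeholders, not the curated Leishmania data.

      # Hedged sketch: Gradient Boosting on docking-derived features of protein
      # pairs, scored with AUC, recall and precision (synthetic placeholders).
      import numpy as np
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import roc_auc_score, recall_score, precision_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(2)

      # Hypothetical features (e.g., docking score, interface area, contact count)
      # for positive (interacting) and negative (non-interacting) protein pairs.
      X = rng.normal(size=(1000, 5))
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

      y_prob = clf.predict_proba(X_te)[:, 1]
      y_pred = clf.predict(X_te)
      print("AUC      :", round(roc_auc_score(y_te, y_prob), 2))
      print("recall   :", round(recall_score(y_te, y_pred), 2))
      print("precision:", round(precision_score(y_te, y_pred), 2))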

  19. Phase behaviour, interactions, and structural studies of (amines+ionic liquids) binary mixtures.

    PubMed

    Jacquemin, Johan; Bendová, Magdalena; Sedláková, Zuzana; Blesic, Marijana; Holbrey, John D; Mullan, Claire L; Youngs, Tristan G A; Pison, Laure; Wagner, Zdeněk; Aim, Karel; Costa Gomes, Margarida F; Hardacre, Christopher

    2012-05-14

    We present a study on the phase equilibrium behaviour of binary mixtures containing two 1-alkyl-3-methylimidazolium bis{(trifluoromethyl)sulfonyl}imide-based ionic liquids, [C(n)mim][NTf(2)] (n=2 and 4), mixed with diethylamine or triethylamine as a function of temperature and composition using different experimental techniques. In this work, two systems showing an LCST and one system with a possible hourglass-shaped phase diagram were measured. Their phase behaviours are then correlated and predicted by using Flory-Huggins equations and the UNIQUAC method implemented in Aspen. The potential of the COSMO-RS methodology to predict the phase equilibria was also tested for the binary systems studied. However, this methodology is unable to predict the trends obtained experimentally, limiting its use for systems involving amines in ionic liquids. The liquid-state structure of the binary mixture ([C(2)mim][NTf(2)]+diethylamine) is also investigated by molecular dynamics simulation and neutron diffraction. Finally, the absorption of gaseous ethane by the ([C(2)mim][NTf(2)]+diethylamine) binary mixture is determined and compared with that observed in the pure solvents. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Applications of a damage tolerance analysis methodology in aircraft design and production

    NASA Technical Reports Server (NTRS)

    Woodward, M. R.; Owens, S. D.; Law, G. E.; Mignery, L. A.

    1992-01-01

    Objectives of customer-mandated aircraft structural integrity initiatives in design are to guide material selection, to incorporate fracture-resistant concepts in the design, and to utilize damage-tolerance-based allowables and planned inspection procedures necessary to enhance the safety and reliability of manned flight vehicles. However, validated fracture analysis tools for composite structures are needed to accomplish these objectives in a timely and economical manner. This paper briefly describes the development, validation, and application of a damage tolerance methodology for composite airframe structures. A closed-form analysis code, entitled SUBLAM, was developed to predict the critical biaxial strain state necessary to cause sublaminate buckling-induced delamination extension in an impact damaged composite laminate. An embedded elliptical delamination separating a thin sublaminate from a thick parent laminate is modelled. Predicted failure strains were correlated against a variety of experimental data that included results from compression after impact coupon and element tests. An integrated analysis package was developed to predict damage-tolerance-based margin-of-safety (MS) using NASTRAN generated loads and element information. Damage tolerance aspects of new concepts are quickly and cost-effectively determined without the need for excessive testing.

  1. Parts and Components Reliability Assessment: A Cost Effective Approach

    NASA Technical Reports Server (NTRS)

    Lee, Lydia

    2009-01-01

    System reliability assessment is a methodology which incorporates reliability analyses performed at parts and components level such as Reliability Prediction, Failure Modes and Effects Analysis (FMEA) and Fault Tree Analysis (FTA) to assess risks, perform design tradeoffs, and therefore, to ensure effective productivity and/or mission success. The system reliability is used to optimize the product design to accommodate today's mandated budget, manpower, and schedule constraints. Standards-based reliability assessment is an effective approach consisting of reliability predictions together with other reliability analyses for electronic, electrical, and electro-mechanical (EEE) complex parts and components of large systems based on failure rate estimates published by the United States (U.S.) military or commercial standards and handbooks. Many of these standards are globally accepted and recognized. The reliability assessment is especially useful during the initial stages when the system design is still in development and hard failure data is not yet available or manufacturers are not contractually obliged by their customers to publish the reliability estimates/predictions for their parts and components. This paper presents a methodology to assess system reliability using parts and components reliability estimates to ensure effective productivity and/or mission success in an efficient manner, at low cost, and on a tight schedule.

  2. Some practical observations on the accelerated testing of Nickel-Cadmium Cells

    NASA Technical Reports Server (NTRS)

    Mcdermott, P. P.

    1979-01-01

    A large-scale test of 6.0 Ah nickel-cadmium cells conducted at the Naval Weapons Support Center, Crane, Indiana, has demonstrated a methodology for predicting battery life based on failure data from cells cycled in an accelerated mode. After examining eight variables used to accelerate failure, it was determined that temperature and depth of discharge were the most reliable and efficient parameters for use in accelerating failure and for predicting life.

  3. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience

    PubMed Central

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli to the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence. PMID:26029135

  4. From the eyes and the heart: a novel eye-gaze metric that predicts video preferences of a large audience.

    PubMed

    Christoforou, Christoforos; Christou-Champi, Spyros; Constantinidou, Fofi; Theodorou, Maria

    2015-01-01

    Eye-tracking has been extensively used to quantify audience preferences in the context of marketing and advertising research, primarily in methodologies involving static images or stimuli (i.e., advertising, shelf testing, and website usability). However, these methodologies do not generalize to narrative-based video stimuli where a specific storyline is meant to be communicated to the audience. In this paper, a novel metric based on eye-gaze dispersion (both within and across viewings) that quantifies the impact of narrative-based video stimuli to the preferences of large audiences is presented. The metric is validated in predicting the performance of video advertisements aired during the 2014 Super Bowl final. In particular, the metric is shown to explain 70% of the variance in likeability scores of the 2014 Super Bowl ads as measured by the USA TODAY Ad-Meter. In addition, by comparing the proposed metric with Heart Rate Variability (HRV) indices, we have associated the metric with biological processes relating to attention allocation. The underlying idea behind the proposed metric suggests a shift in perspective when it comes to evaluating narrative-based video stimuli. In particular, it suggests that audience preferences on video are modulated by the level of viewers' lack of attention allocation. The proposed metric can be calculated on any narrative-based video stimuli (i.e., movie, narrative content, emotional content, etc.), and thus has the potential to facilitate the use of such stimuli in several contexts: prediction of audience preferences of movies, quantitative assessment of entertainment pieces, prediction of the impact of movie trailers, identification of group and individual differences in the study of attention-deficit disorders, and the study of desensitization to media violence.

  5. Pattern recognition of satellite cloud imagery for improved weather prediction

    NASA Technical Reports Server (NTRS)

    Gautier, Catherine; Somerville, Richard C. J.; Volfson, Leonid B.

    1986-01-01

    The major accomplishment was the successful development of a method for extracting time derivative information from geostationary meteorological satellite imagery. This research is a proof-of-concept study which demonstrates the feasibility of using pattern recognition techniques and a statistical cloud classification method to estimate time rate of change of large-scale meteorological fields from remote sensing data. The cloud classification methodology is based on typical shape function analysis of parameter sets characterizing the cloud fields. The three specific technical objectives, all of which were successfully achieved, are as follows: develop and test a cloud classification technique based on pattern recognition methods, suitable for the analysis of visible and infrared geostationary satellite VISSR imagery; develop and test a methodology for intercomparing successive images using the cloud classification technique, so as to obtain estimates of the time rate of change of meteorological fields; and implement this technique in a testbed system incorporating an interactive graphics terminal to determine the feasibility of extracting time derivative information suitable for comparison with numerical weather prediction products.

  6. Expert system and process optimization techniques for real-time monitoring and control of plasma processes

    NASA Astrophysics Data System (ADS)

    Cheng, Jie; Qian, Zhaogang; Irani, Keki B.; Etemad, Hossein; Elta, Michael E.

    1991-03-01

    To meet the ever-increasing demand of the rapidly-growing semiconductor manufacturing industry, it is critical to have a comprehensive methodology integrating techniques for process optimization, real-time monitoring, and adaptive process control. To this end, we have developed an integrated knowledge-based approach combining the latest expert system technology, machine learning methods, and traditional statistical process control (SPC) techniques. This knowledge-based approach is advantageous in that it makes it possible for the task of process optimization and adaptive control to be performed consistently and predictably. Furthermore, this approach can be used to construct high-level, qualitative descriptions of processes and thus make the process behavior easy to monitor, predict, and control. Two software packages, RIST (Rule Induction and Statistical Testing) and KARSM (Knowledge Acquisition from Response Surface Methodology), have been developed and incorporated with two commercially available packages, G2 (a real-time expert system) and ULTRAMAX (a tool for sequential process optimization).

  7. Computational Study of Laminar Flow Control on a Subsonic Swept Wing Using Discrete Roughness Elements

    NASA Technical Reports Server (NTRS)

    Li, Fei; Choudhari, Meelan M.; Chang, Chau-Lyan; Streett, Craig L.; Carpenter, Mark H.

    2011-01-01

    A combination of parabolized stability equations and secondary instability theory has been applied to a low-speed swept airfoil model with a chord Reynolds number of 7.15 million, with the goals of (i) evaluating this methodology in the context of transition prediction for a known configuration for which roughness-based crossflow transition control has been demonstrated under flight conditions and (ii) analyzing the mechanism of transition delay via the introduction of discrete roughness elements (DRE). Roughness-based transition control involves controlled seeding of suitable, subdominant crossflow modes, so as to weaken the growth of naturally occurring, linearly more unstable crossflow modes. Therefore, a synthesis of receptivity, linear and nonlinear growth of stationary crossflow disturbances, and the ensuing development of high frequency secondary instabilities is desirable to understand the experimentally observed transition behavior. With further validation, such higher fidelity prediction methodology could be utilized to assess the potential for crossflow transition control at even higher Reynolds numbers, where experimental data is currently unavailable.

  8. Multiphysics Thermal-Fluid Design Analysis of a Non-Nuclear Tester for Hot-Hydrogen Materials and Component Development

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See; Foote, John; Litchford, Ron

    2006-01-01

    The objective of this effort is to perform design analyses for a non-nuclear hot-hydrogen materials tester, as a first step towards developing efficient and accurate multiphysics, thermo-fluid computational methodology to predict environments for hypothetical solid-core, nuclear thermal engine thrust chamber design and analysis. The computational methodology is based on a multidimensional, finite-volume, turbulent, chemically reacting, thermally radiating, unstructured-grid, and pressure-based formulation. The multiphysics invoked in this study include hydrogen dissociation kinetics and thermodynamics, turbulent flow, convective, and thermal radiative heat transfers. The goals of the design analyses are to maintain maximum hot-hydrogen jet impingement energy and to minimize chamber wall heating. The results of analyses on three test fixture configurations and the rationale for final selection are presented. The interrogation of physics revealed that reactions of hydrogen dissociation and recombination are highly correlated with local temperature and are necessary for accurate prediction of the hot-hydrogen jet temperature.

  9. Accurate electrical prediction of memory array through SEM-based edge-contour extraction using SPICE simulation

    NASA Astrophysics Data System (ADS)

    Shauly, Eitan; Rotstein, Israel; Peltinov, Ram; Latinski, Sergei; Adan, Ofer; Levi, Shimon; Menadeva, Ovadya

    2009-03-01

    Continuing transistor scaling efforts toward smaller devices, similar (or larger) drive current per micrometer, and faster devices increase the challenge of predicting and controlling the transistor off-state current. Typically, electrical simulators like SPICE use the design intent (as-drawn GDS data). In more sophisticated cases, the simulators are fed with the pattern after lithography and etch process simulations. As the importance of electrical simulation accuracy is increasing and leakage is becoming more dominant, there is a need to feed these simulators with more accurate information extracted from physical on-silicon transistors. In this paper, we used our methodology to predict changes in device performance due to systematic lithography and etch effects. In general, the methodology consists of using OPCCmaxTM for systematic Edge-Contour-Extraction (ECE) from transistors along the manufacturing flow, including any image distortions like line-end shortening, corner rounding, and line-edge roughness. These measurements are used for SPICE modeling. A possible application of this new metrology is to provide, ahead of time, physical and electrical statistical data, improving time to market. In this work, we applied our methodology to analyze small and large arrays of 2.14um2 6T-SRAM cells, manufactured using the Tower Standard Logic for General Purposes Platform. 4 out of the 6 transistors used "U-Shape AA", known to have higher variability. The predicted electrical performance of the transistors' drive current and leakage current, in terms of nominal values and variability, is presented. We also used the methodology to analyze an entire SRAM block array; studies of isolation leakage and variability are presented.

  10. The determination of operational and support requirements and costs during the conceptual design of space systems

    NASA Technical Reports Server (NTRS)

    Ebeling, Charles; Beasley, Kenneth D.

    1992-01-01

    The first year of research to provide NASA support in predicting operational and support parameters and costs of proposed space systems is reported. Some of the specific research objectives were (1) to develop a methodology for deriving reliability and maintainability parameters and, based upon their estimates, determine the operational capability and support costs, and (2) to identify data sources and establish an initial data base to implement the methodology. Implementation of the methodology is accomplished through the development of a comprehensive computer model. While the model appears to work reasonably well when applied to aircraft systems, it was not accurate when used for space systems. The model is dynamic and should be updated as new data become available. It is particularly important to integrate the current aircraft data base with data obtained from the Space Shuttle and other space systems since subsystems unique to a space vehicle require data not available from aircraft. This research only addressed the major subsystems on the vehicle.

  11. Nigerian Library Staff and Their Perceptions of Health Risks Posed by Using Computer-Based Systems in University Libraries

    ERIC Educational Resources Information Center

    Uwaifo, Stephen Osahon

    2008-01-01

    Purpose: The paper seeks to examine the health risks faced when using computer-based systems by library staff in Nigerian libraries. Design/methodology/approach: The paper uses a survey research approach to carry out this investigation. Findings: The investigation reveals that the perceived health risk does not predict perceived ease of use of…

  12. Error-rate prediction for programmable circuits: methodology, tools and studied cases

    NASA Astrophysics Data System (ADS)

    Velazco, Raoul

    2013-05-01

    This work presents an approach to predict the error rates due to Single Event Upsets (SEU) occurring in programmable circuits as a consequence of the impact of energetic particles present in the environment in which the circuits operate. For a chosen application, the error rate is predicted by combining the results obtained from radiation ground testing with the results of fault injection campaigns performed off-beam, during which huge numbers of SEUs are injected during the execution of the studied application. The goal of this strategy is to obtain accurate results about different applications' error rates, without using particle accelerator facilities, thus significantly reducing the cost of the sensitivity evaluation. As a case study, this methodology was applied to a complex processor, the PowerPC 7448, executing a program derived from a real space application, and to a crypto-processor application implemented in an SRAM-based FPGA and accepted to be embedded in the payload of a NASA scientific satellite. The accuracy of predicted error rates was confirmed by comparing, for the same circuit and application, predictions with measurements issued from radiation ground testing performed at the Cyclone cyclotron of the Heavy Ion Facility (HIF) in Louvain-la-Neuve (Belgium).
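
    A back-of-the-envelope Python sketch of this combination is given below: the device SEU rate comes from a beam-test cross-section and an assumed particle flux, and fault-injection results supply the fraction of SEUs that corrupt the application. Every constant is an invented placeholder, not a measured value for the PowerPC 7448 or the FPGA design.

      # Hedged sketch of combining radiation ground testing with fault injection.
      # All numbers are invented placeholders for illustration only.

      sigma_seu_cm2_per_bit = 1.0e-14   # static cross-section from beam testing
      n_bits = 32.0e6                   # assumed number of sensitive bits
      flux = 1.0e-3                     # particles / (cm^2 * s) in the target orbit

      injected_faults = 100_000         # SEUs injected off-beam during execution
      observed_errors = 1_800           # runs whose application result was corrupted

      device_seu_rate = sigma_seu_cm2_per_bit * n_bits * flux   # SEUs per second
      error_fraction = observed_errors / injected_faults        # app sensitivity
      application_error_rate = device_seu_rate * error_fraction # errors per second

      print(f"predicted application error rate: {application_error_rate:.2e} errors/s")
      print(f"  i.e. roughly {application_error_rate * 86400:.2e} errors per day")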

  13. Predicting U.S. food demand in the 20th century: a new look at system dynamics

    NASA Astrophysics Data System (ADS)

    Moorthy, Mukund; Cellier, Francois E.; LaFrance, Jeffrey T.

    1998-08-01

    The paper describes a new methodology for predicting the behavior of macroeconomic variables. The approach is based on System Dynamics and Fuzzy Inductive Reasoning. A four-layer pseudo-hierarchical model is proposed. The bottom layer makes predictions about population dynamics, age distributions among the populace, as well as demographics. The second layer makes predictions about the general state of the economy, including such variables as inflation and unemployment. The third layer makes predictions about the demand for certain goods or services, such as milk products, used cars, mobile telephones, or internet services. The fourth and top layer makes predictions about the supply of such goods and services as well as their prices. Each layer can be influenced by control variables the values of which are only determined at higher levels. In this sense, the model is not strictly hierarchical. For example, the demand for goods at level three depends on the prices of these goods, which are only determined at level four. Yet, the prices are themselves influenced by the expected demand. The methodology is exemplified by means of a macroeconomic model that makes predictions about US food demand during the 20th century.

  14. A Predictive Safety Management System Software Package Based on the Continuous Hazard Tracking and Failure Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Quintana, Rolando

    2003-01-01

    The goal of this research was to integrate a previously validated and reliable safety model, called Continuous Hazard Tracking and Failure Prediction Methodology (CHTFPM), into a software application. This led to the development of a predictive safety management information system (PSMIS). This means that the theory or principles of the CHTFPM were incorporated in a software package; hence, the PSMIS is referred to as the CHTFPM management information system (CHTFPM MIS). The purpose of the PSMIS is to reduce the time and manpower required to perform predictive studies as well as to facilitate the handling of enormous quantities of information in this type of study. The CHTFPM theory encompasses the philosophy of looking at the concept of safety engineering from a new perspective: from a proactive, rather than a reactive, viewpoint. That is, corrective measures are taken before a problem occurs instead of after it has happened. That is why the CHTFPM is a predictive safety methodology: it foresees or anticipates accidents, system failures, and unacceptable risks, so corrective action can be taken in order to prevent these unwanted outcomes. Consequently, safety and reliability of systems or processes can be further improved by taking proactive and timely corrective actions.

  15. Monitoring waterbird abundance in wetlands: The importance of controlling results for variation in water depth

    USGS Publications Warehouse

    Bolduc, F.; Afton, A.D.

    2008-01-01

    Wetland use by waterbirds is highly dependent on water depth, and depth requirements generally vary among species. Furthermore, water depth within wetlands often varies greatly over time due to unpredictable hydrological events, making comparisons of waterbird abundance among wetlands difficult as effects of habitat variables and water depth are confounded. Species-specific relationships between bird abundance and water depth necessarily are non-linear; thus, we developed a methodology to correct waterbird abundance for variation in water depth, based on the non-parametric regression of these two variables. Accordingly, we used the difference between observed and predicted abundances from non-parametric regression (analogous to parametric residuals) as an estimate of bird abundance at equivalent water depths. We scaled this difference to levels of observed and predicted abundances using the formula: ((observed - predicted abundance)/(observed + predicted abundance)) × 100. This estimate also corresponds to the observed:predicted abundance ratio, which allows easy interpretation of results. We illustrated this methodology using two hypothetical species that differed in water depth and wetland preferences. Comparisons of wetlands, using both observed and relative corrected abundances, indicated that relative corrected abundance adequately separates the effect of water depth from the effect of wetlands. © 2008 Elsevier B.V.
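
    The hedged Python sketch below evaluates the index defined above on synthetic counts; a crude moving average stands in for the non-parametric regression of abundance on water depth (the smoother is an assumption, not the authors' method).

      # Hedged illustration of ((observed - predicted)/(observed + predicted)) * 100,
      # with a placeholder smoother providing the depth-predicted abundance.
      import numpy as np

      rng = np.random.default_rng(3)
      depth = np.sort(rng.uniform(0.0, 60.0, 200))                    # water depth, cm
      observed = np.exp(-((depth - 20.0) / 12.0) ** 2) * 50 + rng.poisson(2, 200)

      def moving_average(y, window=15):
          """Crude stand-in for a non-parametric regression of abundance on depth."""
          kernel = np.ones(window) / window
          return np.convolve(y, kernel, mode="same")

      predicted = moving_average(observed)
      corrected_index = (observed - predicted) / (observed + predicted) * 100.0

      print("mean corrected abundance index:", round(float(corrected_index.mean()), 2))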

  16. Models to predict length of stay in the Intensive Care Unit after coronary artery bypass grafting: a systematic review.

    PubMed

    Atashi, Alireza; Verburg, Ilona W; Karim, Hesam; Miri, Mirmohammad; Abu-Hanna, Ameen; de Jonge, Evert; de Keizer, Nicolette F; Eslami, Saeid

    2018-06-01

    Intensive Care Unit (ICU) length of stay (LoS) prediction models are used to compare different institutions and surgeons on their performance and are useful as an efficiency indicator for quality control. There is little consensus about which prediction methods are most suitable to predict ICU length of stay. The aim of this study is to systematically review models for predicting ICU LoS after coronary artery bypass grafting and to assess the reporting and methodological quality of these models to apply them for benchmarking. A general search was conducted in Medline and Embase up to 31-12-2016. Three authors classified the papers for inclusion by reading their title, abstract and full text. All original papers describing development and/or validation of a prediction model for LoS in the ICU after CABG surgery were included. We used a checklist developed for critical appraisal and data extraction for systematic reviews of prediction modeling and extended it on handling specific patient subgroups. We also defined other items and scores to assess the methodological and reporting quality of the models. Of 5181 uniquely identified articles, fifteen studies were included, of which twelve concerned development of new models and three validation of existing models. All studies used linear or logistic regression as method for model development, and reported various performance measures based on the difference between predicted and observed ICU LoS. Most used a prospective (46.6%) or retrospective (40%) study design. We found heterogeneity in patient inclusion/exclusion criteria; sample size; reported accuracy rates; and methods of candidate predictor selection. Most studies (60%) did not mention the handling of missing values, and none compared the model outcome measure between survivors and non-survivors. For model development and validation studies respectively, the maximum reporting (methodological) scores were 66/78 and 62/62 (14/22 and 12/22). There are relatively few models for predicting ICU length of stay after CABG. Several aspects of methodological and reporting quality of studies in this field should be improved. There is a need for standardizing outcome and risk factor definitions in order to develop/validate a multi-institutional and international risk scoring system.

  17. Water level management of lakes connected to regulated rivers: An integrated modeling and analytical methodology

    NASA Astrophysics Data System (ADS)

    Hu, Tengfei; Mao, Jingqiao; Pan, Shunqi; Dai, Lingquan; Zhang, Peipei; Xu, Diandian; Dai, Huichao

    2018-07-01

    Reservoir operations significantly alter the hydrological regime of the downstream river and river-connected lake, which has far-reaching impacts on the lake ecosystem. To facilitate the management of lakes connected to regulated rivers, the following information must be provided: (1) the response of lake water levels to reservoir operation schedules in the near future and (2) the importance of different rivers in terms of affecting the water levels in different lake regions of interest. We develop an integrated modeling and analytical methodology for the water level management of such lakes. The data-driven method is used to model the lake level as it has the potential of producing quick and accurate predictions. A new genetic algorithm-based synchronized search is proposed to optimize input variable time lags and data-driven model parameters simultaneously. The methodology also involves the orthogonal design and range analysis for extracting the influence of an individual river from that of all the rivers. The integrated methodology is applied to the second largest freshwater lake in China, the Dongting Lake. The results show that: (1) the antecedent lake levels are of crucial importance for the current lake level prediction; (2) the selected river discharge time lags reflect the spatial heterogeneity of the rivers' impacts on lake level changes; (3) the predicted lake levels are in very good agreement with the observed data (RMSE ≤ 0.091 m; R2 ≥ 0.9986). This study demonstrates the practical potential of the integrated methodology, which can provide both the lake level responses to future dam releases and the relative contributions of different rivers to lake level changes.
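
    A schematic Python sketch of the synchronized-search idea is given below: one genetic-algorithm chromosome encodes the time lag of each river discharge series together with a model hyper-parameter, and fitness is the prediction error of a simple data-driven model built with those choices. The synthetic data and the ridge-regression stand-in are assumptions, not the paper's lake-level model.

      # Hedged sketch: GA-style synchronized search over input time lags and a
      # model hyper-parameter (synthetic data; ridge regression as a stand-in).
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(4)

      T = 600
      river_q = rng.normal(size=(T, 3)).cumsum(axis=0)        # synthetic discharges
      lake = 0.6 * np.roll(river_q[:, 0], 2) + 0.3 * np.roll(river_q[:, 1], 5) \
             + rng.normal(0, 0.1, T)                          # synthetic lake level
      MAX_LAG = 10

      def fitness(chrom):
          """chrom = (lag_river1, lag_river2, lag_river3, log10_alpha)."""
          lags = [int(g) for g in chrom[:3]]
          alpha = 10.0 ** chrom[3]
          X = np.column_stack([np.roll(river_q[:, i], lags[i]) for i in range(3)])
          X, y = X[MAX_LAG:], lake[MAX_LAG:]                  # drop wrapped samples
          split = int(0.7 * len(y))
          model = Ridge(alpha=alpha).fit(X[:split], y[:split])
          return -mean_squared_error(y[split:], model.predict(X[split:]))

      def random_chrom():
          return np.array([rng.integers(0, MAX_LAG + 1) for _ in range(3)]
                          + [rng.uniform(-3, 2)])

      pop = [random_chrom() for _ in range(30)]
      for generation in range(40):
          ranked = sorted(pop, key=fitness, reverse=True)
          parents = ranked[:10]                               # truncation selection
          children = []
          while len(children) < 20:
              a, b = rng.choice(10, size=2, replace=False)
              cut = int(rng.integers(1, 4))                   # one-point crossover
              child = np.concatenate([parents[a][:cut], parents[b][cut:]])
              if rng.random() < 0.3:                          # mutate one gene
                  g = int(rng.integers(0, 4))
                  child[g] = random_chrom()[g]
              children.append(child)
          pop = parents + children

      best = max(pop, key=fitness)
      print("best lags:", [int(g) for g in best[:3]],
            "best alpha:", round(10.0 ** best[3], 4))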

  18. GAPIT: genome association and prediction integrated tool.

    PubMed

    Lipka, Alexander E; Tian, Feng; Wang, Qishan; Peiffer, Jason; Li, Meng; Bradbury, Peter J; Gore, Michael A; Buckler, Edward S; Zhang, Zhiwu

    2012-09-15

    Software programs that conduct genome-wide association studies and genomic prediction and selection need to use methodologies that maximize statistical power, provide high prediction accuracy and run in a computationally efficient manner. We developed an R package called Genome Association and Prediction Integrated Tool (GAPIT) that implements advanced statistical methods including the compressed mixed linear model (CMLM) and CMLM-based genomic prediction and selection. The GAPIT package can handle large datasets in excess of 10 000 individuals and 1 million single-nucleotide polymorphisms with minimal computational time, while providing user-friendly access and concise tables and graphs to interpret results. http://www.maizegenetics.net/GAPIT. zhiwu.zhang@cornell.edu Supplementary data are available at Bioinformatics online.

  19. Rolling-Bearing Service Life Based on Probable Cause for Removal: A Tutorial

    NASA Technical Reports Server (NTRS)

    Zaretsky, Erwin V.; Branzai, Emanuel V.

    2017-01-01

    In 1947 and 1952, Gustaf Lundberg and Arvid Palmgren developed what is now referred to as the Lundberg-Palmgren Model for Rolling Bearing Life Prediction based on classical rolling-element fatigue. Today, bearing fatigue probably accounts for less than 5 percent of bearings removed from service for cause. A bearing service life prediction methodology and tutorial indexed to eight probable causes for bearing removal, including fatigue, are presented, which incorporate strict series reliability; Weibull statistical analysis; available published field data from the Naval Air Rework Facility; and 224,000 rolling-element bearings removed for rework from commercial aircraft engines.

  20. Analyzing of economic growth based on electricity consumption from different sources

    NASA Astrophysics Data System (ADS)

    Maksimović, Goran; Milosavljević, Valentina; Ćirković, Bratislav; Milošević, Božidar; Jović, Srđan; Alizamir, Meysam

    2017-10-01

    Economic growth could be influenced by different factors. In this study, economic growth was analyzed based on electricity consumption from different sources. Gross domestic product (GDP) was used as the economic growth indicator. ANFIS (adaptive neuro fuzzy inference system) methodology was applied to determine the most important factors from the given set for GDP growth prediction. Six inputs were used: electricity production from coal, hydroelectric, natural gas, nuclear, oil and renewable sources. Results showed that electricity consumption from renewable sources has the highest impact on economic (GDP) growth prediction.

  1. Prediction of Protein Configurational Entropy (Popcoen).

    PubMed

    Goethe, Martin; Gleixner, Jan; Fita, Ignacio; Rubi, J Miguel

    2018-03-13

    A knowledge-based method for configurational entropy prediction of proteins is presented; this methodology is extremely fast, compared to previous approaches, because it does not involve any type of configurational sampling. Instead, the configurational entropy of a query fold is estimated by evaluating an artificial neural network, which was trained on molecular-dynamics simulations of ∼1000 proteins. The predicted entropy can be incorporated into a large class of protein software based on cost-function minimization/evaluation, in which configurational entropy is currently neglected for performance reasons. Software of this type is used for all major protein tasks such as structure prediction, protein design, NMR and X-ray refinement, docking, and mutation effect predictions. Integrating the predicted entropy can yield a significant accuracy increase, as we show exemplarily for native-state identification with the prominent protein software FoldX. The method has been termed Popcoen for Prediction of Protein Configurational Entropy. An implementation is freely available at http://fmc.ub.edu/popcoen/.

  2. A Novel Methodology for Improving Plant Pest Surveillance in Vineyards and Crops Using UAV-Based Hyperspectral and Spatial Data

    PubMed Central

    Vanegas, Fernando; Weiss, John; Gonzalez, Felipe

    2018-01-01

    Recent advances in remote sensed imagery and geospatial image processing using unmanned aerial vehicles (UAVs) have enabled the rapid and ongoing development of monitoring tools for crop management and the detection/surveillance of insect pests. This paper describes a (UAV) remote sensing-based methodology to increase the efficiency of existing surveillance practices (human inspectors and insect traps) for detecting pest infestations (e.g., grape phylloxera in vineyards). The methodology uses a UAV integrated with advanced digital hyperspectral, multispectral, and RGB sensors. We implemented the methodology for the development of a predictive model for phylloxera detection. In this method, we explore the combination of airborne RGB, multispectral, and hyperspectral imagery with ground-based data at two separate time periods and under different levels of phylloxera infestation. We describe the technology used—the sensors, the UAV, and the flight operations—the processing workflow of the datasets from each imagery type, and the methods for combining multiple airborne with ground-based datasets. Finally, we present relevant results of correlation between the different processed datasets. The objective of this research is to develop a novel methodology for collecting, processing, analysing and integrating multispectral, hyperspectral, ground and spatial data to remote sense different variables in different applications, such as, in this case, plant pest surveillance. The development of such methodology would provide researchers, agronomists, and UAV practitioners reliable data collection protocols and methods to achieve faster processing techniques and integrate multiple sources of data in diverse remote sensing applications. PMID:29342101

  3. Prediction of SOFC Performance with or without Experiments: A Study on Minimum Requirements for Experimental Data

    DOE PAGES

    Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...

    2015-06-02

    In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for overall analysis of SOFCs operational diagnostics and performance predictions. In this procedure, essential information for the fuel cell is extracted first by utilizing empirical polarization analysis in conjunction with experiments and refined by multi-physics numerical simulations via simultaneous analysis and calibration of polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a set of complete data for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess performance of planar cell without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.

  4. Modelling Behaviour of a Carbon Epoxy Composite Exposed to Fire: Part II—Comparison with Experimental Results

    PubMed Central

    Tranchard, Pauline; Samyn, Fabienne; Duquesne, Sophie; Estèbe, Bruno; Bourbigot, Serge

    2017-01-01

    Based on a phenomenological methodology, a three dimensional (3D) thermochemical model was developed to predict the temperature profile, the mass loss and the decomposition front of a carbon-reinforced epoxy composite laminate (T700/M21 composite) exposed to fire conditions. This 3D model takes into account the energy accumulation by the solid material, the anisotropic heat conduction, the thermal decomposition of the material, the gas mass flow into the composite, and the internal pressure. Thermophysical properties defined as temperature-dependent properties were characterised using existing as well as innovative methodologies in order to use them as inputs into our physical model. The 3D thermochemical model accurately predicts the measured mass loss and observed decomposition front when the carbon fibre/epoxy composite is directly impacted by a propane flame. In short, the model shows its capability to predict the fire behaviour of a carbon fibre reinforced composite for fire safety engineering. PMID:28772836

  5. A method to identify and analyze biological programs through automated reasoning

    PubMed Central

    Yordanov, Boyan; Dunn, Sara-Jane; Kugler, Hillel; Smith, Austin; Martello, Graziano; Emmott, Stephen

    2016-01-01

    Predictive biology is elusive because rigorous, data-constrained, mechanistic models of complex biological systems are difficult to derive and validate. Current approaches tend to construct and examine static interaction network models, which are descriptively rich, but often lack explanatory and predictive power, or dynamic models that can be simulated to reproduce known behavior. However, in such approaches implicit assumptions are introduced as typically only one mechanism is considered, and exhaustively investigating all scenarios is impractical using simulation. To address these limitations, we present a methodology based on automated formal reasoning, which permits the synthesis and analysis of the complete set of logical models consistent with experimental observations. We test hypotheses against all candidate models, and remove the need for simulation by characterizing and simultaneously analyzing all mechanistic explanations of observed behavior. Our methodology transforms knowledge of complex biological processes from sets of possible interactions and experimental observations to precise, predictive biological programs governing cell function. PMID:27668090

  6. A novel methodology to estimate the evolution of construction waste in construction sites.

    PubMed

    Katz, Amnon; Baum, Hadassa

    2011-02-01

    This paper focuses on the accumulation of construction waste generated throughout the erection of new residential buildings. A special methodology was developed in order to provide a model that will predict the flow of construction waste. The amount of waste and its constituents produced on 10 relatively large construction sites (7000-32,000 m² of built area) was monitored periodically for a limited time. A model that predicts the accumulation of construction waste was developed based on these field observations. According to the model, waste accumulates in an exponential manner, i.e. smaller amounts are generated during the early stages of construction and increasing amounts are generated towards the end of the project. The total amount of waste from these sites was estimated at 0.2 m³ per 1 m² of floor area. A good correlation was found between the model predictions and actual data from the field survey. Copyright © 2010 Elsevier Ltd. All rights reserved.
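
    The hedged Python sketch below fits an exponential accumulation curve of the kind described above to hypothetical site observations; the functional form, constants, and data points are assumptions for illustration only.

      # Hedged sketch: fit an exponential cumulative-waste curve to invented data.
      import numpy as np
      from scipy.optimize import curve_fit

      def waste_model(progress, a, b):
          """Cumulative waste (m^3 per m^2 of floor area) vs. progress (0-1)."""
          return a * (np.exp(b * progress) - 1.0)

      # Hypothetical observations: fraction of project completed vs. cumulative waste
      progress = np.array([0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0])
      waste = np.array([0.005, 0.015, 0.03, 0.055, 0.09, 0.14, 0.20])  # m^3 per m^2

      (a, b), _ = curve_fit(waste_model, progress, waste, p0=(0.05, 2.0))
      print(f"fitted model: W(p) = {a:.3f} * (exp({b:.2f} * p) - 1)")
      print("predicted total at completion:", round(waste_model(1.0, a, b), 3), "m^3/m^2")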

  7. Modelling Behaviour of a Carbon Epoxy Composite Exposed to Fire: Part II-Comparison with Experimental Results.

    PubMed

    Tranchard, Pauline; Samyn, Fabienne; Duquesne, Sophie; Estèbe, Bruno; Bourbigot, Serge

    2017-04-28

    Based on a phenomenological methodology, a three dimensional (3D) thermochemical model was developed to predict the temperature profile, the mass loss and the decomposition front of a carbon-reinforced epoxy composite laminate (T700/M21 composite) exposed to fire conditions. This 3D model takes into account the energy accumulation by the solid material, the anisotropic heat conduction, the thermal decomposition of the material, the gas mass flow into the composite, and the internal pressure. Thermophysical properties defined as temperature dependant properties were characterised using existing as well as innovative methodologies in order to use them as inputs into our physical model. The 3D thermochemical model accurately predicts the measured mass loss and observed decomposition front when the carbon fibre/epoxy composite is directly impacted by a propane flame. In short, the model shows its capability to predict the fire behaviour of a carbon fibre reinforced composite for fire safety engineering.

  8. Planetary Transmission Diagnostics

    NASA Technical Reports Server (NTRS)

    Lewicki, David G. (Technical Monitor); Samuel, Paul D.; Conroy, Joseph K.; Pines, Darryll J.

    2004-01-01

    This report presents a methodology for detecting and diagnosing gear faults in the planetary stage of a helicopter transmission. This diagnostic technique is based on the constrained adaptive lifting algorithm. The lifting scheme, developed by Wim Sweldens of Bell Labs, is a time domain, prediction-error realization of the wavelet transform that allows for greater flexibility in the construction of wavelet bases. Classic lifting analyzes a given signal using wavelets derived from a single fundamental basis function. A number of researchers have proposed techniques for adding adaptivity to the lifting scheme, allowing the transform to choose from a set of fundamental bases the basis that best fits the signal. This characteristic is desirable for gear diagnostics as it allows the technique to tailor itself to a specific transmission by selecting a set of wavelets that best represent vibration signals obtained while the gearbox is operating under healthy-state conditions. However, constraints on certain basis characteristics are necessary to enhance the detection of local wave-form changes caused by certain types of gear damage. The proposed methodology analyzes individual tooth-mesh waveforms from a healthy-state gearbox vibration signal that was generated using the vibration separation (synchronous signal-averaging) algorithm. Each waveform is separated into analysis domains using zeros of its slope and curvature. The bases selected in each analysis domain are chosen to minimize the prediction error, and constrained to have the same-sign local slope and curvature as the original signal. The resulting set of bases is used to analyze future-state vibration signals and the lifting prediction error is inspected. The constraints allow the transform to effectively adapt to global amplitude changes, yielding small prediction errors. However, local wave-form changes associated with certain types of gear damage are poorly adapted, causing a significant change in the prediction error. The constrained adaptive lifting diagnostic algorithm is validated using data collected from the University of Maryland Transmission Test Rig and the results are discussed.
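
    To make the prediction-error idea concrete, the hedged Python sketch below applies one plain (non-adaptive, unconstrained) linear-prediction lifting step to a healthy waveform and to a locally distorted copy, and compares the resulting prediction errors; it is a simplified stand-in, not the constrained adaptive lifting algorithm described in the report.

      # Hedged sketch: a single linear-prediction lifting step and its prediction
      # error, shown for a "healthy" waveform and one with a local distortion.
      import numpy as np

      def lifting_step(signal):
          """Split into even/odd samples, predict odd samples from even neighbours,
          return (approximation, prediction error a.k.a. detail coefficients)."""
          x = np.asarray(signal, dtype=float)
          even, odd = x[0::2], x[1::2]
          right = np.append(even[1:], even[-1])          # replicate the last sample
          detail = odd - 0.5 * (even + right)            # predict step
          left = np.insert(detail[:-1], 0, detail[0])
          approx = even + 0.25 * (left + detail)         # update step
          return approx, detail

      t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      healthy = np.sin(8 * t)                            # tooth-mesh-like waveform
      damaged = healthy.copy()
      damaged[100:110] += 0.8 * np.hanning(10)           # local wave-form change

      _, d_healthy = lifting_step(healthy)
      _, d_damaged = lifting_step(damaged)
      print("healthy max |prediction error|:", round(float(np.abs(d_healthy).max()), 3))
      print("damaged max |prediction error|:", round(float(np.abs(d_damaged).max()), 3))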

  9. 1RM prediction: a novel methodology based on the force-velocity and load-velocity relationships.

    PubMed

    Picerno, Pietro; Iannetta, Danilo; Comotto, Stefania; Donati, Marco; Pecoraro, Fabrizio; Zok, Mounir; Tollis, Giorgio; Figura, Marco; Varalda, Carlo; Di Muzio, Davide; Patrizio, Federica; Piacentini, Maria Francesca

    2016-10-01

    This study aimed to evaluate the accuracy of a novel approach for predicting the one-repetition maximum (1RM). The prediction is based on the force-velocity and load-velocity relationships determined from measured force and velocity data collected during resistance-training exercises with incremental submaximal loads. 1RM was determined as the load corresponding to the intersection of these two curves, where the gravitational force exceeds the force that the subject can exert. The proposed force-velocity-based method (FVM) was tested on 37 participants (23.9 ± 3.1 year; BMI 23.44 ± 2.45) with no specific resistance-training experience, and the predicted 1RM was compared to that achieved using a direct method (DM) in chest-press (CP) and leg-press (LP) exercises. The mean 1RM in CP was 99.5 kg (±27.0) for DM and 100.8 kg (±27.2) for FVM (SEE = 1.2 kg), whereas the mean 1RM in LP was 249.3 kg (±60.2) for DM and 251.1 kg (±60.3) for FVM (SEE = 2.1 kg). A high correlation was found between the two methods for both CP and LP exercises (0.999, p < 0.001). Good agreement between the two methods emerged from the Bland and Altman plot analysis. These findings suggest the use of the proposed methodology as a valid alternative to other indirect approaches for 1RM prediction. The mathematical construct is simply based on the definition of the 1RM, and it is fed with subject's muscle strength capacities measured during a specific exercise. Its reliability is, thus, expected to be not affected by those factors that typically jeopardize regression-based approaches.
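
    The hedged Python sketch below reproduces the construct with synthetic numbers: linear load-velocity and force-velocity relationships are fitted to submaximal trials, and the 1RM is taken at the velocity where the gravitational force on the load meets the force the subject can exert; the trial values are invented, not the study's data.

      # Hedged sketch of the force-velocity / load-velocity intersection (synthetic data).
      import numpy as np

      g = 9.81
      # Hypothetical incremental submaximal trials: load (kg), mean velocity (m/s),
      # mean force exerted (N).
      load = np.array([44.0, 59.0, 74.0, 89.0])
      velocity = np.array([1.10, 0.85, 0.60, 0.35])
      force = np.array([565.0, 678.0, 790.0, 902.0])

      a_lv, b_lv = np.polyfit(velocity, load, 1)    # load(v)  = a_lv * v + b_lv
      a_fv, b_fv = np.polyfit(velocity, force, 1)   # force(v) = a_fv * v + b_fv

      # Intersection: force(v*) = g * load(v*)  ->  solve the linear equation for v*
      v_star = (g * b_lv - b_fv) / (a_fv - g * a_lv)
      one_rm = a_lv * v_star + b_lv

      print(f"velocity at 1RM: {v_star:.3f} m/s, predicted 1RM: {one_rm:.1f} kg")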

  10. Prediction of physical workload in reduced gravity environments

    NASA Technical Reports Server (NTRS)

    Goldberg, Joseph H.

    1987-01-01

    The background, development, and application of a methodology to predict human energy expenditure and physical workload in low gravity environments, such as a Lunar or Martian base, is described. Based on a validated model to predict energy expenditures in Earth-based industrial jobs, the model relies on an elemental analysis of the proposed job. Because the job itself need not physically exist, many alternative job designs may be compared in their physical workload. The feasibility of using the model for prediction of low gravity work was evaluated by lowering body and load weights, while maintaining basal energy expenditure. Comparison of model results was made both with simulated low gravity energy expenditure studies and with reported Apollo 14 Lunar EVA expenditure. Prediction accuracy was very good for walking and for cart pulling on slopes less than 15 deg, but the model underpredicted the most difficult work conditions. This model was applied to example core sampling and facility construction jobs, as presently conceptualized for a Lunar or Martian base. Resultant energy expenditures and suggested work-rest cycles were well within the range of moderate work difficulty. Future model development requirements were also discussed.

  11. Reliability modelling and analysis of thermal MEMS

    NASA Astrophysics Data System (ADS)

    Muratet, Sylvaine; Lavu, Srikanth; Fourniols, Jean-Yves; Bell, George; Desmulliez, Marc P. Y.

    2006-04-01

    This paper presents a MEMS reliability study methodology based on the novel concept of 'virtual prototyping'. This methodology can be used for the development of reliable sensors or actuators and also to characterize their behaviour in specific use conditions and applications. The methodology is demonstrated on the U-shaped micro electro thermal actuator used as a test vehicle. To demonstrate this approach, a 'virtual prototype' has been developed with the modeling tools MatLab and VHDL-AMS. A best practice FMEA (Failure Mode and Effect Analysis) is applied on the thermal MEMS to investigate and assess the failure mechanisms. The reliability study is performed by injecting the identified faults into the 'virtual prototype'. The reliability characterization methodology predicts the evolution of the behavior of these MEMS as a function of the number of cycles of operation and specific operational conditions.

  12. Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels

    PubMed Central

    2015-01-01

    A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cells temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262

  13. Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.

    PubMed

    Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel

    2015-01-01

    A new, simple, quick-calculation methodology for obtaining a solar panel model from the manufacturer's datasheet, for use in MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar cell temperature) and allows fast comparison of MPPT methods or prediction of their performance when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
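
    A compact sketch of the general workflow, under stated assumptions: a simplified explicit I-V model parameterized only from typical datasheet points (not necessarily the model used by the authors), driven by a basic perturb-and-observe MPPT loop; the datasheet numbers are illustrative.

```python
import numpy as np

# Illustrative datasheet values at STC (not tied to any particular panel).
I_SC, V_OC = 8.21, 32.9      # short-circuit current (A), open-circuit voltage (V)
I_MP, V_MP = 7.61, 26.3      # current and voltage at the maximum power point

# One common simplified explicit I-V model built only from the datasheet points.
C2 = (V_MP / V_OC - 1.0) / np.log(1.0 - I_MP / I_SC)
C1 = (1.0 - I_MP / I_SC) * np.exp(-V_MP / (C2 * V_OC))

def panel_current(v):
    """Approximate panel current for a given voltage at STC."""
    return I_SC * (1.0 - C1 * (np.exp(v / (C2 * V_OC)) - 1.0))

def perturb_and_observe(v0=20.0, step=0.2, n_iter=200):
    """Basic P&O MPPT: keep perturbing in the direction that increases power."""
    v, direction = v0, +1.0
    p_prev = v * panel_current(v)
    for _ in range(n_iter):
        v += direction * step
        p = v * panel_current(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mppt, p_mppt = perturb_and_observe()
print(f"P&O settles near V = {v_mppt:.1f} V, P = {p_mppt:.1f} W "
      f"(datasheet MPP: {V_MP} V, {V_MP * I_MP:.1f} W)")
```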

  14. A Progressive Damage Methodology for Residual Strength Predictions of Notched Composite Panels

    NASA Technical Reports Server (NTRS)

    Coats, Timothy W.; Harris, Charles E.

    1998-01-01

    The translaminate fracture behavior of carbon/epoxy structural laminates with through-penetration notches was investigated to develop a residual strength prediction methodology for composite structures. An experimental characterization of several composite materials systems revealed a fracture resistance behavior that was very similar to the R-curve behavior exhibited by ductile metals. Fractographic examinations led to the postulate that the damage growth resistance was primarily due to fractured fibers in the principal load-carrying plies being bridged by intact fibers of the adjacent plies. The load transfer associated with this bridging mechanism suggests that a progressive damage analysis methodology will be appropriate for predicting the residual strength of laminates with through-penetration notches. A progressive damage methodology developed by the authors was used to predict the initiation and growth of matrix cracks and fiber fracture. Most of the residual strength predictions for different panel widths, notch lengths, and material systems were within about 10% of the experimental failure loads.

  15. A Microstructure-Based Time-Dependent Crack Growth Model for Life and Reliability Prediction of Turbopropulsion Systems

    NASA Astrophysics Data System (ADS)

    Chan, Kwai S.; Enright, Michael P.; Moody, Jonathan; Fitch, Simeon H. K.

    2014-01-01

    The objective of this investigation was to develop an innovative methodology for life and reliability prediction of hot-section components in advanced turbopropulsion systems. A set of generic microstructure-based time-dependent crack growth (TDCG) models was developed and used to assess the sources of material variability due to microstructure and material parameters such as grain size, activation energy, and crack growth threshold for TDCG. A comparison of model predictions and experimental data obtained in air and in vacuum suggests that oxidation is responsible for higher crack growth rates at high temperatures, low frequencies, and long dwell times, but oxidation can also induce higher crack growth thresholds (ΔKth or Kth) under certain conditions. Using the enhanced risk analysis tool and material constants calibrated to IN 718 data, the effect of TDCG on the risk of fracture in turboengine components was demonstrated for a generic rotor design and a realistic mission profile using the DARWIN® probabilistic life-prediction code. The results of this investigation confirmed that TDCG and cycle-dependent crack growth in IN 718 can be treated by a simple summation of the crack increments over a mission. For the temperatures considered, TDCG in IN 718 can be considered as a K-controlled or a diffusion-controlled oxidation-induced degradation process. This methodology provides a pathway for evaluating microstructural effects on multiple damage modes in hot-section components.
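
    The "simple summation of the crack increments over a mission" can be illustrated schematically as below; the Paris-type cycle-dependent law, the K-controlled time-dependent term, the constants, and the mission profile are all placeholders rather than the calibrated IN 718 models.

```python
import numpy as np

# Placeholder crack-growth laws (illustrative constants, not calibrated IN 718 data).
C_CYCLE, N_CYCLE = 1.0e-11, 3.2     # cycle-dependent: da/dN = C * (dK)^n   [m/cycle]
C_TIME,  N_TIME  = 1.0e-10, 2.0     # time-dependent:  da/dt = A * (Kmax)^m [m/s]
DK_TH = 8.0                          # threshold stress-intensity range, MPa*sqrt(m)

def delta_k(a, d_sigma, y=1.12):
    """Stress-intensity range for a crack of depth a (simple Y*dsigma*sqrt(pi*a))."""
    return y * d_sigma * np.sqrt(np.pi * a)

# A toy mission: (number of cycles, stress range MPa, dwell time per cycle s).
mission = [(500, 400.0, 0.0),        # low-temperature climb segment, no dwell
           (200, 450.0, 60.0),       # hot segment with a 60 s dwell at max load
           (500, 400.0, 0.0)]        # descent segment

a = 0.5e-3                           # initial crack depth, m
for cycles, d_sigma, dwell in mission:
    for _ in range(cycles):
        dk = delta_k(a, d_sigma)
        if dk > DK_TH:
            a += C_CYCLE * dk ** N_CYCLE          # cycle-dependent increment
        k_max = dk                                 # R = 0 assumed, so Kmax = dK
        a += C_TIME * k_max ** N_TIME * dwell      # time-dependent (dwell) increment

print(f"crack depth after one mission: {a * 1e3:.3f} mm")
```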

  16. Effects of inductive bias on computational evaluations of ligand-based modeling and on drug discovery

    NASA Astrophysics Data System (ADS)

    Cleves, Ann E.; Jain, Ajay N.

    2008-03-01

    Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.

  17. A study of pressure-based methodology for resonant flows in non-linear combustion instabilities

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Pindera, M. Z.; Przekwas, A. J.; Tucker, K.

    1992-01-01

    This paper presents a systematic assessment of a large variety of spatial and temporal differencing schemes on nonstaggered grids using pressure-based methods for problems of fast transient flows. The observation from the present study is that, for steady-state flow problems, pressure-based methods can be very competitive with density-based methods. For transient flow problems, pressure-based methods utilizing the same differencing scheme are less accurate, even though the wave speeds are correctly predicted.

  18. Aerodynamic characteristics of cruciform missiles at high angles of attack

    NASA Technical Reports Server (NTRS)

    Lesieutre, Daniel J.; Mendenhall, Michael R.; Nazario, Susana M.; Hemsch, Michael J.

    1987-01-01

    A prediction method for missile aerodynamic performance and preliminary design has been developed that utilizes a newly available systematic fin data base and an improved equivalent-angle-of-attack methodology. The method predicts total aerodynamic loads and individual fin forces and moments for body-tail (wing-body) and canard-body-tail configurations with cruciform fin arrangements. The data base and the prediction method are valid for angles of attack up to 45 deg, arbitrary roll angles, fin deflection angles between -40 deg and 40 deg, Mach numbers between 0.6 and 4.5, and fin aspect ratios between 0.25 and 4.0. The equivalent angle of attack concept is employed to include the effects of vorticity and geometric scaling.

  19. Shuttle payload bay dynamic environments: Summary and conclusion report for STS flights 1-5 and 9

    NASA Technical Reports Server (NTRS)

    Oconnell, M.; Garba, J.; Kern, D.

    1984-01-01

    The vibration, acoustic, and low frequency loads data from the first five shuttle flights are presented, together with the engineering analysis of those data. Vibroacoustic data from STS-9 are also presented because they represent the only data taken on a large payload. Payload dynamic environment predictions developed with the participation of various NASA and industry centers are presented, along with a comparison of analytical loads methodology predictions with flight data, including a brief description of the methodologies employed in developing those predictions for payloads. The review of prediction methodologies illustrates how different centers have approached the problem of developing shuttle dynamic environmental predictions and criteria. Ongoing research activities related to the shuttle dynamic environments are also described, as is analytical software recently developed for the prediction of payload acoustic and vibration environments.

  20. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    NASA Astrophysics Data System (ADS)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA and nonlinear ones such as tools based on statistical learning theory have been used extensively. The difficulty common to all methods is determining the sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series combining the wavelet transform and a nonlinear model. The present model exploits the merits of both the wavelet transform and nonlinear time series modeling. The wavelet transform is adopted to decompose a nonlinear hydrologic process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated in a manner intended to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
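
    A rough sketch of the hybrid idea, assuming the PyWavelets (pywt) package: the series is split into approximately mono-component signals by a multilevel DWT, each component is forecast with a simple least-squares AR model standing in for the nonlinear model of the paper, and the component forecasts are summed.

```python
import numpy as np
import pywt  # PyWavelets

def component_signals(x, wavelet="db4", level=3):
    """Split x into approximately mono-component signals via a multilevel DWT,
    reconstructing each level separately (zeroing the others)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps  # their sum reconstructs x (up to boundary effects)

def ar_forecast(x, order=6, horizon=12):
    """Least-squares AR(order) fit and recursive multi-step forecast."""
    X = np.column_stack([x[i: len(x) - order + i] for i in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)
    history = list(x[-order:])
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(coef[:-1], history[-order:]) + coef[-1])
        out.append(nxt)
        history.append(nxt)
    return np.array(out)

# Synthetic "streamflow-like" monthly series: seasonal cycle + slow trend + noise.
t = np.arange(360)
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + 0.05 * t + np.random.normal(0, 3, t.size)

# Hybrid forecast: decompose, forecast each component, then recombine.
forecast = sum(ar_forecast(c) for c in component_signals(flow))
print(np.round(forecast, 1))
```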

  1. Finding clusters of similar events within clinical incident reports: a novel methodology combining case based reasoning and information retrieval

    PubMed Central

    Tsatsoulis, C; Amthauer, H

    2003-01-01

    A novel methodological approach for identifying clusters of similar medical incidents by analyzing large databases of incident reports is described. The discovery of similar events allows the identification of patterns and trends, and makes possible the prediction of future events and the establishment of barriers and best practices. Two techniques from the fields of information science and artificial intelligence have been integrated—namely, case based reasoning and information retrieval—and very good clustering accuracies have been achieved on a test data set of incident reports from transfusion medicine. This work suggests that clustering should integrate the features of an incident captured in traditional form based records together with the detailed information found in the narrative included in event reports. PMID:14645892
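
    A toy sketch of one way to combine narrative text with structured form-based fields when clustering incident reports; the TF-IDF/one-hot feature construction, the 0.7/0.3 weighting, and the example reports are illustrative choices, not the paper's case-based reasoning system.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Toy incident reports: free-text narrative plus a structured form-based field.
narratives = [
    "wrong unit issued, patient identification band not checked at bedside",
    "ABO mismatch near-miss, identification check missed before issue",
    "blood bag stored outside fridge beyond allowed time, unit discarded",
    "unit left at room temperature too long during transport, wasted",
]
form_category = ["identification", "identification", "storage", "storage"]

# Narrative similarity via TF-IDF; the structured field via a manual one-hot encoding.
text_vecs = TfidfVectorizer(stop_words="english").fit_transform(narratives).toarray()
categories = sorted(set(form_category))
form_vecs = np.array([[1.0 if c == cat else 0.0 for c in categories]
                      for cat in form_category])

# Combine both views into one feature vector per incident (the weights are a choice).
features = np.hstack([0.7 * text_vecs, 0.3 * form_vecs])

labels = AgglomerativeClustering(n_clusters=2).fit_predict(features)
print(labels)   # incidents 0/1 and 2/3 should end up in the same clusters
```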

  2. Remaining dischargeable time prediction for lithium-ion batteries using unscented Kalman filter

    NASA Astrophysics Data System (ADS)

    Dong, Guangzhong; Wei, Jingwen; Chen, Zonghai; Sun, Han; Yu, Xiaowei

    2017-10-01

    To overcome range anxiety, one important strategy is to accurately predict the range or dischargeable time of the battery system. To accurately predict the remaining dischargeable time (RDT) of a battery, an RDT prediction framework based on accurate battery modeling and state estimation is presented in this paper. Firstly, a simplified linearized equivalent-circuit model is developed to simulate the dynamic characteristics of a battery. Then, an online recursive least-squares method and an unscented Kalman filter are employed to estimate the system matrices and the state of charge (SOC) at every prediction point. In addition, a discrete wavelet transform technique is employed to capture the statistical information of the past dynamics of the input currents, which is utilized to predict the future battery currents. Finally, the RDT can be predicted based on the battery model, the SOC estimation results, and the predicted future battery currents. The performance of the proposed methodology has been verified on a lithium-ion battery cell. Experimental results indicate that the proposed method provides accurate SOC and parameter estimation and that the predicted RDT can help address range anxiety issues.
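
    Stripped of the equivalent-circuit model and the UKF, the final RDT bookkeeping reduces to remaining usable charge divided by the predicted load current; the sketch below illustrates that step only, with invented numbers and a plain moving average standing in for the wavelet-based current prediction.

```python
import numpy as np

# Assumed estimator outputs at the prediction instant (illustrative values,
# standing in for the model-based SOC and parameter estimates in the paper).
soc_estimate = 0.62        # state of charge from the state estimator, fraction
capacity_ah  = 2.6         # usable cell capacity, Ah
soc_cutoff   = 0.05        # SOC at which the cell is considered discharged

# Recent current history (A, discharge positive), e.g. a drive-cycle segment.
recent_current = np.array([1.8, 2.4, 2.1, 3.0, 2.6, 2.2, 2.8, 2.5])

# Very simple stand-in for the wavelet-based statistical step in the paper:
# predict the future load as the mean of the recent discharge current.
predicted_current = recent_current.mean()

# Remaining dischargeable time = remaining usable charge / predicted current.
remaining_ah = (soc_estimate - soc_cutoff) * capacity_ah
rdt_hours = remaining_ah / predicted_current
print(f"predicted RDT ~ {rdt_hours * 60:.0f} minutes at {predicted_current:.2f} A")
```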

  3. Computational biology for cardiovascular biomarker discovery.

    PubMed

    Azuaje, Francisco; Devaux, Yvan; Wagner, Daniel

    2009-07-01

    Computational biology is essential in the process of translating biological knowledge into clinical practice, as well as in the understanding of biological phenomena based on the resources and technologies originating from the clinical environment. One such key contribution of computational biology is the discovery of biomarkers for predicting clinical outcomes using 'omic' information. This process involves the predictive modelling and integration of different types of data and knowledge for screening, diagnostic or prognostic purposes. Moreover, this requires the design and combination of different methodologies based on statistical analysis and machine learning. This article introduces key computational approaches and applications to biomarker discovery based on different types of 'omic' data. Although we emphasize applications in cardiovascular research, the computational requirements and advances discussed here are also relevant to other domains. We will start by introducing some of the contributions of computational biology to translational research, followed by an overview of methods and technologies used for the identification of biomarkers with predictive or classification value. The main types of 'omic' approaches to biomarker discovery will be presented with specific examples from cardiovascular research. This will include a review of computational methodologies for single-source and integrative data applications. Major computational methods for model evaluation will be described together with recommendations for reporting models and results. We will present recent advances in cardiovascular biomarker discovery based on the combination of gene expression and functional network analyses. The review will conclude with a discussion of key challenges for computational biology, including perspectives from the biosciences and clinical areas.

  4. Why Sex Based Language Differences are Elusive.

    ERIC Educational Resources Information Center

    Tyler, Mary

    Paradoxically, linguists' speculations about sex differences in language use are highly plausible and yet have received little empirical support from well controlled studies. An experiment was designed to correct a flaw in earlier methodologies by sampling precisely the kinds of situations in which predicted differences (e.g., swearing,…

  5. Predicting Entrepreneurial Motivation among University Students: The Role of Entrepreneurship Education

    ERIC Educational Resources Information Center

    Farhangmehr, Minoo; Gonçalves, Paulo; Sarmento, Maria

    2016-01-01

    Purpose: The purpose of this paper is to better understand the main drivers of entrepreneurial motivation among university students and to determine whether entrepreneurship education has a moderating effect on improving the impact of knowledge base and entrepreneurship competencies on entrepreneurial motivation. Design/methodology/approach: This…

  6. The weather roulette: assessing the economic value of seasonal wind speed predictions

    NASA Astrophysics Data System (ADS)

    Christel, Isadora; Cortesi, Nicola; Torralba-Fernandez, Veronica; Soret, Albert; Gonzalez-Reviriego, Nube; Doblas-Reyes, Francisco

    2016-04-01

    Climate prediction is an emerging and highly innovative research area. For the wind energy sector, predicting the future variability of wind resources over the coming weeks or seasons is especially relevant for quantifying operation and maintenance logistic costs or for informing energy trading decisions, with potential cost savings and/or economic benefits. Recent advances in climate prediction have already shown that probabilistic forecasting can improve on current prediction practices, which are based on the use of retrospective climatology and the assumption that what happened in the past is the best estimate of future conditions. Energy decision makers now have access to this new set of climate services, but are they willing to use them? Our aim is to explain the potential economic benefits of adopting probabilistic predictions, compared with current practice, by using the weather roulette methodology (Hagedorn & Smith, 2009). This methodology is a diagnostic tool created to communicate the skill and usefulness of a forecast in the decision-making process in a more intuitive and relevant way, by providing an economically and financially oriented assessment of the benefits of using a particular forecast system. We have selected a region relevant to energy stakeholders where the predictions of the EUPORIAS climate service prototype for the energy sector (RESILIENCE) are skillful. In this region, we have applied the weather roulette to compare the overall prediction success of RESILIENCE's predictions and of climatology, expressing it as an effective interest rate, an economic term that is easier for energy stakeholders to understand.
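
    A rough sketch of the weather-roulette bookkeeping as we understand it from Hagedorn & Smith (2009): for each forecast, all capital is spread across the outcome categories in proportion to the forecast probabilities, payouts are set by climatological odds, and the average (geometric) growth per forecast is quoted as an effective interest rate; the probabilities and outcomes below are invented.

```python
import numpy as np

# Tercile categories for seasonal wind speed: below / near / above normal.
p_clim = np.array([1/3, 1/3, 1/3])          # climatological odds-setting probabilities

# Invented probabilistic forecasts (rows) and the category that actually occurred.
p_forecast = np.array([[0.20, 0.30, 0.50],
                       [0.55, 0.30, 0.15],
                       [0.25, 0.45, 0.30],
                       [0.60, 0.25, 0.15]])
outcomes = np.array([2, 0, 1, 0])

# Each round, all capital is spread across categories in proportion to the
# forecast; the payout per unit stake on category k is 1 / p_clim[k].
growth = p_forecast[np.arange(len(outcomes)), outcomes] / p_clim[outcomes]

# Effective interest rate = average (geometric) capital growth per forecast.
effective_rate = np.exp(np.mean(np.log(growth))) - 1.0
print(f"effective interest rate ~ {effective_rate * 100:.1f}% per forecast")
```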

  7. Development of Multi-physics (Multiphase CFD + MCNP) simulation for generic solution vessel power calculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Seung Jun; Buechler, Cynthia Eileen

    The current study aims to predict the steady state power of a generic solution vessel and to develop a corresponding heat transfer coefficient correlation for a Moly99 production facility by conducting a fully coupled multi-physics simulation. A prediction of steady state power for the current application is inherently interconnected between the thermal hydraulic characteristics (i.e. multiphase computational fluid dynamics solved by ANSYS-Fluent 17.2) and the corresponding neutronic behavior (i.e. particle transport solved by MCNP6.2) in the solution vessel. Thus, the development of a coupling methodology is vital to understanding the system behavior for a variety of system designs and postulated operating scenarios. In this study, we report on the k-effective (keff) calculation for the baseline solution vessel configuration with a selected solution concentration using MCNP K-code modeling. The associated correlations of thermal properties (e.g. density, viscosity, thermal conductivity, specific heat) at the selected solution concentration are developed based on existing experimental measurements in the open literature. The numerical coupling methodology between multiphase CFD and MCNP is successfully demonstrated, and the detailed coupling procedure is documented. In addition, improved coupling methods capturing realistic physics in the solution vessel thermal-neutronic dynamics are proposed and tested further (i.e. dynamic height adjustment, mull-cell approach). As a key outcome of the current study, a multi-physics coupling methodology between MCFD and MCNP is demonstrated and tested for four different operating conditions. These operating conditions are determined based on the neutron source strength at a fixed geometry condition. The steady state powers for the generic solution vessel at various operating conditions are reported, and a generalized correlation of the heat transfer coefficient for the current application is discussed. The assessment of the multi-physics methodology and the preliminary results from the various coupled calculations (power prediction and heat transfer coefficient) can be further utilized for system code validation and generic solution vessel design improvement.
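
    The coupling procedure itself can be pictured as an under-relaxed fixed-point (Picard) iteration between the two solvers. In the sketch below, run_cfd() and run_mcnp() are hypothetical stand-ins for the scripted external codes, with toy physics, and the loop is repeated for several source strengths in the spirit of the four operating conditions mentioned above.

```python
# Schematic fixed-point (Picard) coupling between a thermal-hydraulics solver and
# a neutronics solver. run_cfd() and run_mcnp() are hypothetical placeholders for
# the external codes (e.g. scripted ANSYS-Fluent and MCNP runs), not real APIs.

def run_cfd(power_w):
    """Return a lumped solution temperature (K) for a given fission power (toy model)."""
    return 300.0 + 0.05 * power_w               # stand-in for the multiphase CFD solve

def run_mcnp(temperature_k, source_strength):
    """Return fission power (W) for the current solution state (toy model)."""
    reactivity_feedback = 1.0 - 2.0e-4 * (temperature_k - 300.0)
    return source_strength * 1.0e3 * max(reactivity_feedback, 0.0)

def couple(source_strength, relax=0.5, tol=1.0, max_iter=50):
    power = 1.0e3                               # initial guess, W
    for it in range(max_iter):
        temperature = run_cfd(power)            # pass power -> thermal field
        power_new = run_mcnp(temperature, source_strength)   # pass T -> neutronics
        if abs(power_new - power) < tol:
            return power_new, temperature, it
        power += relax * (power_new - power)    # under-relaxed Picard update
    return power, temperature, max_iter

for strength in (1.0, 2.0, 4.0, 8.0):           # four notional operating conditions
    p, t, iters = couple(strength)
    print(f"source {strength}: power = {p:7.1f} W, T = {t:6.1f} K ({iters} iterations)")
```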

  8. Crack Growth Simulation and Residual Strength Prediction in Airplane Fuselages

    NASA Technical Reports Server (NTRS)

    Chen, Chuin-Shan; Wawrzynek, Paul A.; Ingraffea, Anthony R.

    1999-01-01

    This is the final report for the NASA-funded project entitled "Crack Growth Prediction Methodology for Multi-Site Damage." The primary objective of the project was to create a capability to simulate curvilinear fatigue crack growth and ductile tearing in aircraft fuselages subjected to widespread fatigue damage. The second objective was to validate the capability by way of comparisons to experimental results. Both objectives have been achieved and the results are detailed herein. In the first part of the report, the crack tip opening angle (CTOA) fracture criterion, obtained and correlated from coupon tests to predict fracture behavior and residual strength of built-up aircraft fuselages, is discussed. Geometrically nonlinear, elastic-plastic, thin shell finite element analyses are used to simulate stable crack growth and to predict residual strength. Both measured and predicted results of laboratory flat panel tests and full-scale fuselage panel tests show a substantial reduction of residual strength due to the occurrence of multi-site damage (MSD). Detailed comparisons of the stable crack growth histories and residual strengths between the predicted and experimental results are used to assess the validity of the analysis methodology. In the second part of the report, issues related to crack trajectory prediction in thin shells are discussed; an evolving methodology uses the crack turning phenomenon to improve the structural integrity of aircraft structures. A directional criterion is developed based on the maximum tangential stress theory, taking into account the effects of T-stress and fracture toughness orthotropy. Possible extensions of the current crack growth directional criterion to handle geometrically and materially nonlinear problems are discussed. The path-independent contour integral method for T-stress evaluation is derived and its accuracy is assessed using a p- and hp-version adaptive finite element method. Curvilinear crack growth is simulated in coupon tests and in full-scale fuselage panel tests. Both T-stress and fracture toughness orthotropy are found to be essential to predict the observed crack paths. The analysis methodology and software program (FRANC3D/STAGS) developed herein allow engineers to maintain aging aircraft economically while ensuring continued airworthiness. Consequently, they will improve the technology to support the safe operation of the current aircraft fleet as well as the design of more damage-tolerant aircraft for the next generation fleet.
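
    For reference, the classical maximum tangential stress kink-angle calculation, before the T-stress and toughness-orthotropy corrections developed in the report, looks as follows; the numerical example is illustrative.

```python
import numpy as np

def mts_kink_angle(k_i, k_ii):
    """Crack-kink angle (radians) from the classical maximum tangential stress
    criterion, K_I*sin(theta) + K_II*(3*cos(theta) - 1) = 0.  The T-stress and
    fracture-toughness-orthotropy corrections of the report are not included."""
    if k_ii == 0.0:
        return 0.0                      # pure mode I: the crack grows straight ahead
    m = k_ii / k_i
    # Half-angle substitution gives tan(theta/2) = (1 - sqrt(1 + 8 m^2)) / (4 m).
    return 2.0 * np.arctan((1.0 - np.sqrt(1.0 + 8.0 * m * m)) / (4.0 * m))

# Example: mixed-mode loading with K_II equal to 30% of K_I (units cancel).
theta = mts_kink_angle(k_i=30.0, k_ii=9.0)
print(f"predicted kink angle: {np.degrees(theta):.1f} degrees")
```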

  9. Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peter, Josephine; Doloi, B.; Bhattacharyya, B.

    The present research paper deals with artificial neural network (ANN)- and response surface methodology (RSM)-based mathematical modeling, together with an optimization analysis, of laser marking characteristics on alumina ceramic. The experiments have been planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The RSM optimal output is validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.

  10. Knowing what to sell, when, and to whom.

    PubMed

    Kumar, V; Venkatesan, Rajkumar; Reinartz, Werner

    2006-03-01

    Despite an abundance of data, most companies do a poor job of predicting the behavior of their customers. In fact, the authors' research suggests that even companies that take the greatest trouble over their predictions about whether a particular customer will buy a particular product are correct only around 55% of the time--a result that hardly justifies the costs of having a CRM system in the first place. Businesses usually conclude from studies like this that it's impossible to use the past to predict the future, so they revert to the timeworn marketing practice of inundating their customers with offers. But as the authors explain, the reason for the poor predictions is not any basic limitation of CRM systems or of the predictive power of past behavior, but rather the mathematical methods that companies use to interpret the data. The authors have developed a new way of predicting customer behavior, based on the work of the Nobel Prize-winning economist Daniel McFadden, that delivers vastly improved results. Indeed, the methodology increases the odds of successfully predicting a specific purchase by a specific customer at a specific time to about 85%, a number that will have a major impact on any company's marketing ROI. What's more, using this methodology, companies can increase revenues while reducing their frequency of customer contact--evidence that overcommunication with customers may actually damage a company's sales.

  11. Criticality Characteristics of Current Oil Price Dynamics

    NASA Astrophysics Data System (ADS)

    Drożdż, S.; Kwapień, J.; Oświęcimka, P.

    2008-10-01

    The methodology that recently led us to predict with remarkable accuracy the date (July 11, 2008) of the reversal of the oil price uptrend is briefly summarized, and some further aspects of the related oil price dynamics are elaborated. This methodology is based on the concept of discrete scale invariance, whose finance-prediction-oriented variant involves such elements as log-periodic self-similarity and the universal preferred scaling factor λ≈2, and allows for the phenomenon of the "super-bubble". From this perspective, the present (as of August 22, 2008) violent - but still log-periodically decelerating - decrease of the oil prices is associated with the decay of such a "super-bubble" that started developing about one year earlier on top of the longer-term oil price increasing phase (normal bubble), whose ultimate termination is estimated to occur around mid-2010.
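
    A sketch of the log-periodic power-law (LPPL) functional form commonly associated with discrete scale invariance, fitted to synthetic data; the parameterization, the synthetic series, and the single-start curve_fit call are illustrative only, and real LPPL fits are known to be very sensitive to initial guesses.

```python
import numpy as np
from scipy.optimize import curve_fit

def lppl(t, a, b, tc, m, c, omega, phi):
    """Log-periodic power law: A + B*(tc-t)^m * [1 + C*cos(omega*ln(tc-t) + phi)]."""
    dt = np.clip(tc - t, 1e-8, None)    # keep the argument positive during fitting
    return a + b * dt ** m * (1.0 + c * np.cos(omega * np.log(dt) + phi))

# Synthetic "bubble-like" price series with a critical time tc = 200 (arbitrary units).
t = np.linspace(0.0, 180.0, 400)
true_params = (140.0, -2.5, 200.0, 0.5, 0.1, 8.0, 1.0)
price = lppl(t, *true_params) + np.random.normal(0.0, 0.3, t.size)

# Fit with initial guesses close to the truth; in practice tc, m, and omega
# require a careful multi-start search, which is omitted here.
p0 = (130.0, -2.0, 210.0, 0.6, 0.05, 7.0, 0.5)
popt, _ = curve_fit(lppl, t, price, p0=p0, maxfev=20000)
print(f"estimated critical time tc ~ {popt[2]:.1f} (true value: {true_params[2]})")
```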

  12. Current Trends in Modeling Research for Turbulent Aerodynamic Flows

    NASA Technical Reports Server (NTRS)

    Gatski, Thomas B.; Rumsey, Christopher L.; Manceau, Remi

    2007-01-01

    The engineering tools of choice for the computation of practical engineering flows have begun to migrate from those based on the traditional Reynolds-averaged Navier-Stokes approach to methodologies capable, in theory if not in practice, of accurately predicting some instantaneous scales of motion in the flow. The migration has largely been driven both by the success of Reynolds-averaged methods over a wide variety of flows and by the inherent limitations of the method itself. Practitioners, emboldened by their ability to predict a wide variety of statistically steady, equilibrium turbulent flows, have now turned their attention to flow control and non-equilibrium flows, such as separation control. This review gives some current priorities in traditional Reynolds-averaged modeling research as well as some methodologies being applied to a new class of turbulent flow control problems.

  13. Modeling of biosorption of Cu(II) by alkali-modified spent tea leaves using response surface methodology (RSM) and artificial neural network (ANN)

    NASA Astrophysics Data System (ADS)

    Ghosh, Arpita; Das, Papita; Sinha, Keka

    2015-06-01

    In the present work, spent tea leaves were modified with Ca(OH)2 and used as a new, non-conventional and low-cost biosorbent for the removal of Cu(II) from aqueous solution. Response surface methodology (RSM) and an artificial neural network (ANN) were used to develop predictive models for simulation and optimization of the biosorption process. The influence of process parameters (pH, biosorbent dose and reaction time) on the biosorption efficiency was investigated through a two-level, three-factor (2³) full factorial central composite design with the help of Design Expert. The same design was also used to obtain a training set for the ANN. Finally, both modeling methodologies were statistically compared by the root mean square error and absolute average deviation based on the validation data set. Results suggest that RSM has better prediction performance compared to the ANN. The biosorption followed the Langmuir adsorption isotherm and pseudo-second-order kinetics. The optimum removal efficiency of the adsorbent was found to be 96.12%.
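
    The RSM-versus-ANN comparison can be sketched generically as below, with a hypothetical quadratic response surface and a small neural network fitted to synthetic design points (not the paper's data) and compared by validation RMSE.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical design points (pH, biosorbent dose, contact time) and a synthetic
# removal-efficiency response (%); these are not the paper's data.
X = rng.uniform([2.0, 0.5, 10.0], [8.0, 5.0, 120.0], size=(40, 3))
y = (60.0 + 4.0 * X[:, 0] + 3.0 * X[:, 1] + 0.10 * X[:, 2]
     - 0.35 * X[:, 0] ** 2 - 0.02 * X[:, 1] * X[:, 2]
     + rng.normal(0.0, 1.5, 40))
X_train, X_val, y_train, y_val = X[:30], X[30:], y[:30], y[30:]

# RSM surrogate: full quadratic model (linear + interaction + squared terms).
poly = PolynomialFeatures(degree=2)
rsm = LinearRegression().fit(poly.fit_transform(X_train), y_train)
rsm_pred = rsm.predict(poly.transform(X_val))

# Small ANN surrogate trained on the same design points.
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
ann_pred = ann.fit(X_train, y_train).predict(X_val)

for name, pred in (("RSM", rsm_pred), ("ANN", ann_pred)):
    print(f"{name}: validation RMSE = {np.sqrt(mean_squared_error(y_val, pred)):.2f}")
```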

  14. Regression to fuzziness method for estimation of remaining useful life in power plant components

    NASA Astrophysics Data System (ADS)

    Alamaniotis, Miltiadis; Grelle, Austin; Tsoukalas, Lefteri H.

    2014-10-01

    Mitigation of severe accidents in power plants requires the reliable operation of all systems and the on-time replacement of mechanical components. Therefore, the continuous surveillance of power systems is a crucial concern for the overall safety, cost control, and on-time maintenance of a power plant. In this paper a methodology called regression to fuzziness is presented that estimates the remaining useful life (RUL) of power plant components. The RUL is defined as the difference between the time at which a measurement was taken and the estimated failure time of that component. The methodology aims to compensate for a potential lack of historical data by modeling an expert's operational experience and expertise as applied to the system. It initially identifies critical degradation parameters and their associated value ranges. Once completed, the operator's experience is modeled through fuzzy sets which span the entire parameter range. This model is then used synergistically with linear regression and a component's failure point to estimate the RUL. The proposed methodology is tested on estimating the RUL of a turbine (the basic electrical generating component of a power plant) in three different cases. Results demonstrate the benefits of the methodology for components for which operational data are not readily available and emphasize the significance of the selection of fuzzy sets and the effect of knowledge representation on the predicted output. To verify its effectiveness, the methodology was benchmarked against a simple data-based linear regression model used for predictions, which was shown to perform equally well or worse than the presented methodology. Furthermore, the methodology comparison highlighted the improvement in estimation offered by the adoption of appropriate fuzzy sets for parameter representation.

  15. Toward autonomous spacecraft

    NASA Technical Reports Server (NTRS)

    Fogel, L. J.; Calabrese, P. G.; Walsh, M. J.; Owens, A. J.

    1982-01-01

    Ways in which autonomous behavior of spacecraft can be extended to treat situations wherein closed-loop control by a human may not be appropriate or even possible are explored. Predictive models that minimize mean least squared error and arbitrary cost functions are discussed. A methodology for extracting cyclic components for an arbitrary environment with respect to usual and arbitrary criteria is developed. An approach to prediction and control based on evolutionary programming is outlined. A computer program capable of predicting time series is presented. A design of a control system for a robotic device with partially unknown physical properties is presented.

  16. Scenario-based, closed-loop model predictive control with application to emergency vehicle scheduling

    NASA Astrophysics Data System (ADS)

    Goodwin, Graham. C.; Medioli, Adrian. M.

    2013-08-01

    Model predictive control has been a major success story in process control. More recently, the methodology has been used in other contexts, including automotive engine control, power electronics and telecommunications. Most applications focus on set-point tracking and use single-sequence optimisation. Here we consider an alternative class of problems, motivated by the scheduling of emergency vehicles, in which disturbances are the dominant feature. We develop a novel closed-loop model predictive control strategy aimed at this class of problems. We motivate, and illustrate, the ideas via the problem of fluid deployment of ambulance resources.

  17. Analysis of Turbulent Boundary-Layer over Rough Surfaces with Application to Projectile Aerodynamics

    DTIC Science & Technology

    1988-12-01

    (Fragmentary abstract recovered from the report's table of contents and text: prediction methods for turbulent boundary layers over rough surfaces are classified into two main approaches, beginning with correlation methodologies, and the new correlation is applied in component build-up methodologies for drag.)

  18. Methodology for designing accelerated aging tests for predicting life of photovoltaic arrays

    NASA Technical Reports Server (NTRS)

    Gaines, G. B.; Thomas, R. E.; Derringer, G. C.; Kistler, C. W.; Bigg, D. M.; Carmichael, D. C.

    1977-01-01

    A methodology for designing aging tests in which life prediction was paramount was developed. The methodology builds upon experience with the aging behavior of the material classes expected to be utilized as encapsulant elements, viz., glasses and polymers, and upon experience with the design of aging tests. This experience was reviewed, and the results are discussed in detail.

  19. Acoustic methodology review

    NASA Technical Reports Server (NTRS)

    Schlegel, R. G.

    1982-01-01

    It is important for industry and NASA to assess the status of acoustic design technology for predicting and controlling helicopter external noise so that a meaningful research program addressing this problem can be formulated. The prediction methodologies available to the designer and the acoustic engineer are threefold. The first is what has been described as a first-principles analysis. This approach attempts to remove any empiricism from the analysis process and deals with a theoretical, mechanism-based approach to predicting the noise. The second approach attempts to combine first-principles methodology (when available) with empirical data to formulate source predictors which can be combined to predict vehicle levels. The third is an empirical analysis, which attempts to generalize measured trends into a vehicle noise prediction method. This paper briefly addresses each.

  20. A life prediction methodology for encapsulated solar cells

    NASA Technical Reports Server (NTRS)

    Coulbert, C. D.

    1978-01-01

    This paper presents an approach to the development of a life prediction methodology for encapsulated solar cells which are intended to operate for twenty years or more in a terrestrial environment. Such a methodology, or solar cell life prediction model, requires the development of quantitative intermediate relationships between local environmental stress parameters and the basic chemical mechanisms of encapsulant aging leading to solar cell failures. The use of accelerated/abbreviated testing to develop these intermediate relationships and to reveal failure modes is discussed. Current field and demonstration tests of solar cell arrays and present laboratory tests to qualify solar module designs provide very little data applicable to predicting the long-term performance of encapsulated solar cells. An approach to enhancing the value of such field tests to provide data for life prediction is described.

  1. Propellant Readiness Level: A Methodological Approach to Propellant Characterization

    NASA Technical Reports Server (NTRS)

    Bossard, John A.; Rhys, Noah O.

    2010-01-01

    A methodological approach to defining propellant characterization is presented. The method is based on the well-established Technology Readiness Level nomenclature. This approach establishes the Propellant Readiness Level as a metric for ascertaining the readiness of a propellant or a propellant combination by evaluating the following set of propellant characteristics: thermodynamic data, toxicity, applications, combustion data, heat transfer data, material compatibility, analytical prediction modeling, injector/chamber geometry, pressurization, ignition, combustion stability, system storability, qualification testing, and flight capability. The methodology is meant to be applicable to all propellants or propellant combinations; liquid, solid, and gaseous propellants as well as monopropellants and propellant combinations are equally served. The functionality of the proposed approach is tested through the evaluation and comparison of an example set of hydrocarbon fuels.

  2. Impact of personal goals on the internal medicine R4 subspecialty match: a Q methodology study.

    PubMed

    Daniels, Vijay J; Kassam, Narmin

    2013-12-21

    There has been a decline in interest in general internal medicine that has resulted in a discrepancy between internal medicine residents' choice in the R4 subspecialty match and societal need. Few studies have focused on the relative importance of personal goals and their impact on residents' choice. The purpose of this study was to assess whether internal medicine residents can be grouped based on their personal goals and how each group prioritizes these goals compared to the others. A secondary objective was to explore whether we could predict a resident's desired subspecialty choice based on their constellation of personal goals. We used Q methodology to examine how postgraduate year 1-3 internal medicine residents could be grouped based on their rankings of 36 statements (derived from our previous qualitative study). Using each group's defining and distinguishing statements, we predicted their subspecialties of interest. We also collected the residents' first choice in the subspecialty match and used a kappa test to compare our predicted subspecialty group to the residents' self-reported first choice. Fifty-nine internal medicine residents at the University of Alberta participated between 2009 and 2010, with 46 Q sorts suitable for analysis. The residents loaded onto four factors (groups) based on how they ranked statements. Our predictions of each group's desired subspecialties, based on their defining and/or distinguishing statements, are as follows: group 1 - general internal medicine (variety in practice); group 2 - gastroenterology, nephrology, and respirology (higher income); group 3 - cardiology and critical care (procedural, willing to entertain longer training); group 4 - the remaining subspecialties (non-procedural, focused practice, and valuing more time for personal life). There was moderate agreement (kappa = 0.57) between our predicted desired subspecialty group and residents' self-reported first choice (p < 0.001). This study suggests that most residents fall into four groups based on a constellation of personal goals when choosing an internal medicine subspecialty. The key goals that define and/or distinguish between these groups are breadth of practice, lifestyle, desire to do procedures, length of training, and future income potential. Using these groups, we were able to predict residents' first subspecialty group with moderate success.

  3. Applying psychological theories to evidence-based clinical practice: identifying factors predictive of placing preventive fissure sealants.

    PubMed

    Bonetti, Debbie; Johnston, Marie; Clarkson, Jan E; Grimshaw, Jeremy; Pitts, Nigel B; Eccles, Martin; Steen, Nick; Thomas, Ruth; Maclennan, Graeme; Glidewell, Liz; Walker, Anne

    2010-04-08

    Psychological models are used to understand and predict behaviour in a wide range of settings, but have not been consistently applied to health professional behaviours, and the contribution of differing theories is not clear. This study explored the usefulness of a range of models to predict an evidence-based behaviour -- the placing of fissure sealants. Measures were collected by postal questionnaire from a random sample of general dental practitioners (GDPs) in Scotland. Outcomes were behavioural simulation (scenario decision-making), and behavioural intention. Predictor variables were from the Theory of Planned Behaviour (TPB), Social Cognitive Theory (SCT), Common Sense Self-regulation Model (CS-SRM), Operant Learning Theory (OLT), Implementation Intention (II), Stage Model, and knowledge (a non-theoretical construct). Multiple regression analysis was used to examine the predictive value of each theoretical model individually. Significant constructs from all theories were then entered into a 'cross theory' stepwise regression analysis to investigate their combined predictive value. Behavioural simulation - theory level variance explained was: TPB 31%; SCT 29%; II 7%; OLT 30%. Neither CS-SRM nor stage explained significant variance. In the cross theory analysis, habit (OLT), timeline acute (CS-SRM), and outcome expectancy (SCT) entered the equation, together explaining 38% of the variance. Behavioural intention - theory level variance explained was: TPB 30%; SCT 24%; OLT 58%, CS-SRM 27%. GDPs in the action stage had significantly higher intention to place fissure sealants. In the cross theory analysis, habit (OLT) and attitude (TPB) entered the equation, together explaining 68% of the variance in intention. The study provides evidence that psychological models can be useful in understanding and predicting clinical behaviour. Taking a theory-based approach enables the creation of a replicable methodology for identifying factors that may predict clinical behaviour and so provide possible targets for knowledge translation interventions. Results suggest that more evidence-based behaviour may be achieved by influencing beliefs about the positive outcomes of placing fissure sealants and building a habit of placing them as part of patient management. However a number of conceptual and methodological challenges remain.

  4. Reporting and methodological quality of meta-analyses in urological literature

    PubMed Central

    Xu, Jing

    2017-01-01

    Purpose To assess the overall quality of published urological meta-analyses and identify predictive factors for high quality. Materials and Methods We systematically searched PubMed to identify meta-analyses published from January 1st, 2011 to December 31st, 2015 in 10 predetermined major paper-based urology journals. The characteristics of the included meta-analyses were collected, and their reporting and methodological qualities were assessed by the PRISMA checklist (27 items) and AMSTAR tool (11 items), respectively. Descriptive statistics were used for individual items as a measure of overall compliance, and PRISMA and AMSTAR scores were calculated as the sum of adequately reported domains. Logistic regression was used to identify predictive factors for high quality. Results A total of 183 meta-analyses were included. The mean PRISMA and AMSTAR scores were 22.74 ± 2.04 and 7.57 ± 1.41, respectively. PRISMA item 5 (protocol and registration), items 15 and 22 (risk of bias across studies), and items 16 and 23 (additional analysis) had less than 50% adherence. AMSTAR item 1 ("a priori" design), item 5 (list of studies) and item 10 (publication bias) had less than 50% adherence. Logistic regression analyses showed that funding support and an "a priori" design were associated with superior reporting quality, while following the PRISMA guideline and having an "a priori" design were associated with superior methodological quality. Conclusions Reporting and methodological qualities of recently published meta-analyses in major paper-based urology journals are generally good. Further improvement could potentially be achieved by strictly adhering to the PRISMA guideline and having an "a priori" protocol. PMID:28439452

  5. A Methodology and Software Environment for Testing Process Model’s Sequential Predictions with Protocols

    DTIC Science & Technology

    1992-12-21

    (Fragmentary abstract recovered from the report's front matter: the report covers trace-based protocol analysis (TBPA), including a summary of important data features, tools related to building and testing process models, requirements for testing process models using trace-based protocol analysis, and a definition of TBPA.)

  6. Progressive failure methodologies for predicting residual strength and life of laminated composites

    NASA Technical Reports Server (NTRS)

    Harris, Charles E.; Allen, David H.; Obrien, T. Kevin

    1991-01-01

    Two progressive failure methodologies currently under development by the Mechanics of Materials Branch at NASA Langley Research Center are discussed. The damage tolerance/fail safety methodology developed by O'Brien is an engineering approach to ensuring adequate durability and damage tolerance by treating only delamination onset and the subsequent delamination accumulation through the laminate thickness. The continuum damage model developed by Allen and Harris employs continuum damage laws to predict laminate strength and life. The philosophy, mechanics framework, and current implementation status of each methodology are presented.

  7. Predicting losing and gaining river reaches in lowland New Zealand based on a statistical methodology

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Zammit, Christian; Dudley, Bruce

    2017-04-01

    The phenomenon of losing and gaining river reaches normally takes place in lowlands, where there are often various, sometimes conflicting, uses for water resources, e.g., agriculture, industry, recreation, and maintenance of ecosystem function. To better support water allocation decisions, it is crucial to understand the location and seasonal dynamics of these losses and gains. We present a statistical methodology to predict losing and gaining river reaches in New Zealand based on 1) information surveys with surface water and groundwater experts from regional government, 2) a collection of river/watershed characteristics, including climate, soil and hydrogeologic information, and 3) the random forests technique. The surveys on losing and gaining reaches were conducted face-to-face at 16 New Zealand regional government authorities, and climate, soil, river geometry, and hydrogeologic data from various sources were collected and compiled to represent river/watershed characteristics. The random forests technique was used to build the statistical relationship between river reach status (gain or loss) and river/watershed characteristics, and then to make predictions for river reaches at Strahler order one without prior losing and gaining information. Results show that the model has a classification error of around 10% for "gain" and "loss". The results will assist further research and water allocation decisions in lowland New Zealand.
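
    A minimal sketch of the random-forests step on synthetic stand-in data; the predictor names, the toy gain/loss labelling rule, and the label noise are invented for illustration and do not reflect the New Zealand surveys.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical reach-level predictors (rainfall, depth to water table, a
# hydrogeologic conductivity index, slope) and synthetic gain/loss labels;
# stand-ins for the expert-survey data, not the actual dataset.
n = 300
X = np.column_stack([
    rng.uniform(400.0, 2500.0, n),   # mean annual rainfall, mm
    rng.uniform(0.5, 30.0, n),       # depth to water table, m
    rng.uniform(0.1, 10.0, n),       # conductivity index
    rng.uniform(0.1, 15.0, n),       # catchment slope, %
])
y = ((X[:, 0] > 1200.0) & (X[:, 1] < 12.0)).astype(int)   # toy "gaining" rule
y = np.where(rng.random(n) < 0.1, 1 - y, y)               # ~10% label noise

model = RandomForestClassifier(n_estimators=300, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(model, X, y, cv=5).mean():.2f}")

# Once fitted, the model can label Strahler order one reaches without surveys.
model.fit(X, y)
print(model.predict([[1800.0, 4.0, 2.5, 3.0]]))   # -> likely gaining (1)
```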

  8. Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data

    NASA Astrophysics Data System (ADS)

    Sampson, Paul D.; Szpiro, Adam A.; Sheppard, Lianne; Lindström, Johan; Kaufman, Joel D.

    2011-11-01

    Statistical analyses of health effects of air pollution have increasingly used GIS-based covariates for prediction of ambient air quality in "land use" regression models. More recently these spatial regression models have accounted for spatial correlation structure in combining monitoring data with land use covariates. We present a flexible spatio-temporal modeling framework and pragmatic, multi-step estimation procedure that accommodates essentially arbitrary patterns of missing data with respect to an ideally complete space-by-time matrix of observations on a network of monitoring sites. The methodology incorporates a model for smooth temporal trends with coefficients varying in space according to Partial Least Squares regressions on a large set of geographic covariates and nonstationary modeling of spatio-temporal residuals from these regressions. This work was developed to provide spatial point predictions of PM2.5 concentrations for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) using irregular monitoring data derived from the AQS regulatory monitoring network and supplemental short-time scale monitoring campaigns conducted to better predict intra-urban variation in air quality. We demonstrate the interpretation and accuracy of this methodology in modeling data from 2000 through 2006 in six U.S. metropolitan areas and establish a basis for likelihood-based estimation.
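
    A simplified sketch of the two-stage idea, assuming scikit-learn: a seasonal basis is fitted to each site's series to obtain smooth-trend coefficients, and those coefficients are then regressed on geographic covariates with Partial Least Squares so the trend can be predicted at unmonitored locations; the data, basis, and dimensions are synthetic placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)

# Step 1: smooth temporal trend per monitoring site, here a seasonal basis
# fitted by least squares to two years of weekly observations (synthetic data).
weeks = np.arange(104)
basis = np.column_stack([np.ones(weeks.size),
                         np.sin(2 * np.pi * weeks / 52.0),
                         np.cos(2 * np.pi * weeks / 52.0)])

n_sites = 25
geo = rng.normal(size=(n_sites, 12))               # synthetic GIS covariates per site
true_coefs = geo[:, :3] @ np.array([[2.0, 0.5, -0.3],
                                    [0.4, 1.0, 0.2],
                                    [-0.6, 0.1, 0.8]]) + np.array([12.0, 2.0, 1.0])

site_coefs = np.empty((n_sites, 3))
for s in range(n_sites):
    obs = basis @ true_coefs[s] + rng.normal(0.0, 0.8, weeks.size)  # PM2.5-like series
    site_coefs[s] = np.linalg.lstsq(basis, obs, rcond=None)[0]      # fitted trend coefficients

# Step 2: PLS regression of the fitted trend coefficients on the geographic covariates.
pls = PLSRegression(n_components=3).fit(geo, site_coefs)

# Predict the temporal trend at an unmonitored location from its GIS covariates alone.
new_geo = rng.normal(size=(1, 12))
predicted_trend = basis @ pls.predict(new_geo).ravel()
print(np.round(predicted_trend[:5], 2))
```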

  9. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 1: Methodology and applications

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  10. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  11. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples, volume 1

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.

  12. Development of Probabilistic Life Prediction Methodologies and Testing Strategies for MEMS and CMC's

    NASA Technical Reports Server (NTRS)

    Jadaan, Osama

    2003-01-01

    This effort is to investigate probabilistic life prediction methodologies for ceramic matrix composites (CMCs) and MicroElectroMechanical Systems (MEMS) and to analyze designs that determine stochastic properties of MEMS. For CMCs, this includes a brief literature survey regarding lifing methodologies. Also of interest for MEMS is the design of a proper test for the Weibull size effect in thin film (bulge test) specimens. The Weibull size effect is a consequence of a stochastic strength response predicted from the Weibull distribution. Confirming that MEMS strength is controlled by the Weibull distribution will enable the development of a probabilistic design methodology for MEMS, similar to the GRC-developed CARES/Life program for bulk ceramics. A main objective of this effort is to further develop and verify the ability of the Ceramics Analysis and Reliability Evaluation of Structures/Life (CARES/Life) code to predict the time-dependent reliability of MEMS structures subjected to multiple transient loads. A second set of objectives is to determine the applicability/suitability of the CARES/Life methodology for CMC analysis, what changes would be needed to the methodology and software, and, if feasible, to run a demonstration problem. Also important is an evaluation of CARES/Life coupled to the ANSYS Probabilistic Design System (PDS) and the potential of coupling transient reliability analysis to the ANSYS PDS.
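
    The Weibull size effect referred to above follows from the basic weakest-link scaling shown below; this is the textbook form, not the CARES/Life implementation, and the modulus, strengths, and volumes are illustrative.

```python
import numpy as np

# Weakest-link Weibull scaling: the probability of failure of a uniformly
# stressed volume V at stress sigma is Pf = 1 - exp(-(V/V0) * (sigma/sigma0)^m).
def failure_probability(sigma, volume, sigma0, v0, m):
    return 1.0 - np.exp(-(volume / v0) * (sigma / sigma0) ** m)

# Equal-reliability size effect: sigma2 / sigma1 = (V1 / V2)^(1/m).
def scaled_strength(sigma1, v1, v2, m):
    return sigma1 * (v1 / v2) ** (1.0 / m)

# Illustrative numbers: characteristic strength measured on a bulk specimen,
# extrapolated down to a much smaller thin-film (bulge-test) stressed volume.
m = 10.0                               # assumed Weibull modulus
sigma_bulk, v_bulk = 400.0, 1.0e-6     # MPa, m^3
v_film = 1.0e-12                       # effective stressed volume of the film, m^3

print(f"expected film strength ~ {scaled_strength(sigma_bulk, v_bulk, v_film, m):.0f} MPa")
print(f"Pf of film at 400 MPa  ~ "
      f"{failure_probability(400.0, v_film, sigma_bulk, v_bulk, m):.2e}")
```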

  13. Probabilistic Risk Analysis of Run-up and Inundation in Hawaii due to Distant Tsunamis

    NASA Astrophysics Data System (ADS)

    Gica, E.; Teng, M. H.; Liu, P. L.

    2004-12-01

    Risk assessment of natural hazards usually includes two aspects, namely, the probability of the natural hazard occurrence and the degree of damage caused by the natural hazard. Our current study is focused on the first aspect, i.e., the development and evaluation of a methodology that can predict the probability of coastal inundation due to distant tsunamis in the Pacific Basin. The calculation of the probability of tsunami inundation would be a simple statistical problem if a sufficiently long record of field data on inundation were available. Unfortunately, such field data are very limited in the Pacific Basin because field measurement of inundation requires the physical presence of surveyors on site; in some areas, no field measurements were ever conducted in the past. Fortunately, there are more complete and reliable historical data on earthquakes in the Pacific Basin, partly because earthquakes can be measured remotely. There are also numerical simulation models, such as the Cornell COMCOT model, that can predict tsunami generation by an earthquake, propagation in the open ocean, and inundation onto a coastal land. Our objective is to develop a methodology that can link the probability of earthquakes in the Pacific Basin with the inundation probability in a coastal area. The probabilistic methodology applied here involves the following steps: first, the Pacific Rim is divided into blocks of potential earthquake sources based on the past earthquake record and fault information. Then the COMCOT model is used to predict the inundation at a distant coastal area due to a tsunami generated by an earthquake of a particular magnitude in each source block. This simulation generates a response relationship between the coastal inundation and an earthquake of a particular magnitude and location. Since the earthquake statistics are known for each block, by summing the probability of all earthquakes in the Pacific Rim, the probability of inundation in a coastal area can be determined through the response relationship. Although the idea of the statistical methodology applied here is not new, this study is the first to apply it to the probability of inundation caused by earthquake-generated distant tsunamis in the Pacific Basin. As a case study, the methodology is applied to predict the tsunami inundation risk in Hilo Bay in Hawaii. Since relatively more field data on tsunami inundation are available for Hilo Bay, this case study can help to evaluate the applicability of the methodology for predicting tsunami inundation risk in the Pacific Basin. Detailed results will be presented at the AGU meeting.
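
    The aggregation step can be sketched as a total-probability sum over source blocks and magnitude bins, assuming Poisson occurrence; the source blocks, rates, and modelled run-ups below are invented stand-ins for the earthquake catalogue and COMCOT results.

```python
import numpy as np

# Hypothetical source blocks around the Pacific Rim.  For each block we assume an
# annual rate of earthquakes in each magnitude bin and a precomputed "response
# relationship": the run-up (m) produced at the study site by an event of that
# magnitude in that block (stand-ins for COMCOT simulation results).
magnitudes = np.array([8.0, 8.5, 9.0])

annual_rate = {                      # events per year in each magnitude bin
    "block_A": np.array([0.020, 0.006, 0.001]),
    "block_B": np.array([0.015, 0.004, 0.0005]),
}
runup_m = {                          # modelled run-up at the coastal site, per bin
    "block_A": np.array([0.8, 2.1, 4.5]),
    "block_B": np.array([0.3, 1.2, 2.8]),
}

threshold = 2.0                       # inundation level of interest, m

# Total annual rate of exceedance: sum the rates of all source/magnitude bins
# whose modelled run-up exceeds the threshold, then convert to an annual
# probability assuming Poisson occurrence.
rate = sum(annual_rate[b][runup_m[b] > threshold].sum() for b in annual_rate)
annual_prob = 1.0 - np.exp(-rate)
print(f"annual probability of run-up > {threshold} m ~ {annual_prob:.3%}")
```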

  14. Short-Term fo F2 Forecast: Present Day State of Art

    NASA Astrophysics Data System (ADS)

    Mikhailov, A. V.; Depuev, V. H.; Depueva, A. H.

    An analysis of the F2-layer short-term forecast problem has been carried out. Both objective and methodological problems currently prevent reliable F2-layer forecasts from being issued. An empirical approach based on statistical methods may be recommended for practical use. A forecast method based on a new aeronomic index (a proxy), AI, has been proposed and tested on 64 selected severe storm events. The method provides acceptable prediction accuracy for both strongly disturbed and quiet conditions. The problems with predicting F2-layer quiet-time disturbances, as well as some other unsolved problems, are discussed.

  15. Design Optimization of Liquid Nitrogen Based IQF Tunnel

    NASA Astrophysics Data System (ADS)

    Datye, A. B.; Narayankhedkar, K. G.; Sharma, G. K.

    2006-04-01

    A design methodology for an Individual Quick Freezing (IQF) tunnel using liquid nitrogen is developed, and the design based on this methodology is validated using data from commercial tunnels. The design accounts for heat gains due to the conveyor belt, which is exposed to the atmosphere at the infeed and outfeed ends, as well as heat gains through the insulation and from circulating fans located within the tunnel. For minimum liquid nitrogen consumption, the ratio of the belt length L (from infeed to outfeed) to the belt width W can be treated as a design parameter. The comparison of predicted and reported (experimental) liquid nitrogen consumption shows good agreement, within 10%.

  16. Data Sufficiency Assessment and Pumping Test Design for Groundwater Prediction Using Decision Theory and Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    McPhee, J.; William, Y. W.

    2005-12-01

    This work presents a methodology for pumping test design based on the reliability requirements of a groundwater model. Reliability requirements take into consideration the application of the model results in groundwater management, expressed in this case as a multiobjective management model. The pumping test design is formulated as a mixed-integer nonlinear programming (MINLP) problem and solved using a combination of a genetic algorithm (GA) and gradient-based optimization. Bayesian decision theory provides a formal framework for assessing the influence of parameter uncertainty on the reliability of the proposed pumping test. The proposed methodology is useful for selecting a robust design that will outperform all other candidate designs under most potential 'true' states of the system.

  17. Computational Fragment-Based Drug Design: Current Trends, Strategies, and Applications.

    PubMed

    Bian, Yuemin; Xie, Xiang-Qun Sean

    2018-04-09

    Fragment-based drug design (FBDD) has been an effective methodology for drug development for decades. Successful applications of this strategy have brought both opportunities and challenges to the field of pharmaceutical science. Recent progress in computational fragment-based drug design provides an additional approach for future research in a time- and labor-efficient manner. Combining multiple in silico methodologies, computational FBDD offers flexibility in fragment library selection, protein model generation, and fragment/compound docking-mode prediction. These characteristics give computational FBDD an advantage in designing novel candidate compounds for a given target. The purpose of this review is to discuss the latest advances, ranging from commonly used strategies to novel concepts and technologies in computational fragment-based drug design. In particular, specifications and advantages of experimental and computational FBDD are compared, and limitations and future prospects are discussed and emphasized.

  18. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720, AoI 13, as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  19. Life and Damage Monitoring-Using NDI Data Interpretation for Corrosion Damage and Remaining Life Assessments

    DTIC Science & Technology

    2003-02-01

    Holistic Life Prediction Methodology (HLPM): engineering is a profession based in science, but in the face of limited data or resources, the application of ... the process (see Table 1). HLPM uses continuum mechanics but defines limits of applicability and is material and process specific.

  20. Predictions of Transient Flame Lift-Off Length With Comparison to Single-Cylinder Optical Engine Experiments

    DOE PAGES

    Senecal, P. K.; Pomraning, E.; Anders, J. W.; ...

    2014-05-28

    A state-of-the-art, grid-convergent simulation methodology was applied to three-dimensional calculations of a single-cylinder optical engine. A mesh resolution study on a sector-based version of the engine geometry further verified the RANS-based cell size recommendations previously presented by Senecal et al. (“Grid Convergent Spray Models for Internal Combustion Engine CFD Simulations,” ASME Paper No. ICEF2012-92043). Convergence of cylinder pressure, flame lift-off length, and emissions was achieved for an adaptive mesh refinement cell size of 0.35 mm. Furthermore, full geometry simulations, using mesh settings derived from the grid convergence study, resulted in excellent agreement with measurements of cylinder pressure, heat release rate, and NOx emissions. On the other hand, the full geometry simulations indicated that the flame lift-off length is not converged at 0.35 mm for jets not aligned with the computational mesh. Further simulations suggested that the flame lift-off lengths for both the nonaligned and aligned jets appear to be converged at 0.175 mm. With this increased mesh resolution, both the trends and magnitudes in flame lift-off length were well predicted with the current simulation methodology. Good agreement between the overall predicted flame behavior and the available chemiluminescence measurements was also achieved. Our present study indicates that cell size requirements for accurate prediction of full geometry flame lift-off lengths may be stricter than those for global combustion behavior. This may be important when accurate soot predictions are required.

  1. Predictions of Transient Flame Lift-Off Length With Comparison to Single-Cylinder Optical Engine Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Senecal, P. K.; Pomraning, E.; Anders, J. W.

    A state-of-the-art, grid-convergent simulation methodology was applied to three-dimensional calculations of a single-cylinder optical engine. A mesh resolution study on a sector-based version of the engine geometry further verified the RANS-based cell size recommendations previously presented by Senecal et al. (“Grid Convergent Spray Models for Internal Combustion Engine CFD Simulations,” ASME Paper No. ICEF2012-92043). Convergence of cylinder pressure, flame lift-off length, and emissions was achieved for an adaptive mesh refinement cell size of 0.35 mm. Furthermore, full geometry simulations, using mesh settings derived from the grid convergence study, resulted in excellent agreement with measurements of cylinder pressure, heat release rate, and NOx emissions. On the other hand, the full geometry simulations indicated that the flame lift-off length is not converged at 0.35 mm for jets not aligned with the computational mesh. Further simulations suggested that the flame lift-off lengths for both the nonaligned and aligned jets appear to be converged at 0.175 mm. With this increased mesh resolution, both the trends and magnitudes in flame lift-off length were well predicted with the current simulation methodology. Good agreement between the overall predicted flame behavior and the available chemiluminescence measurements was also achieved. Our present study indicates that cell size requirements for accurate prediction of full geometry flame lift-off lengths may be stricter than those for global combustion behavior. This may be important when accurate soot predictions are required.

  2. MiRduplexSVM: A High-Performing MiRNA-Duplex Prediction and Evaluation Methodology

    PubMed Central

    Karathanasis, Nestoras; Tsamardinos, Ioannis; Poirazi, Panayiota

    2015-01-01

    We address the problem of predicting the position of a miRNA duplex on a microRNA hairpin via the development and application of a novel SVM-based methodology. Our method combines a unique problem representation and an unbiased optimization protocol to learn from mirBase19.0 an accurate predictive model, termed MiRduplexSVM. This is the first model that provides precise information about all four ends of the miRNA duplex. We show that (a) our method outperforms four state-of-the-art tools, namely MaturePred, MiRPara, MatureBayes, MiRdup as well as a Simple Geometric Locator when applied on the same training datasets employed for each tool and evaluated on a common blind test set. (b) In all comparisons, MiRduplexSVM shows superior performance, achieving up to a 60% increase in prediction accuracy for mammalian hairpins and can generalize very well on plant hairpins, without any special optimization. (c) The tool has a number of important applications such as the ability to accurately predict the miRNA or the miRNA*, given the opposite strand of a duplex. Its performance on this task is superior to the 2nts overhang rule commonly used in computational studies and similar to that of a comparative genomic approach, without the need for prior knowledge or the complexity of performing multiple alignments. Finally, it is able to evaluate novel, potential miRNAs found either computationally or experimentally. In relation with recent confidence evaluation methods used in miRBase, MiRduplexSVM was successful in identifying high confidence potential miRNAs. PMID:25961860

  3. A response surface methodology based damage identification technique

    NASA Astrophysics Data System (ADS)

    Fang, S. E.; Perera, R.

    2009-06-01

    Response surface methodology (RSM) is a combination of statistical and mathematical techniques to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in many applications such as design optimization, response prediction and model validation. But so far the literature related to its application in structural damage identification (SDI) is scarce. Therefore this study attempts to present a systematic SDI procedure comprising four sequential steps of feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating in which the RS models substitute the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge with the modal frequency being the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD can provide quantitative estimation of the significance levels of updating parameters. Meanwhile, the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.
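
    A minimal sketch of the response-surface step, assuming a synthetic quadratic surrogate in place of the FE model: a second-order polynomial is fitted to parameter-frequency samples and then inverted by a grid search to identify damage parameters.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Fit a second-order polynomial response surface linking two damage parameters
# (e.g. stiffness reductions) to a modal frequency, then use it in place of the
# FE model during updating. The "FE" data below are a synthetic stand-in.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 0.5, size=(60, 2))            # two damage parameters
freq = 12.0 - 4.0 * X[:, 0] - 2.5 * X[:, 1] ** 2   # surrogate frequency (Hz)

rs_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rs_model.fit(X, freq)

measured_freq = 10.9                               # hypothetical test frequency
grid = np.array(np.meshgrid(np.linspace(0, 0.5, 101),
                            np.linspace(0, 0.5, 101))).reshape(2, -1).T
best = grid[np.argmin((rs_model.predict(grid) - measured_freq) ** 2)]
print("identified damage parameters:", best)
```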

  4. Application of machine learning methodology for pet-based definition of lung cancer

    PubMed Central

    Kerhet, A.; Small, C.; Quon, H.; Riauka, T.; Schrader, L.; Greiner, R.; Yee, D.; McEwan, A.; Roa, W.

    2010-01-01

    We applied a learning methodology framework to assist in the threshold-based segmentation of non-small-cell lung cancer (nsclc) tumours in positron-emission tomography–computed tomography (pet–ct) imaging for use in radiotherapy planning. Gated and standard free-breathing studies of two patients were independently analysed (four studies in total). Each study had a pet–ct and a treatment-planning ct image. The reference gross tumour volume (gtv) was identified by two experienced radiation oncologists who also determined reference standardized uptake value (suv) thresholds that most closely approximated the gtv contour on each slice. A set of uptake distribution-related attributes was calculated for each pet slice. A machine learning algorithm was trained on a subset of the pet slices to cope with slice-to-slice variation in the optimal suv threshold: that is, to predict the most appropriate suv threshold from the calculated attributes for each slice. The algorithm’s performance was evaluated using the remainder of the pet slices. A high degree of geometric similarity was achieved between the areas outlined by the predicted and the reference suv thresholds (Jaccard index exceeding 0.82). No significant difference was found between the gated and the free-breathing results in the same patient. In this preliminary work, we demonstrated the potential applicability of a machine learning methodology as an auxiliary tool for radiation treatment planning in nsclc. PMID:20179802
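
    A hedged sketch of the slice-wise idea on synthetic data: a regressor (a random forest here, standing in for the learner actually used) maps uptake-distribution attributes to a per-slice SUV threshold, and the overlap between predicted and reference segmentations is scored with the Jaccard index.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_slices = 40
features = rng.normal(size=(n_slices, 3))          # e.g. max SUV, mean SUV, variance
optimal_threshold = 2.5 + 0.4 * features[:, 0] + rng.normal(0, 0.05, n_slices)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[:30], optimal_threshold[:30])    # train on a subset of slices
predicted = model.predict(features[30:])            # predict the remaining slices

def jaccard(mask_a, mask_b):
    """Jaccard index between two binary masks (areas above threshold)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

# Compare the areas segmented by the predicted vs. reference thresholds
suv_slice = rng.gamma(2.0, 1.5, size=(64, 64))      # synthetic uptake map
print(jaccard(suv_slice > predicted[0], suv_slice > optimal_threshold[30]))
```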

  5. Feasibility Study on the Satellite Rainfall Data for Prediction of Sediment- Related Disaster by the Japanese Prediction Methodology

    NASA Astrophysics Data System (ADS)

    Shimizu, Y.; Ishizuka, T.; Osanai, N.; Okazumi, T.

    2014-12-01

    In this study, the sediment-related disaster prediction method currently practiced in Japan, which is based on ground-gauged rainfall data, was coupled with satellite rainfall data and applied to domestic large-scale sediment-related disasters. The study confirmed the feasibility of this integrated method. In Asia, large-scale sediment-related disasters that can sweep away an entire settlement occur frequently. Leyte Island suffered a huge landslide in 2004, and Typhoon Morakot in 2009 caused huge landslides in Taiwan. In the event of such sediment-related disasters, immediate responses by central and local governments are crucial for crisis management. In general, there are not enough rainfall gauge stations in developing countries; therefore, national and local governments have little information to determine the risk level of water-induced disasters in their service areas. In the Japanese methodology, a criterion is set by combining two indices: a short-term rainfall index and a long-term rainfall index. The short-term rainfall index is defined as the 60-minute total rainfall; the long-term rainfall index is the soil-water index, an estimate of how much fallen rainfall is retained in the soil. In July 2009, a high-density sediment-related disaster, a debris flow, occurred in Hofu City of Yamaguchi Prefecture, in the western region of Japan. This event was recalculated with the Japanese standard methodology and then analyzed for feasibility. Hourly satellite-based rainfall underestimates the rainfall measured by ground gauges, but the long-term indices computed from the two data sources correlate well with each other. Therefore, this study confirmed that it is possible to deliver information on the risk level of sediment-related disasters such as shallow landslides and debris flows. The prediction method tested in this study is expected to assist timely emergency responses to rainfall-induced natural disasters in sparsely gauged areas. As the Global Precipitation Measurement (GPM) Plan progresses, the spatial resolution, temporal resolution and accuracy of rainfall data should further improve and become more effective in practical use.

  6. An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs

    DOE PAGES

    Zhu, Zizhong; Wu, Ping; Wu, Shunqing; ...

    2017-05-15

    An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs could always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found, compared to those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.

  7. An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Zizhong; Wu, Ping; Wu, Shunqing

    An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs could always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found, compared to those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.

  8. Anticipatory Monitoring and Control of Complex Systems using a Fuzzy based Fusion of Support Vector Regressors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miltiadis Alamaniotis; Vivek Agarwal

    This paper places itself in the realm of anticipatory systems and envisions monitoring and control methods capable of making predictions of system-critical parameters. Anticipatory systems allow intelligent control of complex systems by predicting their future state. In the current work, an intelligent model aimed at implementing anticipatory monitoring and control in the energy industry is presented and tested. More particularly, a set of support vector regressors (SVRs) is trained using both historical and observed data. The trained SVRs are used to predict the future value of the system based on current operational system parameters. The predicted values are then input to a fuzzy-logic-based module where they are fused to obtain a single value, i.e., the final system output prediction. The methodology is tested on real turbine degradation datasets. The results highlight its superiority over single support vector regressors. In addition, it is shown that appropriate selection of fuzzy sets and fuzzy rules plays an important role in improving system performance.
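
    The two-stage structure (a bank of SVRs followed by a fusion step) can be sketched as below on a synthetic degradation signal; the fuzzy-logic fusion of the paper is replaced by simple inverse-validation-error weighting, purely as an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.arange(300, dtype=float)
signal = 1.0 - 0.002 * t + 0.01 * rng.normal(size=t.size)   # synthetic degradation

X = signal[:-1].reshape(-1, 1)      # current value ...
y = signal[1:]                      # ... predicts the next one

# Train several SVRs with different settings on historical data
svrs = [SVR(kernel="rbf", C=c) for c in (1.0, 10.0, 100.0)]
for m in svrs:
    m.fit(X[:250], y[:250])

# Weight each regressor by its inverse validation error (stand-in for fuzzy fusion)
val_errors = [np.mean(np.abs(m.predict(X[250:280]) - y[250:280])) for m in svrs]
weights = np.array([1.0 / e for e in val_errors])
weights /= weights.sum()

x_now = X[-1:]
fused = sum(w * m.predict(x_now)[0] for w, m in zip(weights, svrs))
print("fused one-step-ahead prediction:", fused)
```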

  9. Pressure-based high-order TVD methodology for dynamic stall control

    NASA Astrophysics Data System (ADS)

    Yang, H. Q.; Przekwas, A. J.

    1992-01-01

    The quantitative prediction of the dynamics of separating unsteady flows, such as dynamic stall, is of crucial importance. This six-month SBIR Phase 1 study has developed several new pressure-based methodologies for solving 3D Navier-Stokes equations in both stationary and moving (body-conforming) coordinates. The present pressure-based algorithm is equally efficient for low-speed incompressible flows and high-speed compressible flows. The discretization of convective terms by the presently developed high-order TVD schemes requires no artificial dissipation and can properly resolve the concentrated vortices in the wing-body with minimum numerical diffusion. It is demonstrated that the proposed Newton's iteration technique not only increases the convergence rate but also strongly couples the iteration between pressure and velocities. The proposed hyperbolization of the pressure correction equation is shown to increase the solver's efficiency. The above proposed methodologies were implemented in an existing CFD code, REFLEQS. The modified code was used to simulate both static and dynamic stalls on two- and three-dimensional wing-body configurations. Three-dimensional effects and flow physics are discussed.

  10. Estimation of the daily global solar radiation based on the Gaussian process regression methodology in the Saharan climate

    NASA Astrophysics Data System (ADS)

    Guermoui, Mawloud; Gairaa, Kacem; Rabehi, Abdelaziz; Djafer, Djelloul; Benkaciali, Said

    2018-06-01

    Accurate estimation of solar radiation is a major concern in renewable energy applications. Over the past few years, many machine learning paradigms have been proposed to improve estimation performance, mostly based on artificial neural networks, fuzzy logic, support vector machines and adaptive neuro-fuzzy inference systems. The aim of this work is the prediction of the daily global solar radiation received on a horizontal surface through the Gaussian process regression (GPR) methodology. A case study of the Ghardaïa region (Algeria) has been used to validate the methodology. Several combinations have been tested; it was found that the GPR model based on sunshine duration, minimum air temperature and relative humidity gives the best results in terms of mean absolute bias error (MBE), root mean square error (RMSE), relative root mean square error (rRMSE), and correlation coefficient (r). The obtained values of these indicators are 0.67 MJ/m2, 1.15 MJ/m2, 5.2%, and 98.42%, respectively.
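
    A minimal sketch of the GPR estimation and the four reported indicators, run on synthetic stand-ins for sunshine duration, minimum air temperature and relative humidity; none of the numbers come from the Ghardaïa dataset.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = rng.uniform(size=(200, 3))     # stand-ins for the three meteorological inputs
H = 5.0 + 15.0 * X[:, 0] + 2.0 * X[:, 1] - 3.0 * X[:, 2] + rng.normal(0, 0.5, 200)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X[:150], H[:150])
pred = gpr.predict(X[150:])
obs = H[150:]

# The four indicators quoted in the abstract
mbe = np.mean(pred - obs)
rmse = np.sqrt(np.mean((pred - obs) ** 2))
rrmse = 100.0 * rmse / np.mean(obs)
r = np.corrcoef(pred, obs)[0, 1]
print(f"MBE={mbe:.3f}  RMSE={rmse:.3f}  rRMSE={rrmse:.2f}%  r={r:.4f}")
```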

  11. The Psychology of Meta-Ethics: Exploring Objectivism

    ERIC Educational Resources Information Center

    Goodwin, Geoffrey P.; Darley, John M.

    2008-01-01

    How do lay individuals think about the objectivity of their ethical beliefs? Do they regard them as factual and objective, or as more subjective and opinion-based, and what might predict such differences? In three experiments, we set out a methodology for assessing the perceived objectivity of ethical beliefs, and use it to document several novel…

  12. Early Experience and the Development of Cognitive Competence: Some Theoretical and Methodological Issues.

    ERIC Educational Resources Information Center

    Ulvund, Stein Erik

    1982-01-01

    Argues that in analyzing effects of early experience on development of cognitive competence, theoretical analyses as well as empirical investigations should be based on a transactional model of development. Shows optimal stimulation hypothesis, particularly the enhancement prediction, seems to represent a transactional approach to the study of…

  13. A Novel Approach to Rotorcraft Damage Tolerance

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Everett, Richard A.; Newman, John A.

    2002-01-01

    Damage-tolerance methodology is positioned to replace safe-life methodologies for designing rotorcraft structures. The argument for implementing a damage-tolerance method comes from the fundamental fact that rotorcraft structures typically fail by fatigue cracking. Therefore, if technology permits prediction of fatigue-crack growth in structures, a damage-tolerance method should deliver the most accurate prediction of component life. Implementing damage tolerance (DT) into high-cycle-fatigue (HCF) components will require a shift from traditional DT methods that rely on detecting an initial flaw with nondestructive inspection (NDI) methods. The rapid accumulation of cycles in an HCF component will result in a design based on a traditional DT method that is either impractical because of frequent inspections or too heavy to operate efficiently. Furthermore, once an HCF component develops a detectable propagating crack, the remaining fatigue life is short, sometimes less than one flight hour, which does not leave sufficient time for inspection. Therefore, designing an HCF component will require basing the life analysis on an initial flaw that is undetectable with current NDI technology.

  14. An alternative approach based on artificial neural networks to study controlled drug release.

    PubMed

    Reis, Marcus A A; Sinisterra, Rubén D; Belchior, Jadson C

    2004-02-01

    An alternative methodology based on artificial neural networks is proposed to be a complementary tool to other conventional methods to study controlled drug release. Two systems are used to test the approach; namely, hydrocortisone in a biodegradable matrix and rhodium (II) butyrate complexes in a bioceramic matrix. Two well-established mathematical models are used to simulate different release profiles as a function of fundamental properties; namely, diffusion coefficient (D), saturation solubility (C(s)), drug loading (A), and the height of the device (h). The models were tested, and the results show that these fundamental properties can be predicted after learning the experimental or model data for controlled drug release systems. The neural network results obtained after the learning stage can be considered to quantitatively predict ideal experimental conditions. Overall, the proposed methodology was shown to be efficient for ideal experiments, with a relative average error of <1% in both tests. This approach can be useful for the experimental analysis to simulate and design efficient controlled drug-release systems. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association
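
    A hedged sketch of the approach: a small neural network is trained on release values generated by a simple Higuchi-type relation (used here only as a stand-in for the two models in the paper) so that it learns the mapping from D, C(s), A, h and time to the released fraction.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n = 500
D  = rng.uniform(1e-8, 1e-6, n)     # diffusion coefficient (assumed units)
Cs = rng.uniform(0.1, 1.0, n)       # saturation solubility
A  = rng.uniform(2.0, 10.0, n)      # drug loading
h  = rng.uniform(0.05, 0.2, n)      # device height
t  = rng.uniform(1.0, 48.0, n)      # time

# Higuchi-type surrogate used only to generate training data for this sketch:
# fraction released ~ sqrt(D * Cs * (2A - Cs) * t) / (A * h)
Q = np.sqrt(D * Cs * (2.0 * A - Cs) * t) / (A * h)

X = np.column_stack([D, Cs, A, h, t])
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=5000, random_state=0))
net.fit(X[:400], Q[:400])

rel_err = np.abs(net.predict(X[400:]) - Q[400:]) / Q[400:]
print("mean relative test error:", round(rel_err.mean(), 4))
```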

  15. Mechanistic modelling of the inhibitory effect of pH on microbial growth.

    PubMed

    Akkermans, Simen; Van Impe, Jan F

    2018-06-01

    Modelling and simulation of microbial dynamics as a function of processing, transportation and storage conditions is a useful tool to improve microbial food safety and quality. The goal of this research is to improve an existing methodology for building mechanistic predictive models based on the environmental conditions. The effect of environmental conditions on microbial dynamics is often described by combining the separate effects in a multiplicative way (gamma concept). This idea was extended further in this work by including the effects of the lag and stationary growth phases on microbial growth rate as independent gamma factors. A mechanistic description of the stationary phase as a function of pH was included, based on a novel class of models that consider product inhibition. Experimental results on Escherichia coli growth dynamics indicated that also the parameters of the product inhibition equations can be modelled with the gamma approach. This work has extended a modelling methodology, resulting in predictive models that are (i) mechanistically inspired, (ii) easily identifiable with a limited work load and (iii) easily extended to additional environmental conditions. Copyright © 2017. Published by Elsevier Ltd.
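
    The multiplicative gamma structure can be sketched as follows; the cardinal pH factor below is a standard textbook form with illustrative cardinal values, not the product-inhibition model identified in the paper.

```python
import numpy as np

def gamma_pH(pH, pH_min=4.0, pH_opt=7.0, pH_max=9.5):
    """Cardinal pH gamma factor (0 outside [pH_min, pH_max], 1 at pH_opt)."""
    if pH <= pH_min or pH >= pH_max:
        return 0.0
    return ((pH - pH_min) * (pH - pH_max)) / (
        (pH - pH_min) * (pH - pH_max) - (pH - pH_opt) ** 2)

def growth_rate(mu_opt, pH, gamma_other=1.0):
    """Multiplicative (gamma-concept) combination of inhibition factors;
    temperature, lag-phase and stationary-phase factors would multiply in
    the same way."""
    return mu_opt * gamma_pH(pH) * gamma_other

for pH in (5.0, 6.0, 7.0, 8.0):
    print(pH, round(growth_rate(mu_opt=1.2, pH=pH), 3))
```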

  16. Regional price targets appropriate for advanced coal extraction

    NASA Technical Reports Server (NTRS)

    Terasawa, K. L.; Whipple, D. M.

    1980-01-01

    A methodology is presented for predicting coal prices in regional markets for the target time frames 1985 and 2000 that could subsequently be used to guide the development of an advanced coal extraction system. The model constructed is a supply and demand model that focuses on underground mining since the advanced technology is expected to be developed for these reserves by the target years. Coal reserve data and the cost of operating a mine are used to obtain the minimum acceptable selling price that would induce the producer to bring the mine into production. Based on this information, market supply curves can be generated. Demand by region is calculated based on an EEA methodology that emphasizes demand by electric utilities and demand by industry. The demand and supply curves are then used to obtain the price targets. The results show a growth in the size of the markets for compliance and low sulphur coal regions. A significant rise in the real price of coal is not expected even by the year 2000. The model predicts heavy reliance on mines with thick seams, larger block size and deep overburden.

  17. Metabolic Toxicity Screening Using Electrochemiluminescence Arrays Coupled with Enzyme-DNA Biocolloid Reactors and Liquid Chromatography-Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Hvastkovs, Eli G.; Schenkman, John B.; Rusling, James F.

    2012-07-01

    New chemicals or drugs must be guaranteed safe before they can be marketed. Despite widespread use of bioassay panels for toxicity prediction, products that are toxic to a subset of the population often are not identified until clinical trials. This article reviews new array methodologies based on enzyme/DNA films that form and identify DNA-reactive metabolites that are indicators of potentially genotoxic species. This molecularly based methodology is designed in a rapid screening array that utilizes electrochemiluminescence (ECL) to detect metabolite-DNA reactions, as well as biocolloid reactors that provide the DNA adducts and metabolites for liquid chromatography-mass spectrometry (LC-MS) analysis. ECL arrays provide rapid toxicity screening, and the biocolloid reactor LC-MS approach provides a valuable follow-up on structure, identification, and formation rates of DNA adducts for toxicity hits from the ECL array screening. Specific examples using this strategy are discussed. Integration of high-throughput versions of these toxicity-screening methods with existing drug toxicity bioassays should allow for better human toxicity prediction as well as more informed decision making regarding new chemical and drug candidates.

  18. Ares I-X Ascent Base Environments

    NASA Technical Reports Server (NTRS)

    Mobley, B. L.; Bender, R. L.; Canabal, F.; Smith, Sheldon D.

    2011-01-01

    Plume-induced base heating environments were measured during the flight of the NASA Constellation Ares I-X developmental launch vehicle, successfully flown on October 28, 2009. The Ares I-X first stage is a four-segment Space Shuttle-derived booster with a base consisting of a flared aft skirt, deceleration and tumble motors, and a thermal curtain surrounding the first-stage 7.2-area-ratio nozzle. Developmental Flight Instrumentation (DFI) consisted of radiometers, calorimeters, pressure transducers and gas temperature probes installed on the aft skirt and nozzle to measure the base environments. In addition, thermocouples were installed between the layers of the flexible thermal curtain to provide insight into the curtain response to the base environments and to assist in understanding curtain failure during reentry. Plume radiation environment predictions were generated by the Reverse Monte Carlo (RMC) code, and the convective base heating predictions utilized heritage MSFC empirical methods. These predictions were compared to the DFI data and results from the flight videography. Radiation predictions agreed with the flight-measured data early in flight, but gauge failures prevented high-altitude comparisons. The convective environment comparisons demonstrated the need to improve the prediction methodology, particularly for low-altitude, local plume recirculation. The convective comparisons showed relatively good agreement at altitudes greater than 50,000 feet.

  19. Predictive Inference Using Latent Variables with Covariates*

    PubMed Central

    Schofield, Lynne Steuerle; Junker, Brian; Taylor, Lowell J.; Black, Dan A.

    2014-01-01

    Plausible Values (PVs) are a standard multiple imputation tool for analysis of large education survey data that measures latent proficiency variables. When latent proficiency is the dependent variable, we reconsider the standard institutionally-generated PV methodology and find it applies with greater generality than shown previously. When latent proficiency is an independent variable, we show that the standard institutional PV methodology produces biased inference because the institutional conditioning model places restrictions on the form of the secondary analysts’ model. We offer an alternative approach that avoids these biases based on the mixed effects structural equations (MESE) model of Schofield (2008). PMID:25231627

  20. Analysis of street sweepings, Portland, Oregon

    USGS Publications Warehouse

    Miller, Timothy L.; Rinella, Joseph F.; McKenzie, Stuart W.; Parmenter, Jerry

    1977-01-01

    A brief study involving collection and analysis of street sweepings was undertaken to provide the U.S. Army Corps of Engineers with data on physical, chemical, and biological characteristics of dust and dirt accumulating on Portland streets. Most of the analyses selected were based on the pollutant loads predicted by the Storage, Treatment, Overflow, and Runoff Model (STORM). Five different basins were selected for sampling, and samples were collected three times in each basin. Because the literature reports no methodology for analysis of dust and dirt, the analytical methodology is described in detail. Results of the analyses are summarized in table 1.

  1. Technology CAD for integrated circuit fabrication technology development and technology transfer

    NASA Astrophysics Data System (ADS)

    Saha, Samar

    2003-07-01

    In this paper, systematic simulation-based methodologies for integrated circuit (IC) manufacturing technology development and technology transfer are presented. In technology development, technology computer-aided design (TCAD) tools are used to optimize the device and process parameters to develop a new generation of IC manufacturing technology by reverse engineering from the target product specifications. In technology transfer to a manufacturing co-location, TCAD is used for process centering with respect to the high-volume manufacturing equipment of the target manufacturing facility. A quantitative model is developed to demonstrate the potential benefits of the simulation-based methodology in reducing the cycle time and cost of typical technology development and technology transfer projects relative to traditional practices. The strategy for predictive simulation to improve the effectiveness of a TCAD-based project is also discussed.

  2. Adaptation of Mesoscale Weather Models to Local Forecasting

    NASA Technical Reports Server (NTRS)

    Manobianco, John T.; Taylor, Gregory E.; Case, Jonathan L.; Dianic, Allan V.; Wheeler, Mark W.; Zack, John W.; Nutter, Paul A.

    2003-01-01

    Methodologies have been developed for (1) configuring mesoscale numerical weather-prediction models for execution on high-performance computer workstations to make short-range weather forecasts for the vicinity of the Kennedy Space Center (KSC) and the Cape Canaveral Air Force Station (CCAFS) and (2) evaluating the performances of the models as configured. These methodologies have been implemented as part of a continuing effort to improve weather forecasting in support of operations of the U.S. space program. The models, methodologies, and results of the evaluations also have potential value for commercial users who could benefit from tailoring their operations and/or marketing strategies based on accurate predictions of local weather. More specifically, the purpose of developing the methodologies for configuring the models to run on computers at KSC and CCAFS is to provide accurate forecasts of winds, temperature, and such specific thunderstorm-related phenomena as lightning and precipitation. The purpose of developing the evaluation methodologies is to maximize the utility of the models by providing users with assessments of the capabilities and limitations of the models. The models used in this effort thus far include the Mesoscale Atmospheric Simulation System (MASS), the Regional Atmospheric Modeling System (RAMS), and the National Centers for Environmental Prediction Eta Model ( Eta for short). The configuration of the MASS and RAMS is designed to run the models at very high spatial resolution and incorporate local data to resolve fine-scale weather features. Model preprocessors were modified to incorporate surface, ship, buoy, and rawinsonde data as well as data from local wind towers, wind profilers, and conventional or Doppler radars. The overall evaluation of the MASS, Eta, and RAMS was designed to assess the utility of these mesoscale models for satisfying the weather-forecasting needs of the U.S. space program. The evaluation methodology includes objective and subjective verification methodologies. Objective (e.g., statistical) verification of point forecasts is a stringent measure of model performance, but when used alone, it is not usually sufficient for quantifying the value of the overall contribution of the model to the weather-forecasting process. This is especially true for mesoscale models with enhanced spatial and temporal resolution that may be capable of predicting meteorologically consistent, though not necessarily accurate, fine-scale weather phenomena. Therefore, subjective (phenomenological) evaluation, focusing on selected case studies and specific weather features, such as sea breezes and precipitation, has been performed to help quantify the added value that cannot be inferred solely from objective evaluation.

  3. Comprehensible knowledge model creation for cancer treatment decision making.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali Khan, Wajahat; Ali, Taqdir; Lee, Sungyoung; Huh, Eui-Nam; Farooq Ahmad, Hafiz; Jamshed, Arif; Iqbal, Hassan; Irfan, Muhammad; Abbas Hydari, Manzar

    2017-03-01

    A wealth of clinical data exists in clinical documents in the form of electronic health records (EHRs). This data can be used for developing knowledge-based recommendation systems that can assist clinicians in clinical decision making and education. One of the big hurdles in developing such systems is the lack of automated mechanisms for knowledge acquisition to enable and educate clinicians in informed decision making. An automated knowledge acquisition methodology with a comprehensible knowledge model for cancer treatment (CKM-CT) is proposed. With the CKM-CT, clinical data are acquired automatically from documents. Quality of data is ensured by correcting errors and transforming various formats into a standard data format. Data preprocessing involves dimensionality reduction and missing value imputation. Predictive algorithm selection is performed on the basis of the ranking score of the weighted sum model. The knowledge builder prepares knowledge for knowledge-based services: clinical decisions and education support. Data is acquired from 13,788 head and neck cancer (HNC) documents for 3447 patients, including 1526 patients of the oral cavity site. In the data quality task, 160 staging values are corrected. In the preprocessing task, 20 attributes and 106 records are eliminated from the dataset. The Classification and Regression Trees (CRT) algorithm is selected and provides 69.0% classification accuracy in predicting HNC treatment plans, consisting of 11 decision paths that yield 11 decision rules. Our proposed methodology, CKM-CT, is helpful to find hidden knowledge in clinical documents. In CKM-CT, the prediction models are developed to assist and educate clinicians for informed decision making. The proposed methodology is generalizable to apply to data of other domains such as breast cancer with a similar objective to assist clinicians in decision making and education. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Moisture content prediction in poultry litter using artificial intelligence techniques and Monte Carlo simulation to determine the economic yield from energy use.

    PubMed

    Rico-Contreras, José Octavio; Aguilar-Lasserre, Alberto Alfonso; Méndez-Contreras, Juan Manuel; López-Andrés, Jhony Josué; Cid-Chama, Gabriela

    2017-11-01

    The objective of this study is to determine the economic return of poultry litter combustion in boilers to produce bioenergy (thermal and electrical), as this biomass has a high-energy potential due to its component elements, using fuzzy logic to predict moisture and identify the high-impact variables. This is carried out using a proposed 7-stage methodology, which includes a statistical analysis of agricultural systems and practices to identify activities contributing to moisture in poultry litter (for example, broiler chicken management, number of air extractors, and avian population density), and thereby reduce moisture to increase the yield of the combustion process. Estimates of poultry litter production and heating value are made based on 4 different moisture content percentages (scenarios of 25%, 30%, 35%, and 40%), and then a risk analysis is proposed using the Monte Carlo simulation to select the best investment alternative and to estimate the environmental impact for greenhouse gas mitigation. The results show that dry poultry litter (25%) is slightly better for combustion, generating 3.20% more energy. Reducing moisture from 40% to 25% involves considerable economic investment due to the purchase of equipment to reduce moisture; thus, when calculating financial indicators, the 40% scenario is the most attractive, as it is the current scenario. Thus, this methodology proposes a technology approach based on the use of advanced tools to predict moisture and representation of the system (Monte Carlo simulation), where the variability and uncertainty of the system are accurately represented. Therefore, this methodology is considered generic for any bioenergy generation system and not just for the poultry sector, whether it uses combustion or another type of technology. Copyright © 2017 Elsevier Ltd. All rights reserved.
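
    A hedged sketch of the Monte Carlo risk-analysis stage: uncertain moisture, litter supply, energy price and operating cost are sampled and propagated to a distribution of annual return. All distributions and figures are invented placeholders, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n_trials = 100_000

moisture = rng.triangular(0.25, 0.32, 0.40, n_trials)     # wet-basis moisture fraction
litter_t = rng.normal(20_000, 1_500, n_trials)            # tonnes of litter per year
lhv_dry  = 13.5                                           # MJ/kg dry basis (assumed)

# Crude energy yield: dry fraction of the litter times its dry heating value
energy_GJ = litter_t * 1_000 * lhv_dry * (1.0 - moisture) / 1_000
price_GJ  = rng.normal(6.0, 1.0, n_trials)                # avoided energy cost, $/GJ
opex      = rng.normal(60_000, 10_000, n_trials)          # operating cost, $/year

net_return = energy_GJ * price_GJ - opex
print("P5 / P50 / P95 of annual return ($):",
      np.percentile(net_return, [5, 50, 95]).round(0))
```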

  5. Development of a Patient-Based Model for Estimating Operative Times for Robot-Assisted Radical Prostatectomy.

    PubMed

    Huben, Neil; Hussein, Ahmed; May, Paul; Whittum, Michelle; Kraswowki, Collin; Ahmed, Youssef; Jing, Zhe; Khan, Hijab; Kim, Hyung; Schwaab, Thomas; Underwood Iii, Willie; Kauffman, Eric; Mohler, James L; Guru, Khurshid A

    2018-04-10

    To develop a methodology for predicting operative times for robot-assisted radical prostatectomy (RARP) using preoperative patient, disease, procedural and surgeon variables, in order to facilitate operating room (OR) scheduling. The model included preoperative metrics: BMI, ASA score, clinical stage, National Comprehensive Cancer Network (NCCN) risk, prostate weight, nerve-sparing status, extent and laterality of lymph node dissection, and operating surgeon (6 surgeons were included in the study). A binary decision tree was fit using a conditional inference tree method to predict operative times. The variables most associated with operative time were determined using permutation tests. The data were split at the value of the variable that resulted in the largest difference in means for surgical time across the split. This process was repeated recursively on the resultant data. 1709 RARPs were included. The variable most strongly associated with operative time was the surgeon (surgeons 2 and 4 were 102 minutes shorter than surgeons 1, 3, 5, and 6, p<0.001). Among surgeons 2 and 4, BMI had the strongest association with surgical time (p<0.001). Among patients operated on by surgeons 1, 3, 5 and 6, RARP time was again most strongly associated with the surgeon performing RARP. Surgeons 1, 3, and 6 were on average 76 minutes faster than surgeon 5 (p<0.001). The regression tree output, in the form of box plots, showed the median and range of operative times according to patient, disease, procedural and surgeon metrics. We developed a methodology that can predict operative times for RARP based on patient, disease and surgeon variables. This methodology can be utilized for quality control, to facilitate OR scheduling, and to maximize OR efficiency.
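
    The recursive splitting can be sketched with a CART-style regression tree standing in for the conditional inference tree used in the study; the synthetic data merely mimic the reported pattern that surgeon identity and BMI dominate operative time.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(6)
n = 1709
surgeon = rng.integers(1, 7, n)                      # surgeons 1..6 (ordinal encoding,
bmi = rng.normal(28.0, 4.0, n)                       # a simplification of a categorical)
fast = np.isin(surgeon, [2, 4])
time = np.where(fast, 180.0, 282.0) \
       + 2.0 * (bmi - 28.0) * fast \
       + rng.normal(0.0, 20.0, n)                    # operative time in minutes

X = np.column_stack([surgeon, bmi])
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=50, random_state=0)
tree.fit(X, time)
print(export_text(tree, feature_names=["surgeon", "bmi"]))
```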

  6. Bayesian averaging over Decision Tree models for trauma severity scoring.

    PubMed

    Schetinin, V; Jakaite, L; Krzanowski, W

    2018-01-01

    Health care practitioners analyse possible risks of misleading decisions and need to estimate and quantify uncertainty in predictions. We have examined the "gold" standard of screening a patient's conditions for predicting survival probability, based on logistic regression modelling, which is used in trauma care for clinical purposes and quality audit. This methodology is based on theoretical assumptions about data and uncertainties. Models induced within such an approach have exposed a number of problems, including unexplained fluctuation of predicted survival and low accuracy in estimating the uncertainty intervals within which predictions are made. A Bayesian method, which in theory is capable of providing accurate predictions and uncertainty estimates, has been adopted in our study using Decision Tree models. Our approach has been tested on a large set of patients registered in the US National Trauma Data Bank and has outperformed the standard method in terms of prediction accuracy, thereby providing practitioners with accurate estimates of the predictive posterior densities of interest that are required for making risk-aware decisions. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Analysis of wind tunnel test results for a 9.39-per cent scale model of a VSTOL fighter/attack aircraft. Volume 2: Evaluation of prediction methodologies

    NASA Technical Reports Server (NTRS)

    Lummus, J. R.; Joyce, G. T.; Omalley, C. D.

    1980-01-01

    An evaluation of current prediction methodologies to estimate the aerodynamic uncertainties identified for the E205 configuration is presented. This evaluation was accomplished by comparing predicted and wind tunnel test data in three major categories: untrimmed longitudinal aerodynamics; trimmed longitudinal aerodynamics; and lateral-directional aerodynamic characteristics.

  8. Variable Density Multilayer Insulation for Cryogenic Storage

    NASA Technical Reports Server (NTRS)

    Hedayat, A.; Brown, T. M.; Hastings, L. J.; Martin, J.

    2000-01-01

    Two analytical models for foam/Variable Density Multi-Layer Insulation (VD-MLI) system performance are discussed. Both models are one-dimensional and contain three heat transfer mechanisms, namely conduction through the spacer material, radiation between the shields, and conduction through the gas. One model is based on the methodology developed by McIntosh, while the other is based on the Lockheed semi-empirical approach. All model input variables are based on the Multi-purpose Hydrogen Test Bed (MHTB) geometry and on available values for material properties and the empirical solid conduction coefficient. Heat flux predictions are in good agreement with the MHTB data. Predictions are presented for foam/MLI combinations with 30, 45, 60, and 75 MLI layers.
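
    The three heat transfer mechanisms can be sketched in a simplified one-dimensional form as below; the coefficients and functional forms are generic placeholders, not the McIntosh or Lockheed correlations calibrated to the MHTB.

```python
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4

def mli_heat_flux(T_hot, T_cold, n_layers, emissivity=0.03,
                  c_solid=8.0e-5, c_gas=1.0e-6):
    """Approximate heat flux (W/m^2) through an N-layer blanket as the sum of
    shield-to-shield radiation, spacer conduction and residual gas conduction."""
    # radiation across (n_layers + 1) gaps between low-emissivity shields
    eps_eff = emissivity / (2.0 - emissivity)
    q_rad = SIGMA * eps_eff * (T_hot**4 - T_cold**4) / (n_layers + 1)
    # placeholder form for solid conduction through the spacers
    q_solid = c_solid * (T_hot - T_cold) * (T_hot + T_cold) / 2.0 / (n_layers + 1)
    # placeholder form for free-molecular residual gas conduction
    q_gas = c_gas * (T_hot - T_cold)
    return q_rad + q_solid + q_gas

for n in (30, 45, 60, 75):   # the layer counts considered in the abstract
    print(n, round(mli_heat_flux(T_hot=300.0, T_cold=20.0, n_layers=n), 3))
```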

  9. Prediction of Regulation Reserve Requirements in California ISO Control Area based on BAAL Standard

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Etingov, Pavel V.; Makarov, Yuri V.; Samaan, Nader A.

    This paper presents new methodologies developed at Pacific Northwest National Laboratory (PNNL) to estimate regulation capacity requirements in the California ISO control area. Two approaches have been developed: (1) an approach based on statistical analysis of actual historical area control error (ACE) and regulation data, and (2) an approach based on the balancing authority ACE limit (BAAL) control performance standard. The approaches predict regulation reserve requirements on a day-ahead basis, including upward and downward requirements, for each operating hour of the day. California ISO data have been used to test the performance of the proposed algorithms. Results show that the software tool allows saving up to 30% of the regulation procurement cost.
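
    A minimal sketch of the first (statistical) approach: hour-of-day percentiles of historical regulation deviations give upward and downward day-ahead requirements. The data are synthetic and the 97.5% coverage level is an assumed choice, not a CAISO setting.

```python
import numpy as np

rng = np.random.default_rng(7)
days, hours = 90, 24
# synthetic hourly regulation/ACE deviations (MW) with an hour-of-day pattern
deviation_MW = rng.normal(0, 60, size=(days, hours)) + \
               40 * np.sin(np.linspace(0, 2 * np.pi, hours))

def hourly_requirements(history, coverage=0.975):
    """Upward and downward regulation capacity per operating hour, taken as the
    upper and lower percentiles of the historical deviation for that hour."""
    up = np.percentile(history, 100 * coverage, axis=0)
    down = np.percentile(history, 100 * (1 - coverage), axis=0)
    return up, down

up, down = hourly_requirements(deviation_MW)
print("hour 18 requirement: +{:.0f} MW / {:.0f} MW".format(up[18], down[18]))
```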

  10. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
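
    The automated tuning step can be sketched as a nonlinear least-squares fit of model parameters to test data; the two-parameter "thermal model" below is a toy surrogate with invented numbers, not the JWST model.

```python
import numpy as np
from scipy.optimize import least_squares

def thermal_model(params, drive):
    """Toy surrogate: predicted sensor temperatures as a function of two
    adjustable parameters (a conductance and an emissivity)."""
    conductance, emissivity = params
    return 40.0 + drive / conductance - 5.0 * emissivity

drive = np.array([1.0, 2.0, 4.0, 8.0])                     # heater powers (W)
test_data = thermal_model([0.05, 0.6], drive) \
            + np.random.default_rng(8).normal(0, 0.2, 4)   # "measured" temperatures

# Minimize the model-test residuals over the bounded parameter space
result = least_squares(lambda p: thermal_model(p, drive) - test_data,
                       x0=[0.03, 0.5], bounds=([0.01, 0.0], [0.2, 1.0]))
print("tuned parameters:", result.x)
```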

  11. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  12. Prediction of ground water quality index to assess suitability for drinking purposes using fuzzy rule-based approach

    NASA Astrophysics Data System (ADS)

    Gorai, A. K.; Hasni, S. A.; Iqbal, Jawed

    2016-11-01

    Groundwater is the most important source of drinking water for many people around the world, especially in rural areas where a supply of treated water is not available. Drinking water resources cannot be optimally used and sustained unless the quality of the water is properly assessed. To this end, an attempt has been made to develop a suitable methodology for the assessment of drinking water quality on the basis of 11 physico-chemical parameters. The present study applies a fuzzy aggregation approach to estimate the water quality index of a sample and check its suitability for drinking purposes. Based on expert opinion and the authors' judgement, 11 water quality (pollutant) variables (Alkalinity, Dissolved Solids (DS), Hardness, pH, Ca, Mg, Fe, Fluoride, As, Sulphate, Nitrates) are selected for the quality assessment. The results of the proposed methodology are compared with the output obtained from the widely used deterministic method (weighted arithmetic mean aggregation) to assess the suitability of the developed methodology.
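
    For reference, the deterministic benchmark (weighted arithmetic mean aggregation) can be sketched as below; the permissible limits and sample concentrations are illustrative placeholders rather than the study's data.

```python
import numpy as np

parameters = ["Alkalinity", "DS", "Hardness", "pH", "Ca", "Mg",
              "Fe", "Fluoride", "As", "Sulphate", "Nitrate"]
standard = np.array([200, 500, 300, 8.5, 75, 30, 0.3, 1.0, 0.01, 200, 45])  # limits
ideal    = np.zeros(len(standard)); ideal[3] = 7.0        # pH has ideal value 7
measured = np.array([180, 420, 260, 7.8, 60, 25, 0.2, 0.8, 0.005, 150, 30])

# weights inversely proportional to the permissible limit
weights = 1.0 / standard
weights /= weights.sum()

# sub-index: deviation of the measured value from the ideal relative to the
# allowed range, expressed as a percentage
quality = 100.0 * (measured - ideal) / (standard - ideal)
wqi = np.sum(weights * quality)

print(dict(zip(parameters, np.round(quality, 1))))
print("Weighted arithmetic WQI:", round(wqi, 1))
```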

  13. Breast cancer statistics and prediction methodology: a systematic review and analysis.

    PubMed

    Dubey, Ashutosh Kumar; Gupta, Umesh; Jain, Sonal

    2015-01-01

    Breast cancer is a menacing cancer, primarily affecting women. Continuous research is being carried out on detecting breast cancer at an early stage, as the possibility of cure in early stages is high. This study has two main objectives: first, to establish statistics for breast cancer, and second, to identify methodologies from previous studies that can be helpful in early-stage detection of breast cancer. The breast cancer incidence and mortality statistics of the UK, US, India and Egypt were considered for this study. The findings show that the overall mortality rates of the UK and US have improved because of awareness, improved medical technology and screening, but in India and Egypt the situation is less positive because of a lack of awareness. The methodological findings of this study suggest a combined framework based on data mining and evolutionary algorithms, which provides a strong basis for improving the classification and detection accuracy of breast cancer data.

  14. Statistical optimization of process parameters for lipase-catalyzed synthesis of triethanolamine-based esterquats using response surface methodology in 2-liter bioreactor.

    PubMed

    Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat

    2013-01-01

    Lipase-catalyzed production of triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, namely enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the conditions designed by Design Expert software. Experimental data were examined for normality before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate in predicting the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole proved adequate for the design and optimization of the enzymatic process.
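    The response-surface step can be sketched as fitting a second-order (quadratic) model to coded design points by least squares; the design matrix and conversion values below are synthetic stand-ins for the Design Expert output, not the study's measurements.

```python
# Hedged sketch of the response-surface step: fit a second-order model to
# central-composite-style data in coded units and report its goodness of fit.
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(32, 5))             # coded levels of the 5 process factors
y = 60 + 8*X[:, 0] - 5*X[:, 1]**2 + 3*X[:, 0]*X[:, 2] + rng.normal(0, 1, 32)  # % conversion

def quadratic_design(X):
    """Columns: intercept, linear terms, squared terms and two-factor interactions."""
    cols = [np.ones(len(X))] + [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_design(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # second-order model coefficients
y_hat = A @ beta
r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
print(f"R^2 of fitted quadratic model: {r2:.3f}")
```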

  15. The impact of functional analysis methodology on treatment choice for self-injurious and aggressive behavior.

    PubMed Central

    Pelios, L; Morren, J; Tesch, D; Axelrod, S

    1999-01-01

    Self-injurious behavior (SIB) and aggression have been the concern of researchers because of the serious impact these behaviors have on individuals' lives. Despite the plethora of research on the treatment of SIB and aggressive behavior, the reported findings have been inconsistent regarding the effectiveness of reinforcement-based versus punishment-based procedures. We conducted a literature review to determine whether a trend could be detected in researchers' selection of reinforcement-based procedures versus punishment-based procedures, particularly since the introduction of functional analysis to behavioral assessment. The data are consistent with predictions made in the past regarding the potential impact of functional analysis methodology. Specifically, the findings indicate that, once maintaining variables for problem behavior are identified, experimenters tend to choose reinforcement-based procedures rather than punishment-based procedures as treatment for both SIB and aggressive behavior. Results indicated an increased interest in studies on the treatment of SIB and aggressive behavior, particularly since 1988. PMID:10396771

  16. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promote good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive bias of a simplified model can be detected and corrected, and postcalibration predictive uncertainty can be quantified. The methodology is demonstrated using a synthetic example based on groundwater modeling environments commonly encountered in northern Europe and North America.

  17. Combination and selection of traffic safety expert judgments for the prevention of driving risks.

    PubMed

    Cabello, Enrique; Conde, Cristina; de Diego, Isaac Martín; Moguerza, Javier M; Redchuk, Andrés

    2012-11-02

    In this paper, we describe a new framework for combining experts' judgments for the prevention of driving risks in a truck cabin. In addition, the methodology shows how to choose among the experts the one whose predictions best fit the environmental conditions. The methodology is applied to data sets obtained from a highly immersive truck cabin simulator in natural driving conditions. A nonparametric model based on Nearest Neighbors combined with Restricted Least Squares methods is developed. Three experts were asked to evaluate the driving risk on a Visual Analog Scale (VAS) in a truck simulator where the vehicle dynamics factors were recorded. Numerical results show that the methodology is suitable for embedding in real-time systems.
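    The core idea, local selection of similar driving situations by Nearest Neighbors followed by expert weights fitted under restrictions (here assumed to be non-negative and summing to one), can be sketched as follows; the features, ratings and the exact form of the restriction are illustrative assumptions.

```python
# Hedged sketch of combining expert risk ratings: nearest neighbours select locally
# relevant driving situations, then restricted least squares fits expert weights
# that are non-negative and sum to one. All data are synthetic placeholders.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
features = rng.normal(size=(200, 6))              # vehicle-dynamics descriptors
experts = rng.uniform(0, 10, size=(200, 3))       # VAS ratings from 3 experts
truth = experts @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.3, 200)

def combine(query, k=25):
    d = np.linalg.norm(features - query, axis=1)
    idx = np.argsort(d)[:k]                       # k nearest past situations
    E, t = experts[idx], truth[idx]
    loss = lambda w: np.sum((E @ w - t) ** 2)     # restricted least squares objective
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
    res = minimize(loss, np.full(3, 1 / 3), bounds=[(0, 1)] * 3, constraints=cons)
    return res.x                                  # local expert weights

print("local expert weights:", np.round(combine(rng.normal(size=6)), 2))
```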

  18. Symbolic Processing Combined with Model-Based Reasoning

    NASA Technical Reports Server (NTRS)

    James, Mark

    2009-01-01

    A computer program for the detection of present and prediction of future discrete states of a complex, real-time engineering system utilizes a combination of symbolic processing and numerical model-based reasoning. One of the biggest weaknesses of a purely symbolic approach is that it enables prediction of only future discrete states while missing all unmodeled states or leading to incorrect identification of an unmodeled state as a modeled one. A purely numerical approach is based on a combination of statistical methods and mathematical models of the applicable physics and necessitates development of a complete model to the level of fidelity required for prediction. In addition, a purely numerical approach does not afford the ability to qualify its results without some form of symbolic processing. The present software implements numerical algorithms to detect unmodeled events and symbolic algorithms to predict expected behavior, correlate the expected behavior with the unmodeled events, and interpret the results in order to predict future discrete states. The approach embodied in this software differs from that of the BEAM methodology (aspects of which have been discussed in several prior NASA Tech Briefs articles), which provides for prediction of future measurements in the continuous-data domain.

  19. Extending data worth methods to select multiple observations targeting specific hydrological predictions of interest

    NASA Astrophysics Data System (ADS)

    Vilhelmsen, Troels N.; Ferré, Ty P. A.

    2016-04-01

    Hydrological models are often developed to forecast future behavior in response to natural or human-induced changes in the stresses affecting hydrologic systems. Commonly, these models are conceptualized and calibrated based on existing data/information about the hydrological conditions. However, most hydrologic systems lack sufficient data to constrain models with adequate certainty to support robust decision making. Therefore, a key element of a hydrologic study is the selection of additional data to improve model performance. Given the nature of hydrologic investigations, it is not practical to select data sequentially, i.e. to choose the next observation, collect it, refine the model, and then repeat the process. Rather, for timing and financial reasons, measurement campaigns include multiple wells or sampling points. There is a growing body of literature aimed at defining the expected data worth based on existing models. However, these studies are almost all limited to identifying single additional observations. In this study, we present a methodology for simultaneously selecting multiple potential new observations based on their expected ability to reduce the uncertainty of the forecasts of interest. This methodology is based on linear estimates of the predictive uncertainty, and it can be used to determine the optimal combinations of measurements (location and number) for reducing the uncertainty of multiple predictions. The outcome of the analysis is an estimate of the optimal sampling locations, the optimal number of samples, and a probability map showing the locations within the investigated area that are most likely to provide useful information about the forecasts of interest.
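    A minimal sketch of the linear data-worth calculation follows: for each candidate combination of new observations, a first-order Bayesian update of the parameter covariance gives the expected posterior variance of the forecast of interest. The sensitivities, covariances and error levels are random placeholders, not values from the study.

```python
# Hedged sketch of linear (first-order) data-worth analysis: rank combinations of
# candidate observations by the expected reduction in prediction uncertainty.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n_par, n_obs = 20, 10
C = np.diag(rng.uniform(0.5, 2.0, n_par))       # prior parameter covariance
J = rng.normal(size=(n_obs, n_par))             # sensitivities of candidate observations
y = rng.normal(size=n_par)                      # sensitivity of the forecast to parameters
r = 0.1                                         # observation error variance

prior_var = y @ C @ y

def posterior_pred_var(subset):
    Js = J[list(subset)]
    S = Js @ C @ Js.T + r * np.eye(len(subset))          # data covariance
    C_post = C - C @ Js.T @ np.linalg.solve(S, Js @ C)   # Bayesian linear update
    return y @ C_post @ y

best = min(combinations(range(n_obs), 2), key=posterior_pred_var)
print("prior prediction variance :", round(prior_var, 3))
print("best pair of observations :", best,
      "-> posterior variance", round(posterior_pred_var(best), 3))
```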

  20. Machine Learning–Based Differential Network Analysis: A Study of Stress-Responsive Transcriptomes in Arabidopsis

    PubMed Central

    Ma, Chuang; Xin, Mingming; Feldmann, Kenneth A.; Wang, Xiangfeng

    2014-01-01

    Machine learning (ML) is an intelligent data mining technique that builds a prediction model based on the learning of prior knowledge to recognize patterns in large-scale data sets. We present an ML-based methodology for transcriptome analysis via comparison of gene coexpression networks, implemented as an R package called machine learning–based differential network analysis (mlDNA) and apply this method to reanalyze a set of abiotic stress expression data in Arabidopsis thaliana. The mlDNA first used a ML-based filtering process to remove nonexpressed, constitutively expressed, or non-stress-responsive “noninformative” genes prior to network construction, through learning the patterns of 32 expression characteristics of known stress-related genes. The retained “informative” genes were subsequently analyzed by ML-based network comparison to predict candidate stress-related genes showing expression and network differences between control and stress networks, based on 33 network topological characteristics. Comparative evaluation of the network-centric and gene-centric analytic methods showed that mlDNA substantially outperformed traditional statistical testing–based differential expression analysis at identifying stress-related genes, with markedly improved prediction accuracy. To experimentally validate the mlDNA predictions, we selected 89 candidates out of the 1784 predicted salt stress–related genes with available SALK T-DNA mutagenesis lines for phenotypic screening and identified two previously unreported genes, mutants of which showed salt-sensitive phenotypes. PMID:24520154

  1. Prediction of Nucleotide Binding Peptides Using Star Graph Topological Indices.

    PubMed

    Liu, Yong; Munteanu, Cristian R; Fernández Blanco, Enrique; Tan, Zhiliang; Santos Del Riego, Antonino; Pazos, Alejandro

    2015-11-01

    Nucleotide binding proteins are involved in many important cellular processes, such as the transmission of genetic information or energy transfer and storage. Therefore, the screening of new peptides for this biological function is an important research topic. The current study proposes a mixed methodology to obtain the first classification model able to predict new nucleotide binding peptides using only the amino acid sequence. The methodology uses Star graph molecular descriptors of the peptide sequences and Machine Learning techniques to select the best classifier. The best model is a Random Forest classifier based on two features of the embedded and non-embedded graphs. The performance of the model is excellent, considering similar models in the field, with an Area Under the Receiver Operating Characteristic Curve (AUROC) value of 0.938 and a true positive rate (TPR) of 0.886 (test subset). The prediction of new nucleotide binding peptides with this model could be useful for drug target studies in drug development. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
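    The classification step can be sketched with a Random Forest trained on a couple of graph-derived descriptors and scored by AUROC and true positive rate on a held-out subset; the synthetic features below merely stand in for the Star graph topological indices.

```python
# Hedged sketch of the classification step: Random Forest on two descriptors,
# evaluated by AUROC and TPR on a held-out test set. Features are synthetic
# stand-ins for the Star graph topological indices used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 2))                        # two topological descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.7, 1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("AUROC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
print("TPR  :", round(recall_score(y_te, clf.predict(X_te)), 3))
```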

  2. Economic optimization of operations for hybrid energy systems under variable markets

    DOE PAGES

    Chen, Jen; Garcia, Humberto E.

    2016-05-21

    We propose hybrid energy systems (HES) as an important element for enabling increasing penetration of clean energy. Our paper investigates the operations flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of the operations optimizer and associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
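    The operations-optimization idea can be sketched as a small linear program that schedules a storage element against predicted renewable generation and hourly prices so that market revenue is maximized; the ratings, prices and generation profile are invented for illustration and the formulation is far simpler than the paper's multi-environment platform.

```python
# Hedged sketch of economically optimized operations: a linear program schedules
# storage charge/discharge against a price and generation forecast. Illustrative only.
import numpy as np
from scipy.optimize import linprog

T = 24
price = 30 + 20 * np.sin(np.arange(T) * 2 * np.pi / T)                # $/MWh forecast
gen = np.clip(50 * np.sin((np.arange(T) - 6) * np.pi / 12), 0, None)  # MW renewable forecast
P, E, soc0, eta = 20.0, 60.0, 30.0, 0.9                               # assumed storage ratings

# decision vector x = [charge_0..charge_T-1, discharge_0..discharge_T-1]
c_obj = np.concatenate([price, -price])        # minimize price*(charge - discharge)
L = np.tril(np.ones((T, T)))                   # cumulative-sum operator
A_soc = np.hstack([eta * L, -L])               # state of charge minus soc0
A_ub = np.vstack([A_soc,                                   # SOC_t <= E
                  -A_soc,                                  # SOC_t >= 0
                  np.hstack([np.eye(T), -np.eye(T)])])     # charge_t - discharge_t <= gen_t
b_ub = np.concatenate([np.full(T, E - soc0), np.full(T, soc0), gen])

res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, P)] * (2 * T))
charge, discharge = res.x[:T], res.x[T:]
revenue = float(price @ (gen - charge + discharge))
print("optimized daily revenue   :", round(revenue, 1))
print("constant-operation revenue:", round(float(price @ gen), 1))
```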

  3. Subject specific finite element modeling of periprosthetic femoral fracture using element deactivation to simulate bone failure.

    PubMed

    Miles, Brad; Kolos, Elizabeth; Walter, William L; Appleyard, Richard; Shi, Angela; Li, Qing; Ruys, Andrew J

    2015-06-01

    Subject-specific finite element (FE) modeling methodology could predict peri-prosthetic femoral fracture (PFF) for cementless hip arthroplasty in the early postoperative period. This study develops a methodology for subject-specific finite element modeling that uses the element deactivation technique to simulate bone failure, validated with experimental testing, thereby predicting peri-prosthetic femoral fracture in the early postoperative period. Material assignments for biphasic and triphasic models were undertaken. Failure modeling with the element deactivation feature available in ABAQUS 6.9 was used to simulate crack initiation and propagation in the bony tissue based upon a threshold of fracture strain. The crack mode for the biphasic models was very similar to the experimental crack mode, with a similar shape and path of the crack. The fracture load is sensitive to the friction coefficient at the implant-bone interface. The development of a novel technique to simulate bone failure by element deactivation in subject-specific finite element models could aid prediction of fracture load in addition to fracture risk characterization for PFF. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
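    A toy illustration of the element-deactivation idea (not the subject-specific FE model itself) is sketched below: parallel bone "elements" share a prescribed displacement, and any element whose strain exceeds its failure threshold is deactivated by zeroing its stiffness, producing a load-displacement curve whose peak acts as the predicted fracture load. All stiffnesses and failure strains are invented.

```python
# Hedged toy illustration of element deactivation under a strain threshold.
import numpy as np

rng = np.random.default_rng(9)
n_elem = 200
stiffness = rng.uniform(0.8, 1.2, n_elem)             # kN/mm per element
fail_strain = rng.normal(0.012, 0.002, n_elem)        # assumed element failure strains
length = 10.0                                         # mm
active = np.ones(n_elem, dtype=bool)

loads = []
displacements = np.linspace(0, 0.25, 200)             # prescribed displacement (mm)
for disp in displacements:
    strain = disp / length
    active &= strain < fail_strain                    # deactivate elements past the threshold
    loads.append(np.sum(stiffness[active]) * disp)    # total reaction force (kN)

peak = int(np.argmax(loads))
print(f"predicted fracture load ~ {loads[peak]:.1f} kN at {displacements[peak]:.3f} mm displacement")
```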

  4. Economic optimization of operations for hybrid energy systems under variable markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Jen; Garcia, Humberto E.

    We propose hybrid energy systems (HES) as an important element for enabling increasing penetration of clean energy. Our paper investigates the operations flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. In order to compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results of two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of the operations optimizer and associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.

  5. An improved approach for flight readiness certification: Probabilistic models for flaw propagation and turbine blade failure. Volume 2: Software documentation

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with analytical modeling of failure phenomena to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in analytical modeling, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which analytical models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. State-of-the-art analytical models currently employed for design, failure prediction, or performance analysis are used in this methodology. The rationale for the statistical approach taken in the PFA methodology is discussed, the PFA methodology is described, and examples of its application to structural failure modes are presented. The engineering models and computer software used in fatigue crack growth and fatigue crack initiation applications are thoroughly documented.

  6. Hydrocode predictions of collisional outcomes: Effects of target size

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    Traditionally, laboratory impact experiments, designed to simulate asteroid collisions, attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to using targets considerably smaller than the modelled objects. It is therefore necessary to develop some methodology for extrapolating the extensive experimental results to the size regime of interest. Results are reported that were obtained using a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.

  7. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and, in addition, the mathematical models of the perturbations do not always reproduce the physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in the precision improvement of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in the three cases, namely an additive Holt-Winters method.
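    A minimal sketch of the hybrid idea follows: an assumed analytical propagation supplies the coarse trajectory, and an additive Holt-Winters model is fitted to the residual time series (here a synthetic secular-plus-periodic error) and forecast forward to recover the missing dynamics.

```python
# Hedged sketch of the hybrid propagator: forecast the residual of an (assumed)
# analytical propagation with an additive Holt-Winters model. Residuals are synthetic.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

period = 90                                          # samples per orbital revolution
t = np.arange(12 * period)
residual = 0.3 * t / period + 5 * np.sin(2 * np.pi * t / period)     # secular + periodic error (km)
residual += np.random.default_rng(4).normal(0, 0.2, t.size)

train, test = residual[:-2 * period], residual[-2 * period:]
hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=period,
                          initialization_method="estimated").fit()
forecast = hw.forecast(2 * period)                   # missing dynamics, predicted ahead

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"RMSE of residual forecast over two revolutions: {rmse:.3f} km")
# The hybrid propagator adds this forecast back onto the analytical position prediction.
```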

  8. SMART empirical approaches for predicting field performance of PV modules from results of reliability tests

    NASA Astrophysics Data System (ADS)

    Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata

    2016-09-01

    Understanding and characterizing degradation mechanisms is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, the Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent of developing relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data become available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
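    A typical acceleration-factor form of the kind referred to, an Arrhenius temperature term combined with a Peck-style humidity term for damp heat, is sketched below; the activation energy and humidity exponent are illustrative assumptions, not the values validated in the paper.

```python
# Hedged sketch of a damp-heat acceleration factor: Arrhenius temperature term
# times a Peck humidity term. Ea and n are illustrative assumptions.
import math

def damp_heat_acceleration_factor(T_test_C, RH_test, T_use_C, RH_use,
                                  Ea_eV=0.7, n=2.5):
    k_B = 8.617e-5                                   # Boltzmann constant, eV/K
    T_test, T_use = T_test_C + 273.15, T_use_C + 273.15
    arrhenius = math.exp(Ea_eV / k_B * (1.0 / T_use - 1.0 / T_test))
    humidity = (RH_test / RH_use) ** n               # Peck humidity term
    return arrhenius * humidity

AF = damp_heat_acceleration_factor(T_test_C=85, RH_test=85, T_use_C=35, RH_use=60)
field_hours = 1000 * AF                              # field exposure represented by 1000 h of test
print(f"acceleration factor ~ {AF:.0f}; 1000 h damp heat ~ {field_hours / 8760:.1f} years in the field")
```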

  9. Optimization of β-cyclodextrin-based flavonol extraction from apple pomace using response surface methodology.

    PubMed

    Parmar, Indu; Sharma, Sowmya; Rupasinghe, H P Vasantha

    2015-04-01

    The present study investigated five cyclodextrins (CDs) for the extraction of flavonols from apple pomace powder and optimized β-CD based extraction of total flavonols using response surface methodology. A 2^3 central composite design with β-CD concentration (0-5 g 100 mL^-1), extraction temperature (20-72 °C), extraction time (6-48 h) and a second-order quadratic model for the total flavonol yield (mg 100 g^-1 DM) was selected to generate the response surface curves. The optimal conditions obtained were: β-CD concentration, 2.8 g 100 mL^-1; extraction temperature, 45 °C; and extraction time, 25.6 h, which predicted the extraction of 166.6 mg total flavonols 100 g^-1 DM. The predicted amount was comparable to the experimental amount of 151.5 mg total flavonols 100 g^-1 DM obtained under the optimal β-CD based parameters, thereby giving a low absolute error and confirming the adequacy of the fitted model. In addition, the results from the optimized extraction conditions showed values similar to those obtained through a previously established solvent-based, sonication-assisted flavonol extraction procedure. To the best of our knowledge, this is the first study to optimize aqueous β-CD based flavonol extraction, which presents an environmentally safe method for value-addition to under-utilized bio resources.

  10. Risk assessment of groundwater level variability using variable Kriging methods

    NASA Astrophysics Data System (ADS)

    Spanoudaki, Katerina; Kampanis, Nikolaos A.

    2015-04-01

    Assessment of the water table level spatial variability in aquifers provides useful information regarding optimal groundwater management. This information becomes more important in basins where the water table level has fallen significantly. The spatial variability of the water table level in this work is estimated based on hydraulic head measured during the wet period of the hydrological year 2007-2008, in a sparsely monitored basin in Crete, Greece, which is of high socioeconomic and agricultural interest. Three Kriging-based methodologies are elaborated in Matlab environment to estimate the spatial variability of the water table level in the basin. The first methodology is based on the Ordinary Kriging approach, the second involves auxiliary information from a Digital Elevation Model in terms of Residual Kriging and the third methodology calculates the probability of the groundwater level to fall below a predefined minimum value that could cause significant problems in groundwater resources availability, by means of Indicator Kriging. The Box-Cox methodology is applied to normalize both the data and the residuals for improved prediction results. In addition, various classical variogram models are applied to determine the spatial dependence of the measurements. The Matérn model proves to be the optimal, which in combination with Kriging methodologies provides the most accurate cross validation estimations. Groundwater level and probability maps are constructed to examine the spatial variability of the groundwater level in the basin and the associated risk that certain locations exhibit regarding a predefined minimum value that has been set for the sustainability of the basin's groundwater resources. Acknowledgement The work presented in this paper has been funded by the Greek State Scholarships Foundation (IKY), Fellowships of Excellence for Postdoctoral Studies (Siemens Program), 'A simulation-optimization model for assessing the best practices for the protection of surface water and groundwater in the coastal zone', (2013 - 2015).
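    The Ordinary Kriging step with a Matérn covariance can be sketched as below; well coordinates, heads and variogram parameters are synthetic, and the Box-Cox transformation and the Residual/Indicator Kriging variants are omitted.

```python
# Hedged sketch of Ordinary Kriging with a Matérn (nu = 3/2) covariance.
# Coordinates, heads and variogram parameters are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
xy = rng.uniform(0, 10, size=(40, 2))                  # monitoring wells (km)
head = 50 - 1.5 * xy[:, 0] + rng.normal(0, 0.5, 40)    # water-table level (m)

def matern32(h, sill=4.0, length=3.0):
    a = np.sqrt(3.0) * h / length
    return sill * (1.0 + a) * np.exp(-a)

def ordinary_kriging(x0):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    n = len(xy)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = matern32(d)                            # data-to-data covariance
    K[:n, n] = K[n, :n] = 1.0                          # unbiasedness (Lagrange) row/column
    K[n, n] = 0.0
    rhs = np.append(matern32(np.linalg.norm(xy - x0, axis=1)), 1.0)
    sol = np.linalg.solve(K, rhs)
    w, mu = sol[:n], sol[n]
    estimate = w @ head
    variance = matern32(0.0) - w @ rhs[:n] - mu        # ordinary kriging variance
    return estimate, variance

est, var = ordinary_kriging(np.array([5.0, 5.0]))
print(f"kriged head: {est:.2f} m, kriging std: {np.sqrt(var):.2f} m")
```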

  11. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback

    PubMed Central

    Cisler, Josh M.; Bush, Keith; James, G. Andrew; Smitherman, Sonet; Kilts, Clinton D.

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD. PMID:26241958
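    The MVPA step amounts to cross-validated classification of trauma versus neutral recall trials from multivoxel patterns; a minimal sketch with synthetic data and a linear support vector classifier (one plausible choice, not necessarily the study's) is shown below.

```python
# Hedged sketch of the MVPA step: cross-validated classification of trauma vs
# neutral memory-recall trials from multivoxel activation patterns (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(6)
n_trials, n_voxels = 40, 500
labels = np.repeat([0, 1], n_trials // 2)             # 0 = neutral, 1 = trauma recall
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.8                     # weak signal in a subset of voxels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(clf, patterns, labels, cv=cv)
print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```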

  12. Decoding the Traumatic Memory among Women with PTSD: Implications for Neurocircuitry Models of PTSD and Real-Time fMRI Neurofeedback.

    PubMed

    Cisler, Josh M; Bush, Keith; James, G Andrew; Smitherman, Sonet; Kilts, Clinton D

    2015-01-01

    Posttraumatic Stress Disorder (PTSD) is characterized by intrusive recall of the traumatic memory. While numerous studies have investigated the neural processing mechanisms engaged during trauma memory recall in PTSD, these analyses have only focused on group-level contrasts that reveal little about the predictive validity of the identified brain regions. By contrast, a multivariate pattern analysis (MVPA) approach towards identifying the neural mechanisms engaged during trauma memory recall would entail testing whether a multivariate set of brain regions is reliably predictive of (i.e., discriminates) whether an individual is engaging in trauma or non-trauma memory recall. Here, we use a MVPA approach to test 1) whether trauma memory vs neutral memory recall can be predicted reliably using a multivariate set of brain regions among women with PTSD related to assaultive violence exposure (N=16), 2) the methodological parameters (e.g., spatial smoothing, number of memory recall repetitions, etc.) that optimize classification accuracy and reproducibility of the feature weight spatial maps, and 3) the correspondence between brain regions that discriminate trauma memory recall and the brain regions predicted by neurocircuitry models of PTSD. Cross-validation classification accuracy was significantly above chance for all methodological permutations tested; mean accuracy across participants was 76% for the methodological parameters selected as optimal for both efficiency and accuracy. Classification accuracy was significantly better for a voxel-wise approach relative to voxels within restricted regions-of-interest (ROIs); classification accuracy did not differ when using PTSD-related ROIs compared to randomly generated ROIs. ROI-based analyses suggested the reliable involvement of the left hippocampus in discriminating memory recall across participants and that the contribution of the left amygdala to the decision function was dependent upon PTSD symptom severity. These results have methodological implications for real-time fMRI neurofeedback of the trauma memory in PTSD and conceptual implications for neurocircuitry models of PTSD that attempt to explain core neural processing mechanisms mediating PTSD.

  13. A STATISTICAL MODELING METHODOLOGY FOR THE DETECTION, QUANTIFICATION, AND PREDICTION OF ECOLOGICAL THRESHOLDS

    EPA Science Inventory

    This study will provide a general methodology for integrating threshold information from multiple species ecological metrics, allow for prediction of changes of alternative stable states, and provide a risk assessment tool that can be applied to adaptive management. The integr...

  14. Using kinematic analysis of movement to predict the time occurrence of an evoked potential associated with a motor command.

    PubMed

    O'Reilly, Christian; Plamondon, Réjean; Landou, Mohamed K; Stemmer, Brigitte

    2013-01-01

    This article presents an exploratory study investigating the possibility of predicting the time occurrence of a motor event related potential (ERP) from a kinematic analysis of human movements. Although the response-locked motor potential may link the ERP components to the recorded response, to our knowledge no previous attempt has been made to predict a priori (i.e. before any contact with the electroencephalographic data) the time occurrence of an ERP component based only on the modeling of an overt response. The proposed analysis relies on the delta-lognormal modeling of velocity, as proposed by the kinematic theory of rapid human movement used in several studies of motor control. Although some methodological aspects of this technique still need to be fine-tuned, the initial results showed that the model-based kinematic analysis allowed the prediction of the time occurrence of a motor command ERP in most participants in the experiment. The average map of the motor command ERPs showed that this signal was stronger in electrodes close to the contra-lateral motor area (Fz, FCz, FC1, and FC3). These results seem to support the claims made by the kinematic theory that a motor command is emitted at time t(0), the time reference parameter of the model. This article proposes a new time marker directly associated with a cerebral event (i.e. the emission of a motor command) that can be used for the development of new data analysis methodologies and for the elaboration of new experimental protocols based on ERP. © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.
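    The delta-lognormal velocity model of the kinematic theory can be written as v(t) = D1*Lambda(t; t0, mu1, sigma1) - D2*Lambda(t; t0, mu2, sigma2), the difference of an agonist and an antagonist lognormal component; the sketch below uses illustrative parameters, whereas in the study they are fitted to each recorded movement and t0 marks the predicted emission time of the motor command.

```python
# Hedged sketch of the delta-lognormal velocity model (illustrative parameters).
import numpy as np

def lognormal_impulse(t, t0, mu, sigma):
    """Lognormal time profile, zero before t0."""
    out = np.zeros_like(t)
    m = t > t0
    x = t[m] - t0
    out[m] = np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (x * sigma * np.sqrt(2 * np.pi))
    return out

def delta_lognormal_velocity(t, D1, D2, t0, mu1, s1, mu2, s2):
    """Agonist minus antagonist lognormal components."""
    return D1 * lognormal_impulse(t, t0, mu1, s1) - D2 * lognormal_impulse(t, t0, mu2, s2)

t = np.linspace(0, 1.0, 1000)                        # seconds
v = delta_lognormal_velocity(t, D1=0.12, D2=0.02, t0=0.05,
                             mu1=-1.6, s1=0.35, mu2=-1.2, s2=0.40)
print("peak velocity %.3f m/s at t = %.3f s (t0 = 0.05 s)" % (v.max(), t[np.argmax(v)]))
```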

  15. Thermomechanical Modeling of Sintered Silver - A Fracture Mechanics-based Approach: Extended Abstract: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paret, Paul P; DeVoto, Douglas J; Narumanchi, Sreekant V

    Sintered silver has proven to be a promising candidate for use as a die-attach and substrate-attach material in automotive power electronics components. It holds promise of greater reliability than lead-based and lead-free solders, especially at higher temperatures (less than 200 degrees Celsius). Accurate predictive lifetime models of sintered silver need to be developed and its failure mechanisms thoroughly characterized before it can be deployed as a die-attach or substrate-attach material in wide-bandgap device-based packages. We present a finite element method (FEM) modeling methodology that can offer greater accuracy in predicting the failure of sintered silver under accelerated thermal cycling. A fracture mechanics-based approach is adopted in the FEM model, and J-integral/thermal cycle values are computed. In this paper, we outline the procedures for obtaining the J-integral/thermal cycle values in a computational model and report on the possible advantage of using these values as modeling parameters in a predictive lifetime model.

  16. Comparison of numerical weather prediction based deterministic and probabilistic wind resource assessment methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jie; Draxl, Caroline; Hopson, Thomas

    Numerical weather prediction (NWP) models have been widely used for wind resource assessment. Model runs with higher spatial resolution are generally more accurate, yet extremely computationally expensive. An alternative approach is to use data generated by a low resolution NWP model, in conjunction with statistical methods. In order to analyze the accuracy and computational efficiency of different types of NWP-based wind resource assessment methods, this paper performs a comparison of three deterministic and probabilistic NWP-based wind resource assessment methodologies: (i) a coarse resolution (0.5 degrees x 0.67 degrees) global reanalysis data set, the Modern-Era Retrospective Analysis for Research and Applications (MERRA); (ii) an analog ensemble methodology based on the MERRA, which provides both deterministic and probabilistic predictions; and (iii) a fine resolution (2-km) NWP data set, the Wind Integration National Dataset (WIND) Toolkit, based on the Weather Research and Forecasting model. Results show that: (i) as expected, the analog ensemble and WIND Toolkit perform significantly better than MERRA, confirming their ability to downscale coarse estimates; (ii) the analog ensemble provides the best estimate of the multi-year wind distribution at seven of the nine sites, while the WIND Toolkit is the best at one site; (iii) the WIND Toolkit is more accurate in estimating the distribution of hourly wind speed differences, which characterizes the wind variability, at five of the available sites, with the analog ensemble being best at the remaining four locations; and (iv) the analog ensemble computational cost is negligible, whereas the WIND Toolkit requires large computational resources. Future efforts could focus on the combination of the analog ensemble with intermediate resolution (e.g., 10-15 km) NWP estimates, to considerably reduce the computational burden, while providing accurate deterministic estimates and reliable probabilistic assessments.
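    The analog ensemble component can be sketched as follows: the k past coarse-model forecasts most similar to the current one are located, and the matching site observations form an ensemble whose mean gives the deterministic estimate and whose spread the probabilistic one. The archive below is synthetic.

```python
# Hedged sketch of the analog-ensemble idea for downscaling a coarse wind estimate.
import numpy as np

rng = np.random.default_rng(7)
coarse = rng.weibull(2.0, 5000) * 8.0                 # historical coarse wind speeds (m/s)
obs = 1.15 * coarse + rng.normal(0, 1.0, coarse.size) # matching site observations

def analog_ensemble(new_coarse, k=50):
    idx = np.argsort(np.abs(coarse - new_coarse))[:k] # k best analogs in the archive
    members = obs[idx]
    return members.mean(), members.std(), np.percentile(members, [10, 90])

mean, spread, (p10, p90) = analog_ensemble(new_coarse=9.3)
print(f"deterministic: {mean:.2f} m/s, spread: {spread:.2f} m/s, 10-90%: {p10:.2f}-{p90:.2f}")
```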

  17. Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising.

    PubMed

    Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery

    2017-01-01

    The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics, self-reported measures of ad effectiveness and the direct number of views on the YouTube channel. In addition, using an artificial neural network based on neuroscience metrics, the model classifies the ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article to examine whether these measures could actually be used for predicting views for advertising on YouTube.

  18. Consumer Neuroscience-Based Metrics Predict Recall, Liking and Viewing Rates in Online Advertising

    PubMed Central

    Guixeres, Jaime; Bigné, Enrique; Ausín Azofra, Jose M.; Alcañiz Raya, Mariano; Colomer Granero, Adrián; Fuentes Hurtado, Félix; Naranjo Ornedo, Valery

    2017-01-01

    The purpose of the present study is to investigate whether the effectiveness of a new ad on digital channels (YouTube) can be predicted by using neural networks and neuroscience-based metrics (brain response, heart rate variability and eye tracking). Neurophysiological records were collected from 35 participants exposed to 8 relevant TV Super Bowl commercials. Correlations between neurophysiological-based metrics, ad recall, ad liking, the ACE metrix score and the number of views on YouTube during a year were investigated. Our findings suggest a significant correlation between neuroscience metrics, self-reported measures of ad effectiveness and the direct number of views on the YouTube channel. In addition, using an artificial neural network based on neuroscience metrics, the model classifies the ads (82.9% average accuracy) and estimates the number of online views (mean error of 0.199). The results highlight the validity of neuromarketing-based techniques for predicting the success of advertising responses. Practitioners can consider the proposed methodology at the design stages of advertising content, thus enhancing advertising effectiveness. The study pioneers the use of neurophysiological methods in predicting advertising success in a digital context. This is the first article to examine whether these measures could actually be used for predicting views for advertising on YouTube. PMID:29163251

  19. Energy index decomposition methodology at the plant level

    NASA Astrophysics Data System (ADS)

    Kumphai, Wisit

    Scope and method of study. The dissertation explores the use of a high-level energy intensity index as a facility-level energy performance monitoring indicator, with the goal of developing a methodology for an economically based energy performance monitoring system that incorporates production information. The performance measure closely monitors energy usage, production quantity, and product mix and determines production efficiency as part of an ongoing process that would enable facility managers to track performance and, in the future, predict when to perform a recommissioning process. The study focuses on the use of index decomposition methodology and explores several high-level (industry, sector, and country level) energy utilization indexes, namely Additive Log Mean Divisia, Multiplicative Log Mean Divisia, and Additive Refined Laspeyres. One level of index decomposition is performed, with the indexes decomposed into Intensity and Product mix effects. These indexes are tested on a flow shop brick manufacturing plant model in three different climates in the United States. The indexes obtained are analyzed by fitting an ARIMA model and testing for dependency between the two decomposed indexes. Findings and conclusions. The results concluded that the Additive Refined Laspeyres index decomposition methodology is suitable for use in a flow shop, non-air-conditioned production environment as an energy performance monitoring indicator. It is likely that this research can be further expanded into predicting when to perform a recommissioning process.
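    An additive Log Mean Divisia (LMDI-I) decomposition, one of the index families examined, can be sketched as below; the change in total energy use is split exactly into activity, product-mix and intensity effects, of which the dissertation's one-level decomposition retains the last two. Production and energy figures are invented.

```python
# Hedged sketch of an additive LMDI-I decomposition at plant level.
import numpy as np

def logmean(a, b):
    return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

Q0, Q1 = np.array([100., 60.]), np.array([90., 90.])     # units of products A, B
E0, E1 = np.array([250., 120.]), np.array([210., 200.])  # energy per product line (GJ)

S0, S1 = Q0 / Q0.sum(), Q1 / Q1.sum()                    # product-mix shares
I0, I1 = E0 / Q0, E1 / Q1                                # energy intensities
w = logmean(E1, E0)                                      # LMDI weights

activity  = float(np.sum(w * np.log(Q1.sum() / Q0.sum())))
structure = float(np.sum(w * np.log(S1 / S0)))           # product-mix effect
intensity = float(np.sum(w * np.log(I1 / I0)))           # intensity effect

print("delta E                   :", round(float(E1.sum() - E0.sum()), 1), "GJ")
print("activity + mix + intensity:", round(activity + structure + intensity, 1), "GJ")
```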

  20. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    NASA Astrophysics Data System (ADS)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently, a new scientific engineering paradigm, structural integrity, has been developing; it is essentially a synergistic collaboration between a number of scientific and engineering disciplines spanning modeling, experiments and numerics. Although the application of the structural integrity paradigm has contributed greatly to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, a research project has been initiated in MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well-grounded foundation for a new modeling framework of structural integrity. This paper presents the first findings of the research project.

  1. Statistical and engineering methods for model enhancement

    NASA Astrophysics Data System (ADS)

    Chang, Chia-Jung

    Models which describe the performance of physical processes are essential for quality prediction, experimental planning, process control and optimization. Engineering models developed from the underlying physics/mechanics of the process, such as analytic models or finite element models, are widely used to capture the deterministic trend of the process. However, there usually exists stochastic randomness in the system, which may introduce discrepancies between physics-based model predictions and observations in reality. Alternatively, statistical models can be developed to obtain predictions purely based on the data generated from the process. However, such models tend to perform poorly when predictions are made away from the observed data points. This dissertation contributes to model enhancement research by integrating physics-based models and statistical models to mitigate their individual drawbacks and provide models with better accuracy by combining the strengths of both. The proposed model enhancement methodologies include the following two streams: (1) a data-driven enhancement approach and (2) an engineering-driven enhancement approach. Through these efforts, more adequate models are obtained, which leads to better performance in system forecasting, process monitoring and decision optimization. Among data-driven enhancement approaches, the Gaussian Process (GP) model provides a powerful methodology for calibrating a physical model in the presence of model uncertainties. However, if the data contain systematic experimental errors, the GP model can lead to an unnecessarily complex adjustment of the physical model. In Chapter 2, we propose a novel enhancement procedure, named “Minimal Adjustment”, which brings the physical model closer to the data by making minimal changes to it. This is achieved by approximating the GP model by a linear regression model and then applying simultaneous variable selection to the model and experimental bias terms. Two real examples and simulations are presented to demonstrate the advantages of the proposed approach. Rather than enhancing the model from a data-driven perspective, an alternative approach is to adjust the model by incorporating additional domain or engineering knowledge when available. This often leads to models that are very simple and easy to interpret. The concepts of engineering-driven enhancement are carried out through two applications to demonstrate the proposed methodologies. In the first application, which focuses on polymer composite quality, nanoparticle dispersion has been identified as a crucial factor affecting the mechanical properties. Transmission Electron Microscopy (TEM) images are commonly used to represent nanoparticle dispersion without further quantification of its characteristics. In Chapter 3, we develop an engineering-driven nonhomogeneous Poisson random field modeling strategy to characterize the nanoparticle dispersion status of nanocomposite polymer, which quantitatively represents the nanomaterial quality presented through image data. The model parameters are estimated through the Bayesian MCMC technique to overcome the challenge of the limited amount of accessible data due to time-consuming sampling schemes. The second application is to statistically calibrate the engineering-driven force models of the laser-assisted micro milling (LAMM) process, which facilitates a systematic understanding and optimization of the targeted processes.
In Chapter 4, the force prediction interval is derived by incorporating the variability in the runout parameters as well as the variability in the measured cutting forces. The experimental results indicate that the model predicts the cutting force profile with good accuracy using a 95% confidence interval. To conclude, this dissertation draws attention to model enhancement, which has considerable impact on the modeling, design, and optimization of various processes and systems. The fundamental methodologies of model enhancement are developed and further applied to various applications. These research activities develop engineering-compliant models for adequate system prediction based on observational data with complex variable relationships and uncertainty, which facilitate process planning, monitoring, and real-time control.

  2. Developing a Data Driven Process-Based Model for Remote Sensing of Ecosystem Production

    NASA Astrophysics Data System (ADS)

    Elmasri, B.; Rahman, A. F.

    2010-12-01

    Estimating ecosystem carbon fluxes at various spatial and temporal scales is essential for quantifying the global carbon cycle. Numerous models have been developed for this purpose using several environmental variables as well as vegetation indices derived from remotely sensed data. Here we present a data driven modeling approach for gross primary production (GPP) that is based on the process-based model BIOME-BGC. The proposed model was run using available remote sensing data, and it does not depend on look-up tables. Furthermore, this approach combines the merits of both empirical and process models, with empirical models used to estimate certain input variables such as light use efficiency (LUE). This was achieved by applying remotely sensed data to the mathematical equations that represent biophysical photosynthesis processes in the BIOME-BGC model. Moreover, a new spectral index for estimating maximum photosynthetic activity, the maximum photosynthetic rate index (MPRI), is also developed and presented here. This new index is based on the ratio between the near infrared and the green bands (ρ858.5/ρ555). The model was tested and validated against the MODIS GPP product and flux measurements from two eddy covariance flux towers located at Morgan Monroe State Forest (MMSF) in Indiana and Harvard Forest in Massachusetts. Satellite data acquired by the Advanced Microwave Scanning Radiometer (AMSR-E) and MODIS were used. The data driven model showed a strong correlation between the predicted and measured GPP at the two eddy covariance flux tower sites. This methodology produced better predictions of GPP than did the MODIS GPP product. Moreover, the proportion of error in the predicted GPP for MMSF and Harvard Forest was dominated by unsystematic errors, suggesting that the results are unbiased. The analysis indicated that maintenance respiration is one of the main factors that dominate the overall model errors, and improvement in maintenance respiration estimation will result in improved GPP predictions. Although there might be room for improvement in our model outcomes through improved parameterization, our results suggest that such a methodology for running the BIOME-BGC model based entirely on routinely available data can produce good predictions of GPP.
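    The light-use-efficiency backbone of the approach, together with the proposed MPRI, can be sketched as follows; the mapping from MPRI to light-use efficiency is an illustrative assumption, not the calibration used in the study.

```python
# Hedged sketch: MPRI as the NIR/green reflectance ratio feeding a light-use-efficiency
# GPP estimate. The MPRI-to-epsilon scaling below is an assumed, illustrative response.
def mpri(nir_858, green_555):
    """Maximum photosynthetic rate index: rho_858.5 / rho_555."""
    return nir_858 / green_555

def gpp(par, fapar, mpri_value, a=1.8, b=0.4):
    """GPP = epsilon * fAPAR * PAR, with epsilon scaled by MPRI (assumed response)."""
    epsilon = a * mpri_value / (b + mpri_value)       # gC per MJ APAR
    return epsilon * fapar * par

nir, green = 0.42, 0.08                               # surface reflectances
par = 9.5                                             # MJ m-2 day-1
fapar = 0.75
print(f"MPRI = {mpri(nir, green):.2f}, GPP ~ {gpp(par, fapar, mpri(nir, green)):.1f} gC m-2 day-1")
```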

  3. Prediction of high-energy radiation belt electron fluxes using a combined VERB-NARMAX model

    NASA Astrophysics Data System (ADS)

    Pakhotin, I. P.; Balikhin, M. A.; Shprits, Y.; Subbotin, D.; Boynton, R.

    2013-12-01

    This study is concerned with the modelling and forecasting of energetic electron fluxes that endanger satellites in space. By combining data-driven predictions from the NARMAX methodology with the physics-based VERB code, it becomes possible to predict electron fluxes with a high level of accuracy and across a radial distance from inside the local acceleration region to out beyond geosynchronous orbit. The model coupling also makes it possible to avoid accounting for seed electron variations at the outer boundary. Conversely, combining a convection code with the VERB and NARMAX models has the potential to provide even greater accuracy in forecasting that is not limited to geostationary orbit but makes predictions across the entire outer radiation belt region.

  4. ADM1-based methodology for the characterisation of the influent sludge in anaerobic reactors.

    PubMed

    Huete, E; de Gracia, M; Ayesa, E; Garcia-Heras, J L

    2006-01-01

    This paper presents a systematic methodology to characterise the influent sludge in terms of the ADM1 components from the experimental measurements traditionally used in wastewater engineering. For this purpose, a complete characterisation of the model components in their elemental mass fractions and charge has been used, making a rigorous mass balance for all the process transformations and enabling the future connection with other unit-process models. It also makes possible the application of mathematical algorithms for the optimal characterisation of several components poorly defined in the ADM1 report. Additionally, decay and disintegration have been necessarily uncoupled so that the decay proceeds directly to hydrolysis instead of producing intermediate composites. The proposed methodology has been applied to the particular experimental work of a pilot-scale CSTR treating real sewage sludge, a mixture of primary and secondary sludge. The results obtained have shown a good characterisation of the influent reflected in good model predictions. However, its limitations for an appropriate prediction of alkalinity and carbon percentages in biogas suggest the convenience of including the elemental characterisation of the process in terms of carbon in the analytical program.

  5. Fundamental Studies of Strength Physics--Methodology of Longevity Prediction of Materials under Arbitrary Thermally and Forced Effects

    ERIC Educational Resources Information Center

    Petrov, Mark G.

    2016-01-01

    Thermally activated analysis of experimental data allows consideration of the structural features of each material. By modelling the structural heterogeneity of materials by means of rheological models, general and local plastic flows in metals and alloys can be described. Based on physical fundamentals of failure and deformation of materials…

  6. Parental Participation and Retention in an Alcohol Preventive Family-Focused Programme

    ERIC Educational Resources Information Center

    Skarstrand, Eva; Branstrom, Richard; Sundell, Knut; Kallmen, Hakan; Andreassen, Sven

    2009-01-01

    Purpose: The purpose of this paper is to examine factors predicting parental participation and retention in a Swedish version of the Strengthening Families Programme (SFP). Design/methodology/approach: This study is based on data from a randomised controlled trial to evaluate the effects of the Swedish version of the SFP. The sample involves 441…

  7. An improved approach for flight readiness certification: Methodology for failure risk assessment and application examples. Volume 3: Structure and listing of programs

    NASA Technical Reports Server (NTRS)

    Moore, N. R.; Ebbeler, D. H.; Newlin, L. E.; Sutharshana, S.; Creager, M.

    1992-01-01

    An improved methodology for quantitatively evaluating failure risk of spaceflight systems to assess flight readiness and identify risk control measures is presented. This methodology, called Probabilistic Failure Assessment (PFA), combines operating experience from tests and flights with engineering analysis to estimate failure risk. The PFA methodology is of particular value when information on which to base an assessment of failure risk, including test experience and knowledge of parameters used in engineering analyses of failure phenomena, is expensive or difficult to acquire. The PFA methodology is a prescribed statistical structure in which engineering analysis models that characterize failure phenomena are used conjointly with uncertainties about analysis parameters and/or modeling accuracy to estimate failure probability distributions for specific failure modes. These distributions can then be modified, by means of statistical procedures of the PFA methodology, to reflect any test or flight experience. Conventional engineering analysis models currently employed for design or failure prediction are used in this methodology. The PFA methodology is described and examples of its application are presented. Conventional approaches to failure risk evaluation for spaceflight systems are discussed, and the rationale for the approach taken in the PFA methodology is presented. The statistical methods, engineering models, and computer software used in fatigue failure mode applications are thoroughly documented.
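
    The core PFA idea, propagating parameter and modeling-accuracy uncertainty through an engineering failure model to obtain a failure probability, can be sketched with a plain Monte Carlo stress-versus-strength example. The distributions and the failure model below are illustrative assumptions, not the statistical structure or models documented in the report.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 200_000

      # Uncertain analysis parameters (illustrative distributions)
      stress = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)    # applied stress, MPa
      strength = rng.lognormal(mean=np.log(420.0), sigma=0.08, size=n)  # material strength, MPa
      model_error = rng.normal(loc=1.0, scale=0.05, size=n)             # multiplicative modeling-accuracy factor

      failures = stress * model_error > strength
      p_fail = failures.mean()
      se = np.sqrt(p_fail * (1 - p_fail) / n)
      print(f"estimated failure probability: {p_fail:.2e} +/- {se:.1e}")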

  8. Could the outcome of the 2016 US elections have been predicted from past voting patterns?

    NASA Astrophysics Data System (ADS)

    Schmitz, Peter M. U.; Holloway, Jennifer P.; Dudeni-Tlhone, Nontembeko; Ntlangu, Mbulelo B.; Koen, Renee

    2018-05-01

    For some years, a team of analysts has been using statistical techniques to predict election outcomes during election nights in South Africa. The prediction method involves using statistical clusters based on past voting patterns to predict final election outcomes, using a small number of released vote counts. With the US presidential elections in November 2016 hitting the global media headlines during the time period directly after successful predictions were made for the South African elections, the team decided to investigate adapting their method to forecast the final outcome in the US elections. In particular, it was felt that the time zone differences between states would affect the time at which results are released and thereby provide a window of opportunity for doing election night prediction using only the early results from the eastern side of the US. Testing the method on the US presidential elections would have two advantages: it would determine whether the core methodology could be generalised, and whether it would work to include a stronger spatial element in the modelling, since the early results released would be spatially biased due to time zone differences. This paper presents a high-level view of the overall methodology and how it was adapted to predict the results of the US presidential elections. A discussion on the clustering of spatial units within the US is also provided, and the spatial distribution of results together with the Electoral College prediction results from both a 'test-run' and the final 2016 presidential elections are given and analysed.
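
    A minimal sketch of the election-night idea follows: districts are clustered on past vote shares, the swing observed in early-reporting districts is measured within each cluster, and that swing is applied to the cluster's unreported districts. All data, the cluster count and the reporting pattern are synthetic, so this is an illustration of the principle rather than the team's actual model.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      n_districts = 1000
      past_share = np.clip(rng.normal(0.5, 0.15, size=(n_districts, 1)), 0.05, 0.95)  # past share for party A
      turnout = rng.integers(500, 5000, size=n_districts)

      clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(past_share)

      # Current-election shares drift from past behaviour (synthetic ground truth);
      # only the "early" districts (e.g. eastern time zones) have reported so far.
      true_share = np.clip(past_share[:, 0] + rng.normal(0.02, 0.05, size=n_districts), 0, 1)
      reported = rng.random(n_districts) < 0.25

      projected = past_share[:, 0].copy()             # default: assume districts vote as before
      projected[reported] = true_share[reported]      # reported districts are known exactly
      for c in range(5):
          members = clusters == c
          obs = members & reported
          if obs.any():
              swing = (true_share[obs] - past_share[obs, 0]).mean()   # swing seen within the cluster
              projected[members & ~reported] = np.clip(past_share[members & ~reported, 0] + swing, 0, 1)

      estimate = np.average(projected, weights=turnout)
      print(f"projected share for party A: {estimate:.3f} "
            f"(true: {np.average(true_share, weights=turnout):.3f})")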

  9. The application of molecular modelling in the safety assessment of chemicals: A case study on ligand-dependent PPARγ dysregulation.

    PubMed

    Al Sharif, Merilin; Tsakovska, Ivanka; Pajeva, Ilza; Alov, Petko; Fioravanzo, Elena; Bassan, Arianna; Kovarich, Simona; Yang, Chihae; Mostrag-Szlichtyng, Aleksandra; Vitcheva, Vessela; Worth, Andrew P; Richarz, Andrea-N; Cronin, Mark T D

    2017-12-01

    The aim of this paper was to provide a proof of concept demonstrating that molecular modelling methodologies can be employed as a part of an integrated strategy to support toxicity prediction consistent with the mode of action/adverse outcome pathway (MoA/AOP) framework. To illustrate the role of molecular modelling in predictive toxicology, a case study was undertaken in which molecular modelling methodologies were employed to predict the activation of the peroxisome proliferator-activated nuclear receptor γ (PPARγ) as a potential molecular initiating event (MIE) for liver steatosis. A stepwise procedure combining different in silico approaches (virtual screening based on docking and pharmacophore filtering, and molecular field analysis) was developed to screen for PPARγ full agonists and to predict their transactivation activity (EC50). The performance metrics of the classification model to predict PPARγ full agonists were balanced accuracy=81%, sensitivity=85% and specificity=76%. The 3D QSAR model developed to predict the EC50 of PPARγ full agonists had the following statistical parameters: q²cv = 0.610, Nopt = 7, SEPcv = 0.505, r²pr = 0.552. To support the linkage of PPARγ agonism predictions to prosteatotic potential, molecular modelling was combined with independently performed mechanistic mining of available in vivo toxicity data followed by ToxPrint chemotypes analysis. The approaches investigated demonstrated a potential to predict the MIE, to facilitate the process of MoA/AOP elaboration, to increase the scientific confidence in AOP, and to become a basis for 3D chemotype development. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
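
    For readers unfamiliar with the reported screening metrics, the short sketch below shows how balanced accuracy, sensitivity and specificity are computed from predicted versus known agonist labels. The labels are synthetic; the actual classification model of the study is not reproduced here.

      import numpy as np
      from sklearn.metrics import confusion_matrix, balanced_accuracy_score

      # y_true: 1 = confirmed PPARgamma full agonist, 0 = inactive (synthetic example labels)
      y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1])
      y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1])   # output of a docking/pharmacophore screen

      tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
            f"balanced accuracy={balanced_accuracy_score(y_true, y_pred):.2f}")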

  10. Predicting plant biomass accumulation from image-derived parameters

    PubMed Central

    Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian

    2018-01-01

    Background: Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they offer great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key goal for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results: In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results showed that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, the methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions: We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559
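
    A hedged sketch of the modelling step is given below: a random forest regressor is fitted to image-derived features, cross-validated, and queried for feature importances, mirroring the kind of analysis described above. The features and the biomass values are synthetic stand-ins for the barley data.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      n = 300
      # Hypothetical image-derived features: projected area, plant height, convex-hull area, greenness index
      X = rng.uniform(size=(n, 4))
      biomass = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.2, size=n)  # synthetic target

      model = RandomForestRegressor(n_estimators=500, random_state=0)
      r2 = cross_val_score(model, X, biomass, cv=5, scoring="r2").mean()
      model.fit(X, biomass)

      print(f"cross-validated R^2: {r2:.2f}")
      for name, imp in zip(["area", "height", "hull", "greenness"], model.feature_importances_):
          print(f"  relative importance of {name}: {imp:.2f}")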

  11. Integrated Systems Health Management (ISHM) Toolkit

    NASA Technical Reports Server (NTRS)

    Venkatesh, Meera; Kapadia, Ravi; Walker, Mark; Wilkins, Kim

    2013-01-01

    A framework of software components has been implemented to facilitate the development of ISHM systems according to a methodology based on Reliability Centered Maintenance (RCM). This framework is collectively referred to as the Toolkit and was developed using General Atomics' Health MAP (TM) technology. The toolkit is intended to provide assistance to software developers of mission-critical system health monitoring applications in the specification, implementation, configuration, and deployment of such applications. In addition to software tools designed to facilitate these objectives, the toolkit also provides direction to software developers in accordance with an ISHM specification and development methodology. The development tools are based on an RCM approach for the development of ISHM systems. This approach focuses on defining, detecting, and predicting the likelihood of system functional failures and their undesirable consequences.

  12. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks.

    PubMed

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software developed to deliver prediction-based results poses a particular challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather reports is an area where extensive research has been ongoing for the last three decades. Initially, problems in financial analysis and prediction were solved by statistical models and methods. Over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to handle financial data and produce accurate predictions of future trends and prices. This paper addresses architectural design issues for performance improvement by vectorising and combining the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested to assess the accuracy and performance of the parallel algorithms used.

  13. Software Design Challenges in Time Series Prediction Systems Using Parallel Implementation of Artificial Neural Networks

    PubMed Central

    Manikandan, Narayanan; Subha, Srinivasan

    2016-01-01

    The software development life cycle has been characterized by destructive disconnects between activities such as planning, analysis, design, and programming. Software developed to deliver prediction-based results poses a particular challenge for designers. Time series forecasting of data such as currency exchange rates, stock prices, and weather reports is an area where extensive research has been ongoing for the last three decades. Initially, problems in financial analysis and prediction were solved by statistical models and methods. Over the last two decades, a large number of Artificial Neural Network based learning models have been proposed to handle financial data and produce accurate predictions of future trends and prices. This paper addresses architectural design issues for performance improvement by vectorising and combining the strengths of multivariate econometric time series models and Artificial Neural Networks. It provides an adaptive, hybrid methodology for predicting exchange rates. The framework is tested to assess the accuracy and performance of the parallel algorithms used. PMID:26881271
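
    The hybrid idea described in the two records above, a linear econometric component plus a neural network for the residual nonlinearity, can be sketched as follows. The exchange-rate series, lag order and network size are synthetic and illustrative, and the parallel implementation discussed in the paper is not shown.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(3)
      n, p = 1200, 5
      rate = np.cumsum(rng.normal(scale=0.01, size=n)) + 1.2          # synthetic "exchange rate" series

      # Lag matrix: predict rate[t] from the previous p observations
      X = np.column_stack([rate[i:n - p + i] for i in range(p)])
      y = rate[p:]
      X_tr, X_te, y_tr, y_te = X[:-200], X[-200:], y[:-200], y[-200:]

      # Stage 1: linear autoregression (the econometric component)
      coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X_tr)), X_tr], y_tr, rcond=None)
      lin_tr = np.c_[np.ones(len(X_tr)), X_tr] @ coef
      lin_te = np.c_[np.ones(len(X_te)), X_te] @ coef

      # Stage 2: a small ANN fitted to the residual nonlinearity
      ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
      ann.fit(X_tr, y_tr - lin_tr)
      hybrid_pred = lin_te + ann.predict(X_te)

      rmse = np.sqrt(np.mean((hybrid_pred - y_te) ** 2))
      print(f"hybrid out-of-sample RMSE: {rmse:.4f}")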

  14. Prediction of brain tissue temperature using near-infrared spectroscopy.

    PubMed

    Holper, Lisa; Mitra, Subhabrata; Bale, Gemma; Robertson, Nicola; Tachtsidis, Ilias

    2017-04-01

    Broadband near-infrared spectroscopy (NIRS) can provide an endogenous indicator of tissue temperature based on the temperature dependence of the water absorption spectrum. We describe a first evaluation of the calibration and prediction of brain tissue temperature obtained during hypothermia in newborn piglets (animal dataset) and rewarming in newborn infants (human dataset) based on measured body (rectal) temperature. The calibration using partial least squares regression proved to be a reliable method to predict brain tissue temperature with respect to core body temperature in the wavelength interval of 720 to 880 nm with a strong mean predictive power of [Formula: see text] (animal dataset) and [Formula: see text] (human dataset). In addition, we applied regression receiver operating characteristic curves for the first time to evaluate the temperature prediction, which provided an overall mean error bias between NIRS-predicted brain temperature and body temperature of [Formula: see text] (animal dataset) and [Formula: see text] (human dataset). We discuss the main methodological aspects, particularly the well-known aspect of over- versus underestimation between brain and body temperature, which is relevant for potential clinical applications.
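
    The calibration step can be sketched with a partial least squares regression of spectra onto a reference temperature, evaluated by cross-validation. The spectra below are synthetic (a temperature-dependent absorption-like feature on the 720 to 880 nm interval plus noise), so the numbers are illustrative only.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(5)
      n_samples, n_wavelengths = 60, 160
      temperature = rng.uniform(33.0, 39.0, n_samples)        # reference (body) temperature, deg C

      # Synthetic spectra: a temperature-dependent water-absorption-like feature plus noise
      wl = np.linspace(720, 880, n_wavelengths)
      peak = np.exp(-0.5 * ((wl - 840) / 15) ** 2)
      spectra = np.outer(0.02 * (temperature - 33.0), peak) \
                + rng.normal(scale=0.002, size=(n_samples, n_wavelengths))

      pls = PLSRegression(n_components=3)
      predicted = cross_val_predict(pls, spectra, temperature, cv=10).ravel()

      r2 = 1 - np.sum((predicted - temperature) ** 2) / np.sum((temperature - temperature.mean()) ** 2)
      bias = np.mean(predicted - temperature)
      print(f"cross-validated R^2: {r2:.2f}, mean error bias: {bias:+.2f} deg C")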

  15. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  16. Meta-path based heterogeneous combat network link prediction

    NASA Astrophysics Data System (ADS)

    Li, Jichao; Ge, Bingfeng; Yang, Kewei; Chen, Yingwu; Tan, Yuejin

    2017-09-01

    The combat system-of-systems in high-tech informative warfare, composed of many interconnected combat systems of different types, can be regarded as a type of complex heterogeneous network. Link prediction for heterogeneous combat networks (HCNs) is of significant military value, as it facilitates reconfiguring combat networks to represent the complex real-world network topology as appropriate with observed information. This paper proposes a novel integrated methodology framework called HCNMP (HCN link prediction based on meta-path) to predict multiple types of links simultaneously for an HCN. More specifically, the concept of HCN meta-paths is introduced, through which the HCNMP can accumulate information by extracting different features of HCN links for all the six defined types. Next, an HCN link prediction model, based on meta-path features, is built to predict all types of links of the HCN simultaneously. Then, the solution algorithm for the HCN link prediction model is proposed, in which the prediction results are obtained by iteratively updating with the newly predicted results until the results in the HCN converge or reach a certain maximum iteration number. Finally, numerical experiments on the dataset of a real HCN are conducted to demonstrate the feasibility and effectiveness of the proposed HCNMP, in comparison with 30 baseline methods. The results show that the performance of the HCNMP is superior to those of the baseline methods.
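
    The core mechanism, using meta-path counts between node pairs as features for a link classifier, can be illustrated on a toy typed network. The three node types, the single meta-path and the logistic-regression classifier below are illustrative assumptions and not the six link types or the iterative HCNMP algorithm of the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(11)
      n_s, n_d, n_i = 8, 6, 5            # illustrative node types: sensor, decision, influence

      A_sd = (rng.random((n_s, n_d)) < 0.4).astype(float)    # sensor -> decision links
      A_di = (rng.random((n_d, n_i)) < 0.4).astype(float)    # decision -> influence links
      A_si = ((A_sd @ A_di) + rng.random((n_s, n_i)) > 1.2).astype(int)  # sensor -> influence links to predict

      # Meta-path feature for a (sensor, influence) pair: number of sensor->decision->influence paths
      meta_path_counts = A_sd @ A_di
      X = meta_path_counts.reshape(-1, 1)
      y = A_si.reshape(-1)

      clf = LogisticRegression().fit(X, y)
      scores = clf.predict_proba(X)[:, 1].reshape(n_s, n_i)
      print("predicted link score for (sensor 0, influence 3):", round(scores[0, 3], 3))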

  17. Extended cooperative control synthesis

    NASA Technical Reports Server (NTRS)

    Davidson, John B.; Schmidt, David K.

    1994-01-01

    This paper reports on research for extending the Cooperative Control Synthesis methodology to include a more accurate modeling of the pilot's controller dynamics. Cooperative Control Synthesis (CCS) is a methodology that addresses the problem of how to design control laws for piloted, high-order, multivariate systems and/or non-conventional dynamic configurations in the absence of flying qualities specifications. This is accomplished by emphasizing the parallel structure inherent in any pilot-controlled, augmented vehicle. The original CCS methodology is extended to include the Modified Optimal Control Model (MOCM), which is based upon the optimal control model of the human operator developed by Kleinman, Baron, and Levison in 1970. This model provides a modeling of the pilot's compensation dynamics that is more accurate than the simplified pilot dynamic representation currently in the CCS methodology. Inclusion of the MOCM into the CCS also enables the modeling of pilot-observation perception thresholds and pilot-observation attention allocation effects. This Extended Cooperative Control Synthesis (ECCS) allows for the direct calculation of pilot and system open- and closed-loop transfer functions in pole/zero form and is readily implemented in current software capable of analysis and design for dynamic systems. Example results based upon synthesizing an augmentation control law for an acceleration command system in a compensatory tracking task using the ECCS are compared with a similar synthesis performed by using the original CCS methodology. The ECCS is shown to provide augmentation control laws that yield more favorable, predicted closed-loop flying qualities and tracking performance than those synthesized using the original CCS methodology.

  18. PCB congener analysis with Hall electrolytic conductivity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edstrom, R.D.

    1989-01-01

    This work reports the development of an analytical methodology for the analysis of PCB congeners based on integrating relative retention data provided by other researchers. The retention data were transposed into a multiple retention marker system which provided good precision in the calculation of relative retention indices for PCB congener analysis. Analytical run times for the developed methodology were approximately one hour using a commercially available GC capillary column. A Tracor Model 700A Hall Electrolytic Conductivity Detector (HECD) was employed in the GC detection of Aroclor standards and environmental samples. Responses by the HECD provided good sensitivity and were reasonably predictable. Ten response factors were calculated based on the molar chlorine content of each homolog group. Homolog distributions were determined for Aroclors 1016, 1221, 1232, 1242, 1248, 1254, 1260, 1262 along with binary and ternary mixtures of the same. These distributions were compared with distributions reported by other researchers using electron capture detection as well as chemical ionization mass spectrometric methodologies. Homolog distributions acquired by the HECD methodology showed good correlation with the previously mentioned methodologies. The developed analytical methodology was used in the analysis of bluefish (Pomatomas saltatrix) and weakfish (Cynoscion regalis) collected from the York River, lower James River and lower Chesapeake Bay in Virginia. Total PCB concentrations were calculated and homolog distributions were constructed from the acquired data. Increases in total PCB concentrations were found in the analyzed fish samples during the fall of 1985 collected from the lower James River and lower Chesapeake Bay.

  19. Prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy: a systematic review and external validation study.

    PubMed

    Hilkens, N A; Algra, A; Greving, J P

    2016-01-01

    ESSENTIALS: Prediction models may help to identify patients at high risk of bleeding on antiplatelet therapy. We identified existing prediction models for bleeding and validated them in patients with cerebral ischemia. Five prediction models were identified, all of which had some methodological shortcomings. Performance in patients with cerebral ischemia was poor. Background Antiplatelet therapy is widely used in secondary prevention after a transient ischemic attack (TIA) or ischemic stroke. Bleeding is the main adverse effect of antiplatelet therapy and is potentially life threatening. Identification of patients at increased risk of bleeding may help target antiplatelet therapy. This study sought to identify existing prediction models for intracranial hemorrhage or major bleeding in patients on antiplatelet therapy and evaluate their performance in patients with cerebral ischemia. We systematically searched PubMed and Embase for existing prediction models up to December 2014. The methodological quality of the included studies was assessed with the CHARMS checklist. Prediction models were externally validated in the European Stroke Prevention Study 2, comprising 6602 patients with a TIA or ischemic stroke. We assessed discrimination and calibration of included prediction models. Five prediction models were identified, of which two were developed in patients with previous cerebral ischemia. Three studies assessed major bleeding, one studied intracerebral hemorrhage and one gastrointestinal bleeding. None of the studies met all criteria of good quality. External validation showed poor discriminative performance, with c-statistics ranging from 0.53 to 0.64 and poor calibration. A limited number of prediction models is available that predict intracranial hemorrhage or major bleeding in patients on antiplatelet therapy. The methodological quality of the models varied, but was generally low. Predictive performance in patients with cerebral ischemia was poor. In order to reliably predict the risk of bleeding in patients with cerebral ischemia, development of a prediction model according to current methodological standards is needed. © 2015 International Society on Thrombosis and Haemostasis.

  20. Radiation from advanced solid rocket motor plumes

    NASA Technical Reports Server (NTRS)

    Farmer, Richard C.; Smith, Sheldon D.; Myruski, Brian L.

    1994-01-01

    The overall objective of this study was to develop an understanding of solid rocket motor (SRM) plumes in sufficient detail to accurately explain the majority of plume radiation test data. Improved flowfield and radiation analysis codes were developed to accurately and efficiently account for all the factors which affect radiation heating from rocket plumes. These codes were verified by comparing predicted plume behavior with measured NASA/MSFC ASRM test data. Upon conducting a thorough review of the current state-of-the-art of SRM plume flowfield and radiation prediction methodology and the pertinent data base, the following analyses were developed for future design use. The NOZZRAD code was developed for preliminary base heating design and Al2O3 particle optical property data evaluation using a generalized two-flux solution to the radiative transfer equation. The IDARAD code was developed for rapid evaluation of plume radiation effects using the spherical harmonics method of differential approximation to the radiative transfer equation. The FDNS CFD code with fully coupled Euler-Lagrange particle tracking was validated by comparison to predictions made with the industry standard RAMP code for SRM nozzle flowfield analysis. The FDNS code provides the ability to analyze not only rocket nozzle flow, but also axisymmetric and three-dimensional plume flowfields with state-of-the-art CFD methodology. Procedures for conducting meaningful thermo-vision camera studies were developed.

  1. Assessment of cluster yield components by image analysis.

    PubMed

    Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose

    2015-04-01

    Berry weight, berry number and cluster weight are key parameters for yield estimation for the wine and table grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or using four images per cluster from different orientations. The best results (R² between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image analysis-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
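
    A minimal sketch of the image pipeline follows: Canny edge detection to outline the berries and a circular Hough transform to detect and count them, here applied to a synthetic image of drawn discs rather than a real cluster photograph. All thresholds and radii are illustrative, not the tuned values of the study.

      import cv2
      import numpy as np

      image = np.zeros((300, 300, 3), dtype=np.uint8)
      rng = np.random.default_rng(0)
      for _ in range(12):                                   # draw 12 berry-like discs
          x, y = rng.integers(40, 260, size=2)
          cv2.circle(image, (int(x), int(y)), 18, (40, 40, 180), -1)

      gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
      gray = cv2.GaussianBlur(gray, (9, 9), 2)
      edges = cv2.Canny(gray, 50, 150)                      # berry contours (the Canny variant of the two approaches)

      circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                                 param1=150, param2=20, minRadius=10, maxRadius=30)
      if circles is not None:
          berries = np.round(circles[0]).astype(int)        # each row: (x, y, radius) in pixels
          print(f"detected {len(berries)} candidate berries, mean radius {berries[:, 2].mean():.1f} px")
          # Berry count and size, together with a pixel-to-mm scale and a per-variety model,
          # would then feed a cluster-weight regression such as the one reported above.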

  2. Baseline Computational Fluid Dynamics Methodology for Longitudinal-Mode Liquid-Propellant Rocket Combustion Instability

    NASA Technical Reports Server (NTRS)

    Litchford, R. J.

    2005-01-01

    A computational method for the analysis of longitudinal-mode liquid rocket combustion instability has been developed based on the unsteady, quasi-one-dimensional Euler equations where the combustion process source terms were introduced through the incorporation of a two-zone, linearized representation: (1) A two-parameter collapsed combustion zone at the injector face, and (2) a two-parameter distributed combustion zone based on a Lagrangian treatment of the propellant spray. The unsteady Euler equations in inhomogeneous form retain full hyperbolicity and are integrated implicitly in time using second-order, high-resolution, characteristic-based, flux-differencing spatial discretization with Roe-averaging of the Jacobian matrix. This method was initially validated against an analytical solution for nonreacting, isentropic duct acoustics with specified admittances at the inflow and outflow boundaries. For small amplitude perturbations, numerical predictions for the amplification coefficient and oscillation period were found to compare favorably with predictions from linearized small-disturbance theory as long as the grid exceeded a critical density (100 nodes/wavelength). The numerical methodology was then exercised on a generic combustor configuration using both collapsed and distributed combustion zone models with a short nozzle admittance approximation for the outflow boundary. In these cases, the response parameters were varied to determine stability limits defining resonant coupling onset.

  3. Yield estimation of corn with multispectral data and the potential of using imaging spectrometers

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1997-05-01

    In the frame of the special yield estimation, a regular procedure conducted for the European Union to more accurately estimate agricultural yield, a project was conducted for the State Ministry for Rural Environment, Food and Forestry of Baden-Wuerttemberg (Germany) to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on 4 LANDSAT-derived estimates and daily meteorological data, the grain yield of corn stands was determined for 1995. The modeled yield was compared with results independently gathered within the special yield estimation for 23 test fields in the Upper Rhine Valley. The agreement between LANDSAT-based estimates and the Special Yield Estimation shows a relative error of 2.3 percent. The comparison of the results for single fields shows that, six weeks before harvest, the grain yield of single corn fields was estimated with a mean relative accuracy of 13 percent using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications, hyperspectral sensors show great potential to further enhance the results of yield prediction with remote sensing.

  4. [Optimal extraction of effective constituents from Aralia elata by central composite design and response surface methodology].

    PubMed

    Lv, Shao-Wa; Liu, Dong; Hu, Pan-Pan; Ye, Xu-Yan; Xiao, Hong-Bin; Kuang, Hai-Xue

    2010-03-01

    The aim was to optimize the process of extracting effective constituents from Aralia elata by response surface methodology. The independent variables were ethanol concentration, reflux time and solvent fold; the dependent variable was the extraction rate of total saponins in Aralia elata. Linear and non-linear mathematical models were used to estimate the relationship between the independent and dependent variables, and response surface methodology was used to optimize the extraction process. The predictive ability was assessed by comparing the observed and predicted values. The regression coefficient of the fitted binomial model was as high as 0.9617, and the optimum extraction conditions were 70% ethanol, 2.5 hours of reflux, 20-fold solvent and 3 extractions. The bias between observed and predicted values was -2.41%, showing that the optimized model is highly predictive.
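
    The RSM step amounts to fitting a quadratic model over a central-composite design and reading off its optimum. The sketch below does exactly that in coded variables; the design points and responses are invented, not the Aralia elata measurements.

      import numpy as np

      # Coded factors: x1 = ethanol concentration, x2 = reflux time, x3 = solvent fold (illustrative design)
      X = np.array([[-1, -1, -1], [1, -1, -1], [-1, 1, -1], [1, 1, -1],
                    [-1, -1, 1], [1, -1, 1], [-1, 1, 1], [1, 1, 1],
                    [0, 0, 0], [0, 0, 0], [1.68, 0, 0], [-1.68, 0, 0],
                    [0, 1.68, 0], [0, -1.68, 0], [0, 0, 1.68], [0, 0, -1.68]], float)
      y = np.array([5.1, 6.0, 5.8, 6.4, 5.5, 6.3, 6.1, 6.6,
                    6.9, 7.0, 6.2, 4.9, 6.5, 5.3, 6.4, 5.6])     # total saponin yield, % (synthetic)

      def quad_terms(x):
          x1, x2, x3 = x.T
          return np.column_stack([np.ones(len(x)), x1, x2, x3,
                                  x1 * x2, x1 * x3, x2 * x3,
                                  x1 ** 2, x2 ** 2, x3 ** 2])

      beta, *_ = np.linalg.lstsq(quad_terms(X), y, rcond=None)
      pred = quad_terms(X) @ beta
      r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
      print(f"R^2 of the quadratic model: {r2:.3f}")

      # Crude grid search for the predicted optimum over the coded design space
      grid = np.array(np.meshgrid(*[np.linspace(-1.68, 1.68, 30)] * 3)).reshape(3, -1).T
      best = grid[np.argmax(quad_terms(grid) @ beta)]
      print("predicted optimum (coded x1, x2, x3):", np.round(best, 2))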

  5. Service lifetime prediction for encapsulated photovoltaic cells/minimodules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czanderna, A.W.; Jorgensen, G.J.

    The overall purposes of this paper are to elucidate the crucial importance of service lifetime prediction (SLP) for photovoltaic (PV) modules and to present an outline for developing a SLP methodology for encapsulated PV cells and minimodules. The specific objectives are (a) to illustrate the generic nature of SLP for several types of solar energy conversion devices, (b) to summarize the major durability issues concerned with these devices, (c) to justify using SLP in the triad of cost, performance, and durability instead of only durability, (d) to define and explain the seven major elements that comprise a generic SLP methodology, (e) to provide background about implementing the SLP methodology for PV cells and minimodules including the complexity of the encapsulation problems, (f) to summarize briefly the past focus of our task for improving and/or replacing ethylene vinyl acetate (EVA) as a PV pottant, and (g) to provide an outline of our present and future studies using encapsulated PV cells and minimodules for improving the encapsulation of PV cells and predicting a service lifetime for them using the SLP methodology outlined in objective (d). By using this methodology, our major conclusion is that predicting the service lifetime of PV cells and minimodules is possible. © 1997 American Institute of Physics.

  6. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration

    NASA Astrophysics Data System (ADS)

    Godinho, Robson B.; Santos, Mauricio C.; Poppi, Ronei J.

    2016-03-01

    An alternative methodology is herein proposed for determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were consistent with those of the conventional method, with standard error of prediction values lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for determination of the fragrance content in a hydro-alcoholic solution for use in manufacturing, quality control and regulatory agencies.

  7. An ex vivo approach to botanical-drug interactions: a proof of concept study.

    PubMed

    Wang, Xinwen; Zhu, Hao-Jie; Munoz, Juliana; Gurley, Bill J; Markowitz, John S

    2015-04-02

    Botanical medicines are frequently used in combination with therapeutic drugs, imposing a risk for harmful botanical-drug interactions (BDIs). Among the existing BDI evaluation methods, clinical studies are the most desirable, but due to their expense and protracted time-line for completion, conventional in vitro methodologies remain the most frequently used BDI assessment tools. However, many predictions generated from in vitro studies are inconsistent with clinical findings. Accordingly, the present study aimed to develop a novel ex vivo approach for BDI assessment and expand the safety evaluation methodology in applied ethnopharmacological research. This approach differs from conventional in vitro methods in that rather than botanical extracts or individual phytochemicals being prepared in artificial buffers, human plasma/serum collected from a limited number of subjects administered botanical supplements was utilized to assess BDIs. To validate the methodology, human plasma/serum samples collected from healthy subjects administered either milk thistle or goldenseal extracts were utilized in incubation studies to determine their potential inhibitory effects on CYP2C9 and CYP3A4/5, respectively. Silybin A and B, two principal milk thistle phytochemicals, and hydrastine and berberine, the purported active constituents in goldenseal, were evaluated in both phosphate buffer and human plasma based in vitro incubation systems. Ex vivo study results were consistent with formal clinical study findings for the effect of milk thistle on the disposition of tolbutamide, a CYP2C9 substrate, and for goldenseal's influence on the pharmacokinetics of midazolam, a widely accepted CYP3A4/5 substrate. Compared to conventional in vitro BDI methodologies of assessment, the introduction of human plasma into the in vitro study model changed the observed inhibitory effect of silybin A, silybin B and hydrastine and berberine on CYP2C9 and CYP3A4/5, respectively, results which more closely mirrored those generated in clinical study. Data from conventional buffer-based in vitro studies were less predictive than the ex vivo assessments. Thus, this novel ex vivo approach may be more effective at predicting clinically relevant BDIs than conventional in vitro methods. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Semi-Supervised Learning of Lift Optimization of Multi-Element Three-Segment Variable Camber Airfoil

    NASA Technical Reports Server (NTRS)

    Kaul, Upender K.; Nguyen, Nhan T.

    2017-01-01

    This chapter describes a new intelligent platform for learning optimal designs of morphing wings based on Variable Camber Continuous Trailing Edge Flaps (VCCTEF) in conjunction with a leading edge flap called the Variable Camber Krueger (VCK). The new platform consists of a Computational Fluid Dynamics (CFD) methodology coupled with a semi-supervised learning methodology. The CFD component of the intelligent platform comprises a full Navier-Stokes solution capability (NASA OVERFLOW solver with Spalart-Allmaras turbulence model) that computes flow over a tri-element inboard NASA Generic Transport Model (GTM) wing section. Various VCCTEF/VCK settings and configurations were considered to explore optimal design for high-lift flight during take-off and landing. To determine the globally optimal design of such a system, an extremely large set of CFD simulations is needed. This is not feasible to achieve in practice. To alleviate this problem, recourse was taken to a semi-supervised learning (SSL) methodology, which is based on manifold regularization techniques. A reasonable space of CFD solutions was populated and then the SSL methodology was used to fit this manifold in its entirety, including the gaps in the manifold where there were no CFD solutions available. The SSL methodology in conjunction with an elastodynamic solver (FiDDLE) was demonstrated in an earlier study involving structural health monitoring. These CFD-SSL methodologies define the new intelligent platform that forms the basis for our search for optimal design of wings. Although the present platform can be used in various other design and operational problems in engineering, this chapter focuses on the high-lift study of the VCK-VCCTEF system. The top few candidate design configurations were identified by solving the CFD problem in a small subset of the design space. The SSL component was trained on the design space, and was then used in a predictive mode to populate a selected set of test points outside of the given design space. The new design test space thus populated was evaluated by using the CFD component by determining the error between the SSL predictions and the true (CFD) solutions, which was found to be small. This demonstrates the proposed CFD-SSL methodologies for isolating the best design of the VCK-VCCTEF system, and it holds promise for quantitatively identifying best designs of flight systems, in general.
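
    As a hedged, much-simplified stand-in for the SSL component, the sketch below propagates lift values from a few "CFD-solved" design points over a k-nearest-neighbour graph of the design space, filling the gaps where no solution exists. The design variables, response and graph construction are illustrative; the actual manifold-regularization machinery of the study is not reproduced.

      import numpy as np
      from sklearn.neighbors import kneighbors_graph

      rng = np.random.default_rng(2)
      designs = rng.uniform(size=(400, 3))                  # e.g. normalized VCK/VCCTEF settings (illustrative)
      lift = np.sin(2 * np.pi * designs[:, 0]) + designs[:, 1] ** 2   # synthetic "true" lift response

      labeled = rng.choice(400, size=40, replace=False)     # the few designs actually run through CFD
      W = kneighbors_graph(designs, n_neighbors=8).toarray()
      W = np.maximum(W, W.T)                                # symmetrized design-space graph

      f = np.full(400, lift[labeled].mean())
      f[labeled] = lift[labeled]
      for _ in range(300):                                  # harmonic propagation: unlabeled = mean of neighbours
          f = (W @ f) / W.sum(axis=1)
          f[labeled] = lift[labeled]                        # clamp the CFD-labeled designs

      unlabeled = np.setdiff1d(np.arange(400), labeled)
      rmse = np.sqrt(np.mean((f[unlabeled] - lift[unlabeled]) ** 2))
      print(f"RMSE of propagated lift on unlabeled designs: {rmse:.3f}")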

  9. Semen parameters can be predicted from environmental factors and lifestyle using artificial intelligence methods.

    PubMed

    Girela, Jose L; Gil, David; Johnsson, Magnus; Gomez-Torres, María José; De Juan, Joaquín

    2013-04-01

    Fertility rates have dramatically decreased in the last two decades, especially in men. It has been described that environmental factors as well as life habits may affect semen quality. In this paper we use artificial intelligence techniques in order to predict semen characteristics resulting from environmental factors, life habits, and health status, with these techniques constituting a possible decision support system that can help in the study of male fertility potential. A total of 123 young, healthy volunteers provided a semen sample that was analyzed according to the World Health Organization 2010 criteria. They also were asked to complete a validated questionnaire about life habits and health status. Sperm concentration and percentage of motile sperm were related to sociodemographic data, environmental factors, health status, and life habits in order to determine the predictive accuracy of a multilayer perceptron network, a type of artificial neural network. In conclusion, we have developed an artificial neural network that can predict the results of the semen analysis based on the data collected by the questionnaire. The semen parameter that is best predicted using this methodology is the sperm concentration. Although the accuracy for motility is slightly lower than that for concentration, it is possible to predict it with a significant degree of accuracy. This methodology can be a useful tool in early diagnosis of patients with seminal disorders or in the selection of candidates to become semen donors.

  10. A Symbiotic Framework for coupling Machine Learning and Geosciences in Prediction and Predictability

    NASA Astrophysics Data System (ADS)

    Ravela, S.

    2017-12-01

    In this presentation we review the two directions of a symbiotic relationship between machine learning and the geosciences in relation to prediction and predictability. In the first direction, we develop an ensemble, information-theoretic and manifold-learning framework to adaptively improve state and parameter estimates in nonlinear high-dimensional non-Gaussian problems, showing in particular that tractable variational approaches can be produced. We demonstrate these applications in the context of autonomous mapping of environmental coherent structures and other idealized problems. In the reverse direction, we show that data assimilation, particularly probabilistic approaches for filtering and smoothing, offers a novel and useful way to train neural networks, and serves as a better basis than gradient-based approaches when we must quantify uncertainty in association with nonlinear, chaotic processes. In many inference problems in geosciences we seek to build reduced models to characterize local sensitivities, adjoints or other mechanisms that propagate innovations and errors. Here, the particular use of neural approaches for such propagation trained using ensemble data assimilation provides a novel framework. Through these two examples of inference problems in the earth sciences, we show that not only is learning useful to broaden existing methodology, but in reverse, geophysical methodology can be used to influence paradigms in learning.

  11. Application of Artificial Neural Network and Response Surface Methodology in Modeling of Surface Roughness in WS2 Solid Lubricant Assisted MQL Turning of Inconel 718

    NASA Astrophysics Data System (ADS)

    Maheshwera Reddy Paturi, Uma; Devarasetti, Harish; Abimbola Fadare, David; Reddy Narala, Suresh Kumar

    2018-04-01

    In the present paper, artificial neural network (ANN) and response surface methodology (RSM) models are used to model surface roughness in WS2 (tungsten disulphide) solid lubricant assisted minimal quantity lubrication (MQL) machining. The experimental data for real-time MQL turning of Inconel 718 considered in this paper were taken from the literature [1]. In the ANN modeling, performance parameters such as mean square error (MSE), mean absolute percentage error (MAPE) and average error in prediction (AEP) for the experimental data were determined using the Levenberg–Marquardt (LM) feed-forward back-propagation training algorithm with tansig as the transfer function. The MATLAB toolbox was utilized for training and testing of the neural network model. A neural network model with three input neurons, one hidden layer with five neurons and one output neuron (3-5-1 architecture) was found to be the most reliable and optimal. The coefficients of determination (R²) for the ANN and RSM models were 0.998 and 0.982, respectively. The surface roughness predictions from the ANN and RSM models were compared with experimentally measured values and found to be in good agreement. However, the prediction efficacy of the ANN model is higher than that of the RSM model.
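
    The network architecture can be mirrored with a small sketch: three inputs, one hidden layer of five neurons, one output. Note the hedges: scikit-learn's tanh activation and L-BFGS solver stand in for the tansig/Levenberg-Marquardt training of the paper, and the cutting data are synthetic.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(4)
      n = 60
      speed = rng.uniform(40, 80, n)        # cutting speed, m/min (illustrative ranges)
      feed = rng.uniform(0.05, 0.2, n)      # feed rate, mm/rev
      depth = rng.uniform(0.2, 0.8, n)      # depth of cut, mm
      Ra = 0.8 + 8.0 * feed + 0.3 * depth - 0.004 * speed + rng.normal(scale=0.05, size=n)  # synthetic roughness

      X = np.column_stack([speed, feed, depth])

      # 3 inputs -> 5 hidden neurons -> 1 output, mirroring the 3-5-1 architecture
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                                         solver="lbfgs", max_iter=5000, random_state=0))
      model.fit(X, Ra)
      pred = model.predict(X)
      mape = np.mean(np.abs((pred - Ra) / Ra)) * 100
      print(f"MAPE on the training data: {mape:.2f}%")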

  12. A hybrid approach to survival model building using integration of clinical and molecular information in censored data.

    PubMed

    Choi, Ickwon; Kattan, Michael W; Wells, Brian J; Yu, Changhong

    2012-01-01

    In medical society, the prognostic models, which use clinicopathologic features and predict prognosis after a certain treatment, have been externally validated and used in practice. In recent years, most research has focused on high dimensional genomic data and small sample sizes. Since clinically similar but molecularly heterogeneous tumors may produce different clinical outcomes, the combination of clinical and genomic information, which may be complementary, is crucial to improve the quality of prognostic predictions. However, there is a lack of an integrating scheme for clinic-genomic models due to the P ≥ N problem, in particular, for a parsimonious model. We propose a methodology to build a reduced yet accurate integrative model using a hybrid approach based on the Cox regression model, which uses several dimension reduction techniques, L₂ penalized maximum likelihood estimation (PMLE), and resampling methods to tackle the problem. The predictive accuracy of the modeling approach is assessed by several metrics via an independent and thorough scheme to compare competing methods. In breast cancer data studies on a metastasis and death event, we show that the proposed methodology can improve prediction accuracy and build a final model with a hybrid signature that is parsimonious when integrating both types of variables.
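
    A hedged sketch of the integration idea, fitting a ridge-penalized Cox model to clinical plus high-dimensional genomic covariates, is given below using the lifelines package (assumed available). The data are simulated, the penalty value is arbitrary, and the dimension-reduction and resampling steps of the proposed hybrid approach are not reproduced.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(8)
      n, n_genes = 150, 30

      clinical = pd.DataFrame({"age": rng.normal(60, 10, n), "tumor_size": rng.normal(2.5, 1.0, n)})
      genomic = pd.DataFrame(rng.normal(size=(n, n_genes)), columns=[f"gene_{i}" for i in range(n_genes)])

      # Simulated survival times driven by a mix of clinical and genomic effects
      risk = 0.03 * clinical["age"] + 0.4 * clinical["tumor_size"] + 0.5 * genomic["gene_0"]
      time = rng.exponential(scale=np.exp(4 - risk))
      cutoff = np.quantile(time, 0.9)                         # administrative censoring

      df = pd.concat([clinical, genomic], axis=1)
      df["time"] = np.minimum(time, cutoff)
      df["event"] = (time <= cutoff).astype(int)

      # L2 (ridge) penalized Cox regression as a stand-in for the paper's PMLE step
      cph = CoxPHFitter(penalizer=0.5)
      cph.fit(df, duration_col="time", event_col="event")
      print(f"concordance index: {cph.concordance_index_:.2f}")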

  13. Strategies to design clinical studies to identify predictive biomarkers in cancer research.

    PubMed

    Perez-Gracia, Jose Luis; Sanmamed, Miguel F; Bosch, Ana; Patiño-Garcia, Ana; Schalper, Kurt A; Segura, Victor; Bellmunt, Joaquim; Tabernero, Josep; Sweeney, Christopher J; Choueiri, Toni K; Martín, Miguel; Fusco, Juan Pablo; Rodriguez-Ruiz, Maria Esperanza; Calvo, Alfonso; Prior, Celia; Paz-Ares, Luis; Pio, Ruben; Gonzalez-Billalabeitia, Enrique; Gonzalez Hernandez, Alvaro; Páez, David; Piulats, Jose María; Gurpide, Alfonso; Andueza, Mapi; de Velasco, Guillermo; Pazo, Roberto; Grande, Enrique; Nicolas, Pilar; Abad-Santos, Francisco; Garcia-Donas, Jesus; Castellano, Daniel; Pajares, María J; Suarez, Cristina; Colomer, Ramon; Montuenga, Luis M; Melero, Ignacio

    2017-02-01

    The discovery of reliable biomarkers to predict efficacy and toxicity of anticancer drugs remains one of the key challenges in cancer research. Despite its relevance, no efficient study designs to identify promising candidate biomarkers have been established. This has led to the proliferation of a myriad of exploratory studies using dissimilar strategies, most of which fail to identify any promising targets and are seldom validated. The lack of a proper methodology also means that many anti-cancer drugs are developed below their potential, owing to the failure to identify predictive biomarkers. While some drugs will be systematically administered to many patients who will not benefit from them, leading to unnecessary toxicities and costs, others will never reach registration due to our inability to identify the specific patient population in which they are active. Despite these drawbacks, a limited number of outstanding predictive biomarkers have been successfully identified and validated, and have changed the standard practice of oncology. In this manuscript, a multidisciplinary panel reviews how those key biomarkers were identified and, based on those experiences, proposes a methodological framework, the DESIGN guidelines, to standardize the clinical design of biomarker identification studies and to develop future research in this pivotal field. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Cure modeling in real-time prediction: How much does it help?

    PubMed

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016 BMC Biomedical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intending to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
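
    A minimal sketch of the cure-mixture ingredient follows: simulate data with a cured fraction and fit a Weibull cure-mixture model by maximum likelihood. The data, parameterization and optimizer choices are illustrative, and the event-time prediction machinery evaluated in the paper is not reproduced.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      n, true_cure, lam, k = 400, 0.3, 12.0, 1.5
      cured = rng.random(n) < true_cure
      event_time = np.where(cured, np.inf, lam * rng.weibull(k, size=n))
      censor_time = rng.uniform(0, 36, size=n)               # accrual/follow-up censoring, months
      time = np.minimum(event_time, censor_time)
      event = (event_time <= censor_time).astype(float)

      def neg_loglik(params):
          """Negative log-likelihood of a Weibull cure-mixture model."""
          logit_pi, log_lam, log_k = params
          pi = 1 / (1 + np.exp(-logit_pi))                   # cure fraction
          lam_, k_ = np.exp(log_lam), np.exp(log_k)
          z = (time / lam_) ** k_
          log_f = np.log(k_ / lam_) + (k_ - 1) * np.log(time / lam_) - z   # Weibull log-density
          surv = pi + (1 - pi) * np.exp(-z)                  # mixture survival (censored subjects)
          ll = event * (np.log(1 - pi) + log_f) + (1 - event) * np.log(surv)
          return -ll.sum()

      fit = minimize(neg_loglik, x0=[0.0, np.log(10.0), 0.0], method="Nelder-Mead")
      pi_hat = 1 / (1 + np.exp(-fit.x[0]))
      print(f"estimated cure fraction: {pi_hat:.2f} (true value {true_cure})")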

  15. Yield estimation of corn based on multitemporal LANDSAT-TM data as input for an agrometeorological model

    NASA Astrophysics Data System (ADS)

    Bach, Heike

    1998-07-01

    In order to test remote sensing data with advanced yield formation models for accuracy and timeliness of yield estimation of corn, a project was conducted for the State Ministry for Rural Environment, Food, and Forestry of Baden-Württemberg (Germany). This project was carried out during the course of the 'Special Yield Estimation', a regular procedure conducted for the European Union, to more accurately estimate agricultural yield. The methodology employed uses field-based plant parameter estimation from atmospherically corrected multitemporal/multispectral LANDSAT-TM data. An agrometeorological plant-production-model is used for yield prediction. Based solely on four LANDSAT-derived estimates (between May and August) and daily meteorological data, the grain yield of corn fields was determined for 1995. The modelled yields were compared with results gathered independently within the Special Yield Estimation for 23 test fields in the upper Rhine valley. The agreement between LANDSAT-based estimates (six weeks before harvest) and Special Yield Estimation (at harvest) shows a relative error of 2.3%. The comparison of the results for single fields shows that six weeks before harvest, the grain yield of corn was estimated with a mean relative accuracy of 13% using satellite information. The presented methodology can be transferred to other crops and geographical regions. For future applications hyperspectral sensors show great potential to further enhance the results for yield prediction with remote sensing.

  16. Probabilistic sizing of laminates with uncertainties

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Liaw, D. G.; Chamis, C. C.

    1993-01-01

    A reliability based design methodology for laminate sizing and configuration for a special case of composite structures is described. The methodology combines probabilistic composite mechanics with probabilistic structural analysis. The uncertainties of constituent materials (fiber and matrix) to predict macroscopic behavior are simulated using probabilistic theory. Uncertainties in the degradation of composite material properties are included in this design methodology. A multi-factor interaction equation is used to evaluate load and environment dependent degradation of the composite material properties at the micromechanics level. The methodology is integrated into a computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). Versatility of this design approach is demonstrated by performing a multi-level probabilistic analysis to size the laminates for design structural reliability of random type structures. The results show that laminate configurations can be selected to improve the structural reliability from three failures in 1000, to no failures in one million. Results also show that the laminates with the highest reliability are the least sensitive to the loading conditions.

  17. Receptor-based 3D-QSAR in Drug Design: Methods and Applications in Kinase Studies.

    PubMed

    Fang, Cheng; Xiao, Zhiyan

    2016-01-01

    Receptor-based 3D-QSAR strategy represents a superior integration of structure-based drug design (SBDD) and three-dimensional quantitative structure-activity relationship (3D-QSAR) analysis. It combines the accurate prediction of ligand poses by the SBDD approach with the good predictability and interpretability of statistical models derived from the 3D-QSAR approach. Extensive efforts have been devoted to the development of receptor-based 3D-QSAR methods and two alternative approaches have been exploited. One involves computing the binding interactions between a receptor and a ligand to generate structure-based descriptors for QSAR analyses. The other concerns the application of various docking protocols to generate optimal ligand poses so as to provide reliable molecular alignments for the conventional 3D-QSAR operations. This review highlights new concepts and methodologies recently developed in the field of receptor-based 3D-QSAR, and in particular, covers its application in kinase studies.

  18. Probability-based methodology for buckling investigation of sandwich composite shells with and without cut-outs

    NASA Astrophysics Data System (ADS)

    Alfano, M.; Bisagni, C.

    2017-01-01

    The objective of the running EU project DESICOS (New Robust DESign Guideline for Imperfection Sensitive COmposite Launcher Structures) is to formulate an improved shell design methodology in order to meet the demand of the aerospace industry for lighter structures. Within the project, this article discusses a probability-based methodology developed at Politecnico di Milano. It is based on the combination of the Stress-Strength Interference Method and the Latin Hypercube Method with the aim of predicting the buckling response of three sandwich composite cylindrical shells, assuming a loading condition of pure compression. The three shells are made of the same material, but have different stacking sequences and geometric dimensions. One of them presents three circular cut-outs. Different types of input imperfections, treated as random variables, are taken into account independently and in combination: variability in longitudinal Young's modulus, ply misalignment, geometric imperfections, and boundary imperfections. The methodology enables a first assessment of the structural reliability of the shells through the calculation of a probabilistic buckling factor for a specified level of probability. The factor depends strongly on the reliability level, on the number of adopted samples, and on the assumptions made in modeling the input imperfections. The main advantage of the developed procedure is its versatility, as it can be applied to the buckling analysis of laminated composite shells and sandwich composite shells including different types of imperfections.
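
    The combination of Latin Hypercube sampling with the stress-strength interference idea can be sketched as follows. A simple closed-form surrogate stands in for the finite-element buckling analysis, and the imperfection distributions are illustrative, so the numbers say nothing about the DESICOS shells themselves.

      import numpy as np
      from scipy.stats import norm, qmc

      n = 20_000
      sampler = qmc.LatinHypercube(d=3, seed=0)
      u = sampler.random(n)                                  # uniform Latin Hypercube samples in [0, 1)^3

      # Random input imperfections (illustrative distributions)
      E_long = norm.ppf(u[:, 0], loc=140e9, scale=5e9)       # longitudinal Young's modulus, Pa
      ply_misalign = norm.ppf(u[:, 1], loc=0.0, scale=1.0)   # ply misalignment, deg
      geom_imp = norm.ppf(u[:, 2], loc=0.0, scale=0.1)       # geometric imperfection amplitude, mm

      # Surrogate buckling load; a real study would evaluate a finite-element model here
      P_buckle = 1.0e6 * (E_long / 140e9) * (1 - 0.02 * np.abs(ply_misalign)) * (1 - 0.8 * np.abs(geom_imp))

      P_applied = 0.75e6                                     # design compression load, N
      reliability = np.mean(P_buckle > P_applied)            # stress-strength interference
      factor = np.quantile(P_buckle, 0.01) / 1.0e6           # probabilistic buckling factor at 99% reliability
      print(f"reliability at the design load: {reliability:.4f}")
      print(f"probabilistic buckling factor (1% quantile): {factor:.3f}")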

  19. Development and validation of a physiology-based model for the prediction of pharmacokinetics/toxicokinetics in rabbits

    PubMed Central

    Hermes, Helen E.; Teutonico, Donato; Preuss, Thomas G.; Schneckener, Sebastian

    2018-01-01

    The environmental fates of pharmaceuticals and the effects of crop protection products on non-target species are subjects that are undergoing intense review. Since measuring the concentrations and effects of xenobiotics on all affected species under all conceivable scenarios is not feasible, standard laboratory animals such as rabbits are tested, and the observed adverse effects are translated to focal species for environmental risk assessments. In that respect, mathematical modelling is becoming increasingly important for evaluating the consequences of pesticides in untested scenarios. In particular, physiologically based pharmacokinetic/toxicokinetic (PBPK/TK) modelling is a well-established methodology used to predict tissue concentrations based on the absorption, distribution, metabolism and excretion of drugs and toxicants. In the present work, a rabbit PBPK/TK model is developed and evaluated with data available from the literature. The model predictions include scenarios of both intravenous (i.v.) and oral (p.o.) administration of small and large compounds. The presented rabbit PBPK/TK model predicts the pharmacokinetics (Cmax, AUC) of the tested compounds with an average 1.7-fold error. This result indicates a good predictive capacity of the model, which enables its use for risk assessment modelling and simulations. PMID:29561908
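
    The rabbit model itself is a whole-body PBPK/TK description; as a hedged, much-reduced illustration of how concentration-time curves, Cmax and AUC arise from such simulations, the sketch below integrates a one-compartment model with first-order oral absorption. All parameter values are invented placeholders.

      import numpy as np
      from scipy.integrate import solve_ivp, trapezoid

      # Illustrative one-compartment parameters for a small animal (not the PBPK model of the paper)
      V = 2.0        # volume of distribution, L
      CL = 0.5       # clearance, L/h
      ka = 1.2       # first-order absorption rate constant (oral), 1/h
      dose = 10.0    # mg

      def oral_model(t, y):
          gut, central = y
          return [-ka * gut, ka * gut - (CL / V) * central]

      t_eval = np.linspace(0, 24, 500)
      sol = solve_ivp(oral_model, [0, 24], [dose, 0.0], t_eval=t_eval)
      conc = sol.y[1] / V                                    # plasma concentration, mg/L

      print(f"p.o.: Cmax = {conc.max():.2f} mg/L, AUC(0-24h) = {trapezoid(conc, t_eval):.1f} mg*h/L")

      # i.v. bolus has a closed form: C(t) = (dose/V) * exp(-(CL/V) * t)
      conc_iv = (dose / V) * np.exp(-(CL / V) * t_eval)
      print(f"i.v.: Cmax = {conc_iv.max():.2f} mg/L, AUC(0-24h) = {trapezoid(conc_iv, t_eval):.1f} mg*h/L")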

  20. Optimal Modality Selection for Cooperative Human-Robot Task Completion.

    PubMed

    Jacob, Mithun George; Wachs, Juan P

    2016-12-01

    Human-robot cooperation in complex environments must be fast, accurate, and resilient. This requires efficient communication channels where robots need to assimilate information using a plethora of verbal and nonverbal modalities such as hand gestures, speech, and gaze. However, even though hybrid human-robot communication frameworks and multimodal communication have been studied, a systematic methodology for designing multimodal interfaces does not exist. This paper addresses the gap by proposing a novel methodology to generate multimodal lexicons which maximizes multiple performance metrics over a wide range of communication modalities (i.e., lexicons). The metrics are obtained through a mixture of simulation and real-world experiments. The methodology is tested in a surgical setting where a robot cooperates with a surgeon to complete a mock abdominal incision and closure task by delivering surgical instruments. Experimental results show that predicted optimal lexicons significantly outperform predicted suboptimal lexicons (p < 0.05) in all metrics, validating the predictability of the methodology. The methodology is validated in two scenarios (with and without modeling the risk of a human-robot collision) and the differences in the lexicons are analyzed.

  1. A data mining methodology for predicting early stage Parkinson’s disease using non-invasive, high-dimensional gait sensor data

    PubMed Central

    Tucker, Conrad; Han, Yixiang; Nembhard, Harriet Black; Lewis, Mechelle; Lee, Wang-Chien; Sterling, Nicholas W; Huang, Xuemei

    2017-01-01

    Parkinson’s disease (PD) is the second most common neurological disorder after Alzheimer’s disease. Key clinical features of PD are motor-related and are typically assessed by healthcare providers based on qualitative visual inspection of a patient’s movement/gait/posture. More advanced diagnostic techniques, such as computed tomography scans that measure brain function, can be cost prohibitive and may expose patients to radiation and other harmful effects. To mitigate these challenges, and open a pathway to remote patient-physician assessment, the authors of this work propose a data mining driven methodology that uses low-cost, non-invasive sensors to model and predict the presence (or lack thereof) of PD movement abnormalities and model clinical subtypes. The study presented here evaluates the discriminative ability of non-invasive hardware and data mining algorithms to classify PD cases and controls. A 10-fold cross validation approach is used to compare several data mining algorithms in order to determine which provides the most consistent results when varying the subject gait data. Next, the predictive accuracy of the data mining model is quantified by testing it against unseen data captured from a test pool of subjects. The proposed methodology demonstrates the feasibility of using non-invasive, low-cost hardware and data mining models to monitor the progression of gait features outside of the traditional healthcare facility, which may ultimately lead to earlier diagnosis of emerging neurological diseases. PMID:29541376
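
    A minimal sketch of the comparison step is shown below: several off-the-shelf classifiers are scored with 10-fold cross-validation on stand-in features generated with scikit-learn. The feature set, classifiers, and parameters are illustrative assumptions, not those of the study.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    # Stand-in for extracted gait features (stride time, swing %, asymmetry, ...);
    # the real study uses features computed from wearable sensor signals.
    X, y = make_classification(n_samples=200, n_features=12, n_informative=6,
                               random_state=0)

    models = {
        "logistic_regression": LogisticRegression(max_iter=1000),
        "svm_rbf": SVC(),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    }

    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
    ```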

  2. Numerical and experimental investigation of a beveled trailing-edge flow field and noise emission

    NASA Astrophysics Data System (ADS)

    van der Velden, W. C. P.; Pröbsting, S.; van Zuijlen, A. H.; de Jong, A. T.; Guan, Y.; Morris, S. C.

    2016-12-01

    Efficient tools and methodology for the prediction of trailing-edge noise experience substantial interest within the wind turbine industry. In recent years, the Lattice Boltzmann Method has received increased attention for providing such an efficient alternative for the numerical solution of complex flow problems. Based on the fully explicit, transient, compressible solution of the Lattice Boltzmann Equation in combination with a Ffowcs-Williams and Hawking aeroacoustic analogy, an estimation of the acoustic radiation in the far field is obtained. To validate this methodology for the prediction of trailing-edge noise, the flow around a flat plate with an asymmetric 25° beveled trailing edge and obtuse corner in a low Mach number flow is analyzed. Flow field dynamics are compared to data obtained experimentally from Particle Image Velocimetry and Hot Wire Anemometry, and compare favorably in terms of mean velocity field and turbulent fluctuations. Moreover, the characteristics of the unsteady surface pressure, which are closely related to the acoustic emission, show good agreement between simulation and experiment. Finally, the prediction of the radiated sound is compared to the results obtained from acoustic phased array measurements in combination with a beamforming methodology. Vortex shedding results in a strong narrowband component centered at a constant Strouhal number in the acoustic spectrum. At higher frequency, a good agreement between simulation and experiment for the broadband noise component is obtained and a typical cardioid-like directivity is recovered.

  3. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes

    PubMed Central

    Zhang, Hong; Pei, Yun

    2016-01-01

    Quantitative prediction of construction noise is crucial for evaluating construction plans and making decisions to address noise levels. Considering the limitations of existing methods for measuring or predicting construction noise, and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting construction noise in terms of the equivalent continuous level. The noise-calculating models regarding synchronization, propagation and the equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into the simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes a simulation methodology for quantitatively predicting the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
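
    The equivalent continuous level itself follows from an energy average of the piecewise-constant levels produced by the simulation; a minimal sketch, with a hypothetical activity log, is shown below.

    ```python
    import numpy as np

    def equivalent_continuous_level(levels_db, durations_s):
        """Energy-average (Leq) of piecewise-constant sound levels over their total duration.

        Leq = 10*log10( sum(t_i * 10^(L_i/10)) / sum(t_i) )
        """
        levels_db = np.asarray(levels_db, dtype=float)
        durations_s = np.asarray(durations_s, dtype=float)
        energy = np.sum(durations_s * 10.0 ** (levels_db / 10.0))
        return 10.0 * np.log10(energy / durations_s.sum())

    # Hypothetical activity log produced by a discrete-event simulation run:
    # (noise level at the receiver in dB(A), duration in seconds)
    levels = [85.0, 78.0, 92.0, 70.0]
    durations = [600.0, 1800.0, 300.0, 900.0]
    print(f"Leq = {equivalent_continuous_level(levels, durations):.1f} dB(A)")
    ```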

  4. A community effort to assess and improve drug sensitivity prediction algorithms

    PubMed Central

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2015-01-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods. PMID:24880487

  5. A community effort to assess and improve drug sensitivity prediction algorithms.

    PubMed

    Costello, James C; Heiser, Laura M; Georgii, Elisabeth; Gönen, Mehmet; Menden, Michael P; Wang, Nicholas J; Bansal, Mukesh; Ammad-ud-din, Muhammad; Hintsanen, Petteri; Khan, Suleiman A; Mpindi, John-Patrick; Kallioniemi, Olli; Honkela, Antti; Aittokallio, Tero; Wennerberg, Krister; Collins, James J; Gallahan, Dan; Singer, Dinah; Saez-Rodriguez, Julio; Kaski, Samuel; Gray, Joe W; Stolovitzky, Gustavo

    2014-12-01

    Predicting the best treatment strategy from genomic information is a core goal of precision medicine. Here we focus on predicting drug response based on a cohort of genomic, epigenomic and proteomic profiling data sets measured in human breast cancer cell lines. Through a collaborative effort between the National Cancer Institute (NCI) and the Dialogue on Reverse Engineering Assessment and Methods (DREAM) project, we analyzed a total of 44 drug sensitivity prediction algorithms. The top-performing approaches modeled nonlinear relationships and incorporated biological pathway information. We found that gene expression microarrays consistently provided the best predictive power of the individual profiling data sets; however, performance was increased by including multiple, independent data sets. We discuss the innovations underlying the top-performing methodology, Bayesian multitask MKL, and we provide detailed descriptions of all methods. This study establishes benchmarks for drug sensitivity prediction and identifies approaches that can be leveraged for the development of new methods.

  6. Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes.

    PubMed

    Zhang, Hong; Pei, Yun

    2016-08-12

    Quantitative prediction of construction noise is crucial for evaluating construction plans and making decisions to address noise levels. Considering the limitations of existing methods for measuring or predicting construction noise, and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting construction noise in terms of the equivalent continuous level. The noise-calculating models regarding synchronization, propagation and the equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into the simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes a simulation methodology for quantitatively predicting the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions.

  7. The use of genomic information increases the accuracy of breeding value predictions for sea louse (Caligus rogercresseyi) resistance in Atlantic salmon (Salmo salar).

    PubMed

    Correa, Katharina; Bangera, Rama; Figueroa, René; Lhorente, Jean P; Yáñez, José M

    2017-01-31

    Sea lice infestations caused by Caligus rogercresseyi are a main concern to the salmon farming industry due to associated economic losses. Resistance to this parasite was shown to have low to moderate genetic variation and its genetic architecture was suggested to be polygenic. The aim of this study was to compare accuracies of breeding value predictions obtained with pedigree-based best linear unbiased prediction (P-BLUP) methodology against different genomic prediction approaches: genomic BLUP (G-BLUP), Bayesian Lasso, and Bayes C. To achieve this, 2404 individuals from 118 families were measured for C. rogercresseyi count after a challenge and genotyped using 37 K single nucleotide polymorphisms. Accuracies were assessed using fivefold cross-validation and SNP densities of 0.5, 1, 5, 10, 25 and 37 K. Accuracy of genomic predictions increased with increasing SNP density and was higher than pedigree-based BLUP predictions by up to 22%. Both Bayesian and G-BLUP methods can predict breeding values with higher accuracies than pedigree-based BLUP, however, G-BLUP may be the preferred method because of reduced computation time and ease of implementation. A relatively low marker density (i.e. 10 K) is sufficient for maximal increase in accuracy when using G-BLUP or Bayesian methods for genomic prediction of C. rogercresseyi resistance in Atlantic salmon.
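
    A minimal G-BLUP sketch is shown below: a VanRaden genomic relationship matrix is built from simulated 0/1/2 genotypes and breeding values are obtained for an assumed variance ratio. The data, the variance ratio, and the accuracy measure are illustrative and do not reproduce the study's fivefold cross-validation or Bayesian comparisons.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 0/1/2 genotypes for 300 fish at 1000 SNPs and a noisy phenotype
    n, m = 300, 1000
    p = rng.uniform(0.1, 0.5, m)                        # allele frequencies
    M = rng.binomial(2, p, size=(n, m)).astype(float)
    true_effects = rng.normal(0.0, 0.05, m)
    y = M @ true_effects + rng.normal(0.0, 1.0, n)      # lice-count proxy

    # VanRaden genomic relationship matrix
    Z = M - 2.0 * p
    G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))

    # GBLUP of breeding values, assuming a known variance ratio lam = sigma_e^2 / sigma_g^2
    lam = 2.0
    g_hat = G @ np.linalg.solve(G + lam * np.eye(n), y - y.mean())

    # Accuracy here is simply the correlation with the simulated true breeding values
    g_true = M @ true_effects
    print(f"accuracy: {np.corrcoef(g_hat, g_true)[0, 1]:.2f}")
    ```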

  8. Soft Computing Methods for Disulfide Connectivity Prediction.

    PubMed

    Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S

    2015-01-01

    The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of the 3D PSP, as the protein conformational search space is highly reduced. The most representative soft computing approaches for the disulfide bonds connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural network or support vector machine) or features of the algorithms, are used for the classification of these methods.

  9. Signal peptide discrimination and cleavage site identification using SVM and NN.

    PubMed

    Kazemian, H B; Yusuf, S A; White, K

    2014-02-01

    About 15% of all proteins in a genome contain a signal peptide (SP) sequence, at the N-terminus, that targets the protein to intracellular secretory pathways. Once the protein is targeted correctly in the cell, the SP is cleaved, releasing the mature protein. Accurate prediction of the presence of these short amino-acid SP chains is crucial for modelling the topology of membrane proteins, since SP sequences can be confused with transmembrane domains due to similar composition of hydrophobic amino acids. This paper presents a cascaded Support Vector Machine (SVM)-Neural Network (NN) classification methodology for SP discrimination and cleavage site identification. The proposed method utilises a dual phase classification approach using SVM as a primary classifier to discriminate SP sequences from Non-SP. The methodology further employs NNs to predict the most suitable cleavage site candidates. In phase one, a SVM classification utilises hydrophobic propensities as a primary feature vector extraction using symmetric sliding window amino-acid sequence analysis for discrimination of SP and Non-SP. In phase two, a NN classification uses asymmetric sliding window sequence analysis for prediction of cleavage site identification. The proposed SVM-NN method was tested using Uni-Prot non-redundant datasets of eukaryotic and prokaryotic proteins with SP and Non-SP N-termini. Computer simulation results demonstrate an overall accuracy of 0.90 for SP and Non-SP discrimination based on Matthews Correlation Coefficient (MCC) tests using SVM. For SP cleavage site prediction, the overall accuracy is 91.5% based on cross-validation tests using the novel SVM-NN model. © 2013 Published by Elsevier Ltd.

  10. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
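
    For orientation, the sketch below computes the classical Bland-Altman bias and 95% limits of agreement on simulated paired measurements; it does not implement the paper's tolerance or predictive intervals, and the simulated offset and noise levels are arbitrary.

    ```python
    import numpy as np

    def bland_altman(x, y):
        """Bias and 95% limits of agreement for two measurement methods."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        diff = y - x
        bias = diff.mean()
        sd = diff.std(ddof=1)
        return bias, bias - 1.96 * sd, bias + 1.96 * sd

    # Illustrative paired measurements from two hypothetical devices
    rng = np.random.default_rng(2)
    true = rng.uniform(50, 150, 80)
    method_a = true + rng.normal(0, 3, 80)
    method_b = true + 1.5 + rng.normal(0, 3, 80)        # small systematic offset

    bias, lo, hi = bland_altman(method_a, method_b)
    print(f"bias={bias:.2f}, 95% limits of agreement=[{lo:.2f}, {hi:.2f}]")
    ```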

  11. Mapping Chemical Selection Pathways for Designing Multicomponent Alloys: an informatics framework for materials design.

    PubMed

    Srinivasan, Srikant; Broderick, Scott R; Zhang, Ruifeng; Mishra, Amrita; Sinnott, Susan B; Saxena, Surendra K; LeBeau, James M; Rajan, Krishna

    2015-12-18

    A data driven methodology is developed for tracking the collective influence of the multiple attributes of alloying elements on both thermodynamic and mechanical properties of metal alloys. Cobalt-based superalloys are used as a template to demonstrate the approach. By mapping the high dimensional nature of the systematics of elemental data embedded in the periodic table into the form of a network graph, one can guide targeted first principles calculations that identify the influence of specific elements on phase stability, crystal structure and elastic properties. This provides a fundamentally new means to rapidly identify new stable alloy chemistries with enhanced high temperature properties. The resulting visualization scheme exhibits the grouping and proximity of elements based on their impact on the properties of intermetallic alloys. Unlike the periodic table however, the distance between neighboring elements uncovers relationships in a complex high dimensional information space that would not have been easily seen otherwise. The predictions of the methodology are found to be consistent with reported experimental and theoretical studies. The informatics based methodology presented in this study can be generalized to a framework for data analysis and knowledge discovery that can be applied to many material systems and recreated for different design objectives.

  12. Prediction of shallow landslide occurrence: Validation of a physically-based approach through a real case study.

    PubMed

    Schilirò, Luca; Montrasio, Lorella; Scarascia Mugnozza, Gabriele

    2016-11-01

    In recent years, physically-based numerical models have frequently been used in the framework of early-warning systems devoted to rainfall-induced landslide hazard monitoring and mitigation. For this reason, in this work we describe the potential of SLIP (Shallow Landslides Instability Prediction), a simplified physically-based model for the analysis of shallow landslide occurrence. In order to test the reliability of this model, a back analysis of recent landslide events that occurred in the study area (located SW of Messina, northeastern Sicily, Italy) on October 1st, 2009 was performed. The simulation results have been compared with those obtained for the same event by using TRIGRS, another well-established model for shallow landslide prediction. Afterwards, a simulation over a 2-year span was performed for the same area, with the aim of evaluating the performance of SLIP as an early warning tool. The results confirm the good predictive capability of the model, both in terms of spatial and temporal prediction of the instability phenomena. On this basis, we recommend an operating procedure for the real-time definition of shallow landslide triggering scenarios at the catchment scale, which is based on the use of SLIP calibrated through a specific multi-methodological approach. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.

  14. Kiwifruit fibre level influences the predicted production and absorption of SCFA in the hindgut of growing pigs using a combined in vivo-in vitro digestion methodology.

    PubMed

    Montoya, Carlos A; Rutherfurd, Shane M; Moughan, Paul J

    2016-04-01

    Combined in vivo (ileal cannulated pig) and in vitro (faecal inoculum-based fermentation) digestion methodologies were used to predict the production and absorption of SCFA in the hindgut of growing pigs. Ileal and faecal samples were collected from animals (n 7) fed diets containing either 25 or 50 g/kg DM of kiwifruit fibre from added kiwifruit for 14 d. Ileal and faecal SCFA concentrations normalised for food DM intake (DMI) and nutrient digestibility were determined. Ileal digesta were collected and fermented for 38 h using a fresh pig faecal inoculum to predict SCFA production. The predicted hindgut SCFA production along with the determined ileal and faecal SCFA were then used to predict SCFA absorption in the hindgut and total tract organic matter digestibility. The determined ileal and faecal SCFA concentrations (e.g. 8·5 and 4·4 mmol/kg DMI, respectively, for acetic acid for the low-fibre diet) represented only 0·2-3·2 % of the predicted hindgut SCFA production (e.g. 270 mmol/kg DMI for acetic acid). Predicted production and absorption of acetic, butyric and propionic acids were the highest for the high-fibre diet (P<0·05). In conclusion, determined ileal and faecal SCFA concentrations represent only a small fraction of total SCFA production, and may therefore be misleading in relation to the effect of diets on SCFA production and absorption. Considerable quantities of SCFA are produced and absorbed in the hindgut of the pig by the fermentation of kiwifruit.

  15. Modeling methodology for the accurate and prompt prediction of symptomatic events in chronic diseases.

    PubMed

    Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L

    2016-08-01

    Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to accommodate the pharmacokinetics of medications or the response time of medical services. This paper presents a study about the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to handle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study by the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. A Simulation Based Methodology to Examine the B-1B’s AN/ALQ-161 Maintenance Process

    DTIC Science & Technology

    2010-03-01


  17. A Simulation Based Methodology to Examine the B-1B’s AN/ALQ-161 Maintenance Process

    DTIC Science & Technology

    2010-03-01


  18. Predicting US Drought Monitor (USDM) states using precipitation, soil moisture, and evapotranspiration anomalies, Part I: Development of a non-discrete USDM index

    USDA-ARS?s Scientific Manuscript database

    The U.S. Drought Monitor (USDM) classifies drought into five discrete dryness/drought categories based on expert synthesis of numerous data sources. In this study, an empirical methodology is presented for creating a non-discrete U.S. Drought Monitor (USDM) index that simultaneously 1) represents th...

  19. Development of a Physically-Based Methodology for Predicting Material Variability in Fatigue Crack Initiation and Growth Response

    DTIC Science & Technology

    2004-12-01


  20. Quantum-Mechanics Methodologies in Drug Discovery: Applications of Docking and Scoring in Lead Optimization.

    PubMed

    Crespo, Alejandro; Rodriguez-Granillo, Agustina; Lim, Victoria T

    2017-01-01

    The development and application of quantum mechanics (QM) methodologies in computer-aided drug design have flourished in the last 10 years. Despite the natural advantage of QM methods to predict binding affinities with a higher level of theory than those methods based on molecular mechanics (MM), there are only a few examples where diverse sets of protein-ligand targets have been evaluated simultaneously. In this work, we review recent advances in QM docking and scoring for those cases in which a systematic analysis has been performed. In addition, we introduce and validate a simplified QM/MM expression to compute protein-ligand binding energies. Overall, QM-based scoring functions are generally better to predict ligand affinities than those based on classical mechanics. However, the agreement between experimental activities and calculated binding energies is highly dependent on the specific chemical series considered. The advantage of more accurate QM methods is evident in cases where charge transfer and polarization effects are important, for example when metals are involved in the binding process or when dispersion forces play a significant role as in the case of hydrophobic or stacking interactions. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  1. Clinical decision rules, spinal pain classification and prediction of treatment outcome: A discussion of recent reports in the rehabilitation literature

    PubMed Central

    2012-01-01

    Clinical decision rules are an increasingly common presence in the biomedical literature and represent one strategy of enhancing clinical-decision making with the goal of improving the efficiency and effectiveness of healthcare delivery. In the context of rehabilitation research, clinical decision rules have been predominantly aimed at classifying patients by predicting their treatment response to specific therapies. Traditionally, recommendations for developing clinical decision rules propose a multistep process (derivation, validation, impact analysis) using defined methodology. Research efforts aimed at developing a “diagnosis-based clinical decision rule” have departed from this convention. Recent publications in this line of research have used the modified terminology “diagnosis-based clinical decision guide.” Modifications to terminology and methodology surrounding clinical decision rules can make it more difficult for clinicians to recognize the level of evidence associated with a decision rule and understand how this evidence should be implemented to inform patient care. We provide a brief overview of clinical decision rule development in the context of the rehabilitation literature and two specific papers recently published in Chiropractic and Manual Therapies. PMID:22726639

  2. Metabolic Toxicity Screening Using Electrochemiluminescence Arrays Coupled with Enzyme-DNA Biocolloid Reactors and Liquid Chromatography–Mass Spectrometry

    PubMed Central

    Hvastkovs, Eli G.; Schenkman, John B.; Rusling, James F.

    2012-01-01

    New chemicals or drugs must be guaranteed safe before they can be marketed. Despite widespread use of bioassay panels for toxicity prediction, products that are toxic to a subset of the population often are not identified until clinical trials. This article reviews new array methodologies based on enzyme/DNA films that form and identify DNA-reactive metabolites that are indicators of potentially genotoxic species. This molecularly based methodology is designed in a rapid screening array that utilizes electrochemiluminescence (ECL) to detect metabolite-DNA reactions, as well as biocolloid reactors that provide the DNA adducts and metabolites for liquid chromatography–mass spectrometry (LC-MS) analysis. ECL arrays provide rapid toxicity screening, and the biocolloid reactor LC-MS approach provides a valuable follow-up on structure, identification, and formation rates of DNA adducts for toxicity hits from the ECL array screening. Specific examples using this strategy are discussed. Integration of high-throughput versions of these toxicity-screening methods with existing drug toxicity bioassays should allow for better human toxicity prediction as well as more informed decision making regarding new chemical and drug candidates. PMID:22482786

  3. Interactive Multi-Instrument Database of Solar Flares

    NASA Technical Reports Server (NTRS)

    Ranjan, Shubha S.; Spaulding, Ryan; Deardorff, Donald G.

    2018-01-01

    The fundamental motivation of the project is that the scientific output of solar research can be greatly enhanced by better exploitation of the existing solar/heliosphere space-data products jointly with ground-based observations. Our primary focus is on developing a specific innovative methodology based on recent advances in "big data" intelligent databases applied to the growing amount of high-spatial and multi-wavelength resolution, high-cadence data from NASA's missions and supporting ground-based observatories. Our flare database is not simply a manually searchable time-based catalog of events or list of web links pointing to data. It is a preprocessed metadata repository enabling fast search and automatic identification of all recorded flares sharing a specifiable set of characteristics, features, and parameters. The result is a new and unique database of solar flares and data search and classification tools for the Heliophysics community, enabling multi-instrument/multi-wavelength investigations of flare physics and supporting further development of flare-prediction methodologies.

  4. RNA secondary structure prediction using soft computing.

    PubMed

    Ray, Shubhra Sankar; Pal, Sankar K

    2013-01-01

    Prediction of RNA structure is invaluable in creating new drugs and understanding genetic diseases. Several deterministic algorithms and soft computing-based techniques have been developed for more than a decade to determine the structure from a known RNA sequence. Soft computing gained importance with the need to get approximate solutions for RNA sequences by considering the issues related with kinetic effects, cotranscriptional folding, and estimation of certain energy parameters. A brief description of some of the soft computing-based techniques, developed for RNA secondary structure prediction, is presented along with their relevance. The basic concepts of RNA and its different structural elements like helix, bulge, hairpin loop, internal loop, and multiloop are described. These are followed by different methodologies, employing genetic algorithms, artificial neural networks, and fuzzy logic. The role of various metaheuristics, like simulated annealing, particle swarm optimization, ant colony optimization, and tabu search is also discussed. A relative comparison among different techniques, in predicting 12 known RNA secondary structures, is presented, as an example. Future challenging issues are then mentioned.

  5. Prediction models for Arabica coffee beverage quality based on aroma analyses and chemometrics.

    PubMed

    Ribeiro, J S; Augusto, F; Salva, T J G; Ferreira, M M C

    2012-11-15

    In this work, soft modeling based on chemometric analyses of coffee beverage sensory data and the chromatographic profiles of volatile roasted coffee compounds is proposed to predict the scores of acidity, bitterness, flavor, cleanliness, body, and overall quality of the coffee beverage. A partial least squares (PLS) regression method was used to construct the models. The ordered predictor selection (OPS) algorithm was applied to select the compounds for the regression model of each sensory attribute in order to take only significant chromatographic peaks into account. The prediction errors of these models, using 4 or 5 latent variables, were equal to 0.28, 0.33, 0.35, 0.33, 0.34 and 0.41 for the respective attributes, and were compatible with the errors of the mean scores of the experts. Thus, the results proved the feasibility of using a similar methodology in on-line or routine applications to predict the sensory quality of Brazilian Arabica coffee. Copyright © 2012 Elsevier B.V. All rights reserved.
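
    A minimal PLS sketch (without the OPS variable selection step) is given below, estimating a cross-validated prediction error on simulated peak areas and scores; the data dimensions and the number of latent variables are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import KFold, cross_val_predict

    # Stand-in for chromatographic peak areas (X) and panel scores for one attribute (y)
    rng = np.random.default_rng(3)
    X = rng.normal(size=(60, 40))
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=60)

    pls = PLSRegression(n_components=5)
    y_cv = cross_val_predict(pls, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
    rmsep = np.sqrt(np.mean((y - y_cv.ravel()) ** 2))
    print(f"cross-validated RMSEP: {rmsep:.2f}")
    ```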

  6. Accurate Binding Free Energy Predictions in Fragment Optimization.

    PubMed

    Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody

    2015-11-23

    Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD)--improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R(2)-values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.

  7. Characterization of normality of chaotic systems including prediction and detection of anomalies

    NASA Astrophysics Data System (ADS)

    Engler, Joseph John

    Accurate prediction and control pervade domains such as engineering, physics, chemistry, and biology. Often, it is discovered that the systems under consideration cannot be well represented by linear, periodic, or random data. It has been shown that these systems exhibit deterministic chaos behavior. Deterministic chaos describes systems which are governed by deterministic rules but whose data appear to be random or quasi-periodic distributions. Deterministically chaotic systems characteristically exhibit sensitive dependence upon initial conditions manifested through rapid divergence of states initially close to one another. Due to this characterization, it has been deemed impossible to accurately predict future states of these systems for longer time scales. Fortunately, the deterministic nature of these systems allows for accurate short term predictions, given the dynamics of the system are well understood. This fact has been exploited in the research community and has resulted in various algorithms for short term predictions. Detection of normality in deterministically chaotic systems is critical in understanding the system sufficiently to be able to predict future states. Due to the sensitivity to initial conditions, the detection of normal operational states for a deterministically chaotic system can be challenging. The addition of small perturbations to the system, which may result in bifurcation of the normal states, further complicates the problem. The detection of anomalies and prediction of future states of the chaotic system allows for greater understanding of these systems. The goal of this research is to produce methodologies for determining states of normality for deterministically chaotic systems, detection of anomalous behavior, and the more accurate prediction of future states of the system. Additionally, the ability to detect subtle system state changes is discussed. The dissertation addresses these goals by proposing new representational techniques and novel prediction methodologies. The value and efficiency of these methods are explored in various case studies. Presented is an overview of chaotic systems with examples taken from the real world. A representation schema for rapid understanding of the various states of deterministically chaotic systems is presented. This schema is then used to detect anomalies and system state changes. Additionally, a novel prediction methodology which utilizes Lyapunov exponents to facilitate longer term prediction accuracy is presented and compared with other nonlinear prediction methodologies. These novel methodologies are then demonstrated on applications such as wind energy, cyber security and classification of social networks.
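
    As a toy illustration of the central quantity involved, the sketch below estimates the largest Lyapunov exponent of the logistic map as the orbit average of log|f'(x)|; it is not the representational or prediction methodology of the dissertation.

    ```python
    import numpy as np

    def logistic_lyapunov(r, x0=0.4, n_transient=1000, n_iter=100000):
        """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
        estimated as the average of log|f'(x)| = log|r*(1-2x)| along the orbit."""
        x = x0
        for _ in range(n_transient):          # discard the transient
            x = r * x * (1.0 - x)
        total = 0.0
        for _ in range(n_iter):
            total += np.log(abs(r * (1.0 - 2.0 * x)))
            x = r * x * (1.0 - x)
        return total / n_iter

    print(f"lambda(r=4.0) = {logistic_lyapunov(4.0):.3f}   # ~ln(2), chaotic")
    print(f"lambda(r=3.2) = {logistic_lyapunov(3.2):.3f}   # negative, periodic")
    ```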

  8. Mortality prediction using TRISS methodology in the Spanish ICU Trauma Registry (RETRAUCI).

    PubMed

    Chico-Fernández, M; Llompart-Pou, J A; Sánchez-Casado, M; Alberdi-Odriozola, F; Guerrero-López, F; Mayor-García, M D; Egea-Guerrero, J J; Fernández-Ortega, J F; Bueno-González, A; González-Robledo, J; Servià-Goixart, L; Roldán-Ramírez, J; Ballesteros-Sanz, M Á; Tejerina-Alvarez, E; Pino-Sánchez, F I; Homar-Ramírez, J

    2016-10-01

    To validate Trauma and Injury Severity Score (TRISS) methodology as an auditing tool in the Spanish ICU Trauma Registry (RETRAUCI). A prospective, multicenter registry evaluation was carried out. Thirteen Spanish Intensive Care Units (ICUs). Individuals with traumatic disease and available data admitted to the participating ICUs. Predicted mortality using TRISS methodology was compared with that observed in the pilot phase of the RETRAUCI from November 2012 to January 2015. Discrimination was evaluated using receiver operating characteristic (ROC) curves and the corresponding areas under the curves (AUCs) (95% CI), with calibration using the Hosmer-Lemeshow (HL) goodness-of-fit test. A value of p<0.05 was considered significant. Predicted and observed mortality. A total of 1405 patients were analyzed. The observed mortality rate was 18% (253 patients), while the predicted mortality rate was 16.9%. The area under the ROC curve was 0.889 (95% CI: 0.867-0.911). Patients with blunt trauma (n=1305) had an area under the ROC curve of 0.887 (95% CI: 0.864-0.910), and those with penetrating trauma (n=100) presented an area under the curve of 0.919 (95% CI: 0.859-0.979). In the global sample, the HL test yielded a value of 25.38 (p=0.001): 27.35 (p<0.0001) in blunt trauma and 5.91 (p=0.658) in penetrating trauma. TRISS methodology underestimated mortality in patients with low predicted mortality and overestimated mortality in patients with high predicted mortality. TRISS methodology in the evaluation of severe trauma in Spanish ICUs showed good discrimination, with inadequate calibration - particularly in blunt trauma. Copyright © 2015 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.
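
    The TRISS survival probability itself is a logistic function of the Revised Trauma Score, the Injury Severity Score, and an age index; a minimal sketch is shown below with placeholder coefficients standing in for the published (or locally recalibrated) blunt/penetrating values, which are not reproduced here.

    ```python
    import math

    def triss_probability_of_survival(rts, iss, age_years, coeffs):
        """TRISS survival probability: Ps = 1 / (1 + exp(-b)),
        with b = b0 + b1*RTS + b2*ISS + b3*AgeIndex (AgeIndex = 1 if age >= 55)."""
        b0, b1, b2, b3 = coeffs
        age_index = 1 if age_years >= 55 else 0
        b = b0 + b1 * rts + b2 * iss + b3 * age_index
        return 1.0 / (1.0 + math.exp(-b))

    # Placeholder coefficients for blunt trauma -- illustrative values only
    blunt_coeffs = (-1.0, 0.9, -0.08, -1.5)
    ps = triss_probability_of_survival(rts=6.9, iss=20, age_years=40, coeffs=blunt_coeffs)
    print(f"Ps = {ps:.2f}")
    ```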

  9. An Ensemble-Based Protocol for the Computational Prediction of Helix-Helix Interactions in G Protein-Coupled Receptors using Coarse-Grained Molecular Dynamics.

    PubMed

    Altwaijry, Nojood A; Baron, Michael; Wright, David W; Coveney, Peter V; Townsend-Nicholson, Andrea

    2017-05-09

    The accurate identification of the specific points of interaction between G protein-coupled receptor (GPCR) oligomers is essential for the design of receptor ligands targeting oligomeric receptor targets. A coarse-grained molecular dynamics computer simulation approach would provide a compelling means of identifying these specific protein-protein interactions and could be applied both for known oligomers of interest and as a high-throughput screen to identify novel oligomeric targets. However, to be effective, this in silico modeling must provide accurate, precise, and reproducible information. This has been achieved recently in numerous biological systems using an ensemble-based all-atom molecular dynamics approach. In this study, we describe an equivalent methodology for ensemble-based coarse-grained simulations. We report the performance of this method when applied to four different GPCRs known to oligomerize using error analysis to determine the ensemble size and individual replica simulation time required. Our measurements of distance between residues shown to be involved in oligomerization of the fifth transmembrane domain from the adenosine A 2A receptor are in very good agreement with the existing biophysical data and provide information about the nature of the contact interface that cannot be determined experimentally. Calculations of distance between rhodopsin, CXCR4, and β 1 AR transmembrane domains reported to form contact points in homodimers correlate well with the corresponding measurements obtained from experimental structural data, providing an ability to predict contact interfaces computationally. Interestingly, error analysis enables identification of noninteracting regions. Our results confirm that GPCR interactions can be reliably predicted using this novel methodology.

  10. Wheel life prediction model - an alternative to the FASTSIM algorithm for RCF

    NASA Astrophysics Data System (ADS)

    Hossein-Nia, Saeed; Sichani, Matin Sh.; Stichel, Sebastian; Casanueva, Carlos

    2018-07-01

    In this article, a wheel life prediction model considering wear and rolling contact fatigue (RCF) is developed and applied to a heavy-haul locomotive. For wear calculations, a methodology based on Archard's wear calculation theory is used. The simulated wear depth is compared with profile measurements within 100,000 km. For RCF, a shakedown-based theory is applied locally, using the FaStrip algorithm to estimate the tangential stresses instead of FASTSIM. The differences between the two algorithms on damage prediction models are studied. The running distance between two successive reprofilings due to RCF is estimated based on a Wöhler-like relationship developed from laboratory test results from the literature and the Palmgren-Miner rule. The simulated crack locations and their angles are compared with a five-year field study. Calculations to study the effects of electro-dynamic braking, track gauge, harder wheel material and the increase of axle load on the wheel life are also carried out.
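
    A minimal sketch of the damage-accumulation step is given below: a Wöhler/Basquin-type life curve with placeholder constants is combined with the Palmgren-Miner rule over a hypothetical contact-stress spectrum; it does not reproduce the article's Archard wear or FaStrip/shakedown calculations.

    ```python
    def cycles_to_failure(stress_amplitude, sigma_f=2000.0, b=-0.12):
        """Wöhler/Basquin-type life curve sigma_a = sigma_f * (2N)^b (placeholder constants)."""
        return 0.5 * (stress_amplitude / sigma_f) ** (1.0 / b)

    def miner_damage(stress_blocks):
        """Palmgren-Miner linear damage sum over (stress amplitude [MPa], applied cycles) blocks."""
        return sum(n / cycles_to_failure(s) for s, n in stress_blocks)

    # Hypothetical contact-stress spectrum accumulated per 100,000 km of running
    spectrum = [(450.0, 1.0e4), (380.0, 5.0e4), (300.0, 2.0e5)]
    damage_per_100k = miner_damage(spectrum)
    print(f"damage per 100,000 km: {damage_per_100k:.3f}")
    print(f"estimated RCF life: {100_000 / damage_per_100k:,.0f} km")
    ```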

  11. Assessment of Microphysical Models in the National Combustion Code (NCC) for Aircraft Particulate Emissions: Particle Loss in Sampling Lines

    NASA Technical Reports Server (NTRS)

    Wey, Thomas; Liu, Nan-Suey

    2008-01-01

    This paper at first describes the fluid network approach recently implemented into the National Combustion Code (NCC) for the simulation of transport of aerosols (volatile particles and soot) in the particulate sampling systems. This network-based approach complements the other two approaches already in the NCC, namely, the lower-order temporal approach and the CFD-based approach. The accuracy and the computational costs of these three approaches are then investigated in terms of their application to the prediction of particle losses through sample transmission and distribution lines. Their predictive capabilities are assessed by comparing the computed results with the experimental data. The present work will help establish standard methodologies for measuring the size and concentration of particles in high-temperature, high-velocity jet engine exhaust. Furthermore, the present work also represents the first step of a long term effort of validating physics-based tools for the prediction of aircraft particulate emissions.

  12. Analysis of the bond-valence method for calculating (29) Si and (31) P magnetic shielding in covalent network solids.

    PubMed

    Holmes, Sean T; Alkan, Fahri; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil

    2016-07-05

    (29) Si and (31) P magnetic-shielding tensors in covalent network solids have been evaluated using periodic and cluster-based calculations. The cluster-based computational methodology employs pseudoatoms to reduce the net charge (resulting from missing co-ordination on the terminal atoms) through valence modification of terminal atoms using bond-valence theory (VMTA/BV). The magnetic-shielding tensors computed with the VMTA/BV method are compared to magnetic-shielding tensors determined with the periodic GIPAW approach. The cluster-based all-electron calculations agree with experiment better than the GIPAW calculations, particularly for predicting absolute magnetic shielding and for predicting chemical shifts. The performance of the DFT functionals CA-PZ, PW91, PBE, rPBE, PBEsol, WC, and PBE0 are assessed for the prediction of (29) Si and (31) P magnetic-shielding constants. Calculations using the hybrid functional PBE0, in combination with the VMTA/BV approach, result in excellent agreement with experiment. © 2016 Wiley Periodicals, Inc.

  13. A System Computational Model of Implicit Emotional Learning

    PubMed Central

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unluckily, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based both on prediction errors computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation. PMID:27378898

  14. A System Computational Model of Implicit Emotional Learning.

    PubMed

    Puviani, Luca; Rama, Sidita

    2016-01-01

    Nowadays, the experimental study of emotional learning is commonly based on classical conditioning paradigms and models, which have been thoroughly investigated in the last century. Unluckily, models based on classical conditioning are unable to explain or predict important psychophysiological phenomena, such as the failure of the extinction of emotional responses in certain circumstances (for instance, those observed in evaluative conditioning, in post-traumatic stress disorders and in panic attacks). In this manuscript, starting from the experimental results available from the literature, a computational model of implicit emotional learning based both on prediction errors computation and on statistical inference is developed. The model quantitatively predicts (a) the occurrence of evaluative conditioning, (b) the dynamics and the resistance-to-extinction of the traumatic emotional responses, (c) the mathematical relation between classical conditioning and unconditioned stimulus revaluation. Moreover, we discuss how the derived computational model can lead to the development of new animal models for resistant-to-extinction emotional reactions and novel methodologies of emotions modulation.

  15. Analysis and Design of Fuselage Structures Including Residual Strength Prediction Methodology

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.

    1998-01-01

    The goal of this research project is to develop and assess methodologies for the design and analysis of fuselage structures accounting for residual strength. Two primary objectives are included in this research activity: development of a structural analysis methodology for predicting the residual strength of fuselage shell-type structures; and development of accurate, efficient analysis, design and optimization tools for fuselage shell structures. Assessment of these tools for robustness, efficiency, and usability in a fuselage shell design environment will be integrated with these two primary research objectives.

  16. Methodology to estimate particulate matter emissions from certified commercial aircraft engines.

    PubMed

    Wayson, Roger L; Fleming, Gregg G; Lovinelli, Ralph

    2009-01-01

    Today, about one-fourth of U.S. commercial service airports, including 41 of the busiest 50, are either in nonattainment or maintenance areas per the National Ambient Air Quality Standards. U.S. aviation activity is forecasted to triple by 2025, while at the same time, the U.S. Environmental Protection Agency (EPA) is evaluating stricter particulate matter (PM) standards on the basis of documented human health and welfare impacts. Stricter federal standards are expected to impede capacity and limit aviation growth if regulatory mandated emission reductions occur as for other non-aviation sources (i.e., automobiles, power plants, etc.). In addition, strong interest exists as to the role aviation emissions play in air quality and climate change issues. These reasons underpin the need to quantify and understand PM emissions from certified commercial aircraft engines, which has led to the need for a methodology to predict these emissions. Standardized sampling techniques to measure volatile and nonvolatile PM emissions from aircraft engines do not exist. As such, a first-order approximation (FOA) was derived to fill this need based on available information. FOA1.0 only allowed prediction of nonvolatile PM. FOA2.0 was a change to include volatile PM emissions on the basis of the ratio of nonvolatile to volatile emissions. Recent collaborative efforts by industry (manufacturers and airlines), research establishments, and regulators have begun to provide further insight into the estimation of the PM emissions. The resultant PM measurement datasets are being analyzed to refine sampling techniques and progress towards standardized PM measurements. These preliminary measurement datasets also support the continued refinement of the FOA methodology. FOA3.0 disaggregated the prediction techniques to allow for independent prediction of nonvolatile and volatile emissions on a more theoretical basis. The Committee for Aviation Environmental Protection of the International Civil Aviation Organization endorsed the use of FOA3.0 in February 2007. Further commitment was made to improve the FOA as new data become available, until such time the methodology is rendered obsolete by a fully validated database of PM emission indices for today's certified commercial fleet. This paper discusses related assumptions and derived equations for the FOA3.0 methodology used worldwide to estimate PM emissions from certified commercial aircraft engines within the vicinity of airports.

  17. Social Sentiment Sensor in Twitter for Predicting Cyber-Attacks Using ℓ1 Regularization

    PubMed Central

    Sanchez-Perez, Gabriel; Toscano-Medina, Karina; Martinez-Hernandez, Victor; Olivares-Mercado, Jesus; Sanchez, Victor

    2018-01-01

    In recent years, online social media information has been the subject of study in several data science fields due to its impact on users as a communication and expression channel. Data gathered from online platforms such as Twitter has the potential to facilitate research over social phenomena based on sentiment analysis, which usually employs Natural Language Processing and Machine Learning techniques to interpret sentimental tendencies related to users’ opinions and make predictions about real events. Cyber-attacks are not isolated from opinion subjectivity on online social networks. Various security attacks are performed by hacker activists motivated by reactions from polemic social events. In this paper, a methodology for tracking social data that can trigger cyber-attacks is developed. Our main contribution lies in the monthly prediction of tweets with content related to security attacks and the incidents detected based on ℓ1 regularization. PMID:29710833
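
    A minimal sketch of an ℓ1-regularized text classifier is given below, using scikit-learn on a handful of invented tweets; the labels, features, and regularization strength are illustrative assumptions, not the study's data or model.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy tweets with hypothetical labels: 1 = related to a security attack, 0 = not
    tweets = [
        "massive ddos attack takes down government site",
        "new data breach leaks millions of passwords",
        "great weather for the weekend, heading to the beach",
        "our team won the match last night",
        "hackers deface bank portal in protest",
        "just finished a great book about cooking",
    ]
    labels = [1, 1, 0, 0, 1, 0]

    # The l1 (lasso) penalty drives most term weights to zero, keeping a sparse set of
    # indicative words; the liblinear solver supports l1-penalized logistic regression.
    model = make_pipeline(
        TfidfVectorizer(),
        LogisticRegression(penalty="l1", solver="liblinear", C=1.0),
    )
    model.fit(tweets, labels)
    print(model.predict(["another breach reported, credentials leaked"]))
    ```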

  18. Prediction of Composite Laminate Strength Properties Using a Refined Zigzag Plate Element

    NASA Technical Reports Server (NTRS)

    Barut, Atila; Madenci, Erdogan; Tessler, Alexander

    2013-01-01

    This study presents an approach that uses the refined zigzag element, RZE(2,2), in conjunction with progressive failure criteria to predict the ultimate strength of composite laminates based on only ply-level strength properties. The methodology involves four major steps: (1) Determination of accurate stress and strain fields under complex loading conditions using RZE(2,2)-based finite element analysis, (2) Determination of failure locations and failure modes using the commonly accepted Hashin's failure criteria, (3) Recursive degradation of the material stiffness, and (4) Non-linear incremental finite element analysis to obtain stress redistribution until global failure. The validity of this approach is established by considering the published test data and predictions for (1) strength of laminates under various off-axis loading, (2) strength of laminates with a hole under compression, and (3) strength of laminates with a hole under tension.
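
    As an illustration of step (2), the sketch below evaluates simplified in-plane Hashin-type failure indices for a single ply; the stresses and strength values are invented, and the matrix-compression form uses the in-plane shear strength in place of the transverse shear strength as a common 2D simplification.

    ```python
    def hashin_2d(s11, s22, t12, XT, XC, YT, YC, S12):
        """Simplified in-plane Hashin-type failure indices for a unidirectional ply
        (failure is indicated when any index reaches 1). S12 is used in place of the
        transverse shear strength in the matrix-compression mode."""
        if s11 >= 0.0:
            fiber = (s11 / XT) ** 2 + (t12 / S12) ** 2           # fiber tension
        else:
            fiber = (s11 / XC) ** 2                               # fiber compression
        if s22 >= 0.0:
            matrix = (s22 / YT) ** 2 + (t12 / S12) ** 2           # matrix tension
        else:
            matrix = ((s22 / (2 * S12)) ** 2
                      + ((YC / (2 * S12)) ** 2 - 1) * s22 / YC
                      + (t12 / S12) ** 2)                         # matrix compression
        return fiber, matrix

    # Illustrative ply stresses (MPa) and strengths for a carbon/epoxy-like material
    print(hashin_2d(s11=1200, s22=30, t12=40,
                    XT=2000, XC=1200, YT=50, YC=200, S12=80))
    ```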

  19. A data-based model to locate mass movements triggered by seismic events in Sichuan, China.

    PubMed

    de Souza, Fabio Teodoro

    2014-01-01

    Earthquakes affect the entire world and have catastrophic consequences. On May 12, 2008, an earthquake of magnitude 7.9 on the Richter scale occurred in the Wenchuan area of Sichuan province in China. This event, together with subsequent aftershocks, caused many avalanches, landslides, debris flows, collapses, and quake lakes and induced numerous unstable slopes. This work proposes a methodology that uses a data mining approach and geographic information systems to predict these mass movements based on their association with the main and aftershock epicenters, geologic faults, riverbeds, and topography. A dataset comprising 3,883 mass movements is analyzed, and some models to predict the location of these mass movements are developed. These predictive models could be used by the Chinese authorities as an important tool for identifying risk areas and rescuing survivors during similar events in the future.

  20. The effects of physical aging at elevated temperatures on the viscoelastic creep on IM7/K3B

    NASA Technical Reports Server (NTRS)

    Gates, Thomas S.; Feldman, Mark

    1994-01-01

    Physical aging at elevated temperature of the advanced composite IM7/K3B was investigated through the use of creep compliance tests. Testing consisted of short-term isothermal creep/recovery tests, with the creep segments performed at constant load. The matrix-dominated transverse tensile and in-plane shear behaviors were measured at temperatures ranging from 200 to 230 C. Through the use of time-based shifting procedures, the aging shift factors, shift rates, and momentary master curve parameters were found at each temperature. These material parameters were used as input to a predictive methodology based upon effective time theory and linear viscoelasticity combined with classical lamination theory. Long-term creep compliance test data were compared to predictions to verify the method. The model was then used to predict the long-term creep behavior of several general laminates.
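
    A minimal sketch of an effective-time creep prediction of the kind the abstract outlines; the KWW-type momentary compliance form is a common choice in this literature, and the parameter values below are illustrative assumptions, not the measured IM7/K3B values.

      # Long-term compliance is estimated by evaluating the momentary master
      # curve at the effective time lambda(t), which accounts for the slowdown
      # of creep as the material continues to age under load.
      import numpy as np

      D0, tau, beta = 0.5, 1.0e4, 0.35   # momentary master-curve parameters (1/GPa, s, -), illustrative
      mu, te0 = 0.8, 3.0e3               # aging shift rate and initial aging time (s), illustrative

      def momentary_compliance(t):
          return D0 * np.exp((t / tau) ** beta)

      def effective_time(t, n=2000):
          # lambda(t) = integral_0^t (te0 / (te0 + xi))**mu dxi, trapezoidal rule
          xi = np.linspace(0.0, t, n)
          f = (te0 / (te0 + xi)) ** mu
          return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(xi)))

      for t in (1.0e4, 1.0e5, 1.0e6):    # loading times, s
          lam = effective_time(t)
          print(f"t = {t:.0e} s   effective time = {lam:.3e} s   D = {momentary_compliance(lam):.3f} 1/GPa")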

  1. Social Sentiment Sensor in Twitter for Predicting Cyber-Attacks Using ℓ₁ Regularization.

    PubMed

    Hernandez-Suarez, Aldo; Sanchez-Perez, Gabriel; Toscano-Medina, Karina; Martinez-Hernandez, Victor; Perez-Meana, Hector; Olivares-Mercado, Jesus; Sanchez, Victor

    2018-04-29

    In recent years, online social media information has been the subject of study in several data science fields due to its impact on users as a communication and expression channel. Data gathered from online platforms such as Twitter has the potential to facilitate research on social phenomena based on sentiment analysis, which usually employs Natural Language Processing and Machine Learning techniques to interpret sentimental tendencies related to users’ opinions and make predictions about real events. Cyber-attacks are not isolated from opinion subjectivity on online social networks. Various security attacks are performed by hacker activists motivated by reactions to polemical social events. In this paper, a methodology for tracking social data that can trigger cyber-attacks is developed. Our main contribution lies in the monthly prediction of tweets with content related to security attacks, and of the incidents detected, based on ℓ1 regularization.

  2. Sonic Boom Mitigation Through Aircraft Design and Adjoint Methodology

    NASA Technical Reports Server (NTRS)

    Rallabhandi, Siriam K.; Diskin, Boris; Nielsen, Eric J.

    2012-01-01

    This paper presents a novel approach to the design of a supersonic aircraft outer mold line (OML) by optimizing the A-weighted loudness of the sonic boom signature predicted on the ground. The optimization process uses the sensitivity information obtained by coupling the discrete adjoint formulations for the augmented Burgers equation and the Computational Fluid Dynamics (CFD) equations. This coupled formulation links the loudness of the ground boom signature to the aircraft geometry, thus allowing efficient shape optimization for the purpose of minimizing loudness. The accuracy of the adjoint-based sensitivities is verified against sensitivities obtained using an independent complex-variable approach. The adjoint-based optimization methodology is applied to a configuration previously optimized using alternative state-of-the-art optimization methods and produces additional loudness reduction. The results of the optimizations are reported and discussed.

  3. Learning physical descriptors for materials science by compressed sensing

    NASA Astrophysics Data System (ADS)

    Ghiringhelli, Luca M.; Vybiral, Jan; Ahmetcik, Emre; Ouyang, Runhai; Levchenko, Sergey V.; Draxl, Claudia; Scheffler, Matthias

    2017-02-01

    The availability of big data in materials science offers new routes for analyzing materials properties and functions and achieving scientific understanding. Finding structure in these data that is not directly visible with standard tools, and exploiting the scientific information they contain, requires new and dedicated methodologies based on approaches from statistical learning, compressed sensing, and other recent methods from applied mathematics, computer science, statistics, signal processing, and information science. In this paper, we explain and demonstrate a compressed-sensing-based methodology for feature selection, specifically for discovering physical descriptors, i.e., physical parameters that describe the material and its properties of interest, and associated equations that explicitly and quantitatively describe those relevant properties. As a showcase application and proof of concept, we describe how to build a physical model for the quantitative prediction of the crystal structure of binary compound semiconductors.
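
    A minimal sketch of compressed-sensing-style descriptor selection in the spirit of the abstract, assuming scikit-learn; the primary features, candidate descriptors, and target property below are synthetic, not the semiconductor data used in the paper.

      # A pool of candidate descriptors is built from two primary features;
      # cross-validated lasso then keeps a sparse subset of them.
      import numpy as np
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(1)
      A, B = rng.uniform(1.0, 3.0, (200, 2)).T      # two primary (e.g., atomic) features
      names = ["A", "B", "A+B", "A-B", "A*B", "A/B", "A^2", "B^2", "exp(-A)", "exp(-B)"]
      candidates = np.column_stack([A, B, A + B, A - B, A * B, A / B, A**2, B**2, np.exp(-A), np.exp(-B)])
      target = 2.0 * (A - B) + 0.5 * np.exp(-A) + rng.normal(0.0, 0.05, 200)   # synthetic property

      model = LassoCV(cv=5).fit(candidates, target)
      for name, w in zip(names, model.coef_):
          if abs(w) > 1e-3:
              print(f"selected descriptor: {name:8s} weight = {w:+.3f}")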

  4. Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions

    NASA Technical Reports Server (NTRS)

    Balmes, Etienne

    1993-01-01

    An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.

  5. An automated construction of error models for uncertainty quantification and model calibration

    NASA Astrophysics Data System (ADS)

    Josset, L.; Lunati, I.

    2015-12-01

    To reduce the computational cost of stochastic predictions, it is common practice to rely on approximate flow solvers (or «proxy»), which provide an inexact, but computationally inexpensive response [1,2]. Error models can be constructed to correct the proxy response: based on a learning set of realizations for which both exact and proxy simulations are performed, a transformation is sought to map proxy into exact responses. Once the error model is constructed, a prediction of the exact response is obtained at the cost of a proxy simulation for any new realization. Despite its effectiveness [2,3], the methodology relies on several user-defined parameters, which impact the accuracy of the predictions. To achieve a fully automated construction, we propose a novel methodology based on an iterative scheme: we first initialize the error model with a small training set of realizations; then, at each iteration, we add a new realization both to improve the model and to evaluate its performance. More specifically, at each iteration we use the responses predicted by the updated model to identify the realizations that need to be considered to compute the quantity of interest. Another user-defined parameter is the number of dimensions of the response spaces between which the mapping is sought. To identify the space dimensions that optimally balance mapping accuracy and the risk of overfitting, we follow a Leave-One-Out Cross-Validation procedure. Also, the definition of a stopping criterion is central to an automated construction. We use a stability measure based on bootstrap techniques to stop the iterative procedure when the iterative model has converged. The methodology is illustrated with two test cases in which an inverse problem has to be solved, and its performance is assessed. We show that an iterative scheme is crucial to increase the applicability of the approach. [1] Josset, L., and I. Lunati, Local and global error models for improving uncertainty quantification, Mathematical Geosciences, 2013. [2] Josset, L., D. Ginsbourger, and I. Lunati, Functional Error Modeling for uncertainty quantification in hydrogeology, Water Resources Research, 2015. [3] Josset, L., V. Demyanov, A. H. Elsheikh, and I. Lunati, Accelerating Monte Carlo Markov chains with proxy and error models, Computers & Geosciences, 2015 (in press).
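
    A minimal sketch of the iterative error-model construction described above; a one-dimensional synthetic proxy/exact pair and a polynomial mapping stand in for the flow-solver responses, and the stopping rule is a plain change-in-prediction tolerance rather than the bootstrap-based stability measure.

      # The error model maps proxy responses to exact responses; one exact
      # simulation is added per iteration until the quantity of interest stabilizes.
      import numpy as np

      rng = np.random.default_rng(2)
      proxy = rng.uniform(0.0, 1.0, 200)                                  # inexpensive proxy responses
      exact = 1.2 * proxy + 0.3 * proxy**2 + rng.normal(0.0, 0.02, 200)   # "exact" responses

      train = list(range(5))        # small initial learning set
      previous = None
      for new_index in range(5, 60):
          coeffs = np.polyfit(proxy[train], exact[train], deg=2)   # fit the error model
          predicted = np.polyval(coeffs, proxy)                    # predict exact response everywhere
          qoi = predicted.mean()                                   # quantity of interest (here: the mean)
          if previous is not None and abs(qoi - previous) < 1e-4:
              break                                                # model considered converged
          previous = qoi
          train.append(new_index)                                  # enrich the learning set and iterate
      print(f"stopped after {len(train)} exact simulations, QoI = {qoi:.4f}")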

  6. Reduce Manual Curation by Combining Gene Predictions from Multiple Annotation Engines, a Case Study of Start Codon Prediction

    PubMed Central

    Ederveen, Thomas H. A.; Overmars, Lex; van Hijum, Sacha A. F. T.

    2013-01-01

    Nowadays, prokaryotic genomes are sequenced faster than the capacity to manually curate gene annotations. Automated genome annotation engines (AGEs) provide users with a straightforward and complete solution for predicting open reading frame (ORF) coordinates and function. For many labs, the use of AGEs is therefore essential to decrease the time necessary for annotating a given prokaryotic genome. However, it is not uncommon for AGEs to provide different and sometimes conflicting predictions. Combining multiple AGEs might allow for more accurate predictions. Here we analyzed the ab initio ORF-calling performance of different AGEs based on curated genome annotations of eight strains from different bacterial species with GC% ranging from 35–52%. We present a case study which demonstrates a novel way of comparative genome annotation, using combinations of AGEs in a pre-defined order (or path) to predict ORF start codons. The order of AGE combinations is from high to low specificity, where the specificity is based on the eight genome annotations. For each AGE combination we are able to derive a so-called projected confidence value, which is the average specificity of ORF start codon prediction based on the eight genomes. The projected confidence enables estimating the likelihood of a correct prediction for a particular ORF start codon by a particular AGE combination, pinpointing ORFs whose start codons are notoriously difficult to predict. We correctly predict start codons for 90.5±4.8% of the genes in a genome (based on the eight genomes) with an accuracy of 81.1±7.6%. Our consensus-path methodology provides a marked improvement over majority voting (9.7±4.4%), and with an optimal path, ORF start prediction sensitivity is gained while maintaining a high specificity. PMID:23675487
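
    A minimal sketch of the consensus-path idea for start codon selection; the engine names, coordinates, and projected confidence values below are hypothetical, whereas the paper derives its ordering of AGE combinations from the eight curated genomes.

      # Combinations of annotation engines are tried from highest to lowest
      # projected confidence; the first combination whose members agree fixes
      # the predicted start codon for the ORF.
      consensus_path = [
          (("engineA", "engineB", "engineC"), 0.97),   # engines that must agree, projected confidence
          (("engineA", "engineB"), 0.93),
          (("engineA",), 0.85),
      ]

      def predict_start(predictions):
          """predictions: dict mapping engine name -> predicted start coordinate for one ORF."""
          for engines, confidence in consensus_path:
              starts = {predictions[e] for e in engines if e in predictions}
              if len(starts) == 1:                     # all engines in this combination agree
                  return starts.pop(), confidence
          return None, 0.0                             # ORF flagged as hard to call

      print(predict_start({"engineA": 1203, "engineB": 1203, "engineC": 1176}))   # -> (1203, 0.93)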

  7. Pitfalls and Precautions When Using Predicted Failure Data for Quantitative Analysis of Safety Risk for Human Rated Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Hatfield, Glen S.; Hark, Frank; Stott, James

    2016-01-01

    Launch vehicle reliability analysis is largely dependent upon using predicted failure rates from data sources such as MIL-HDBK-217F. Reliability prediction methodologies based on component data do not take into account system integration risks such as those attributable to manufacturing and assembly. These sources often dominate component level risk. While consequence of failure is often understood, using predicted values in a risk model to estimate the probability of occurrence may underestimate the actual risk. Managers and decision makers use the probability of occurrence to influence the determination whether to accept the risk or require a design modification. The actual risk threshold for acceptance may not be fully understood due to the absence of system level test data or operational data. This paper will establish a method and approach to identify the pitfalls and precautions of accepting risk based solely upon predicted failure data. This approach will provide a set of guidelines that may be useful to arrive at a more realistic quantification of risk prior to acceptance by a program.

  8. G-LoSA for Prediction of Protein-Ligand Binding Sites and Structures.

    PubMed

    Lee, Hui Sun; Im, Wonpil

    2017-01-01

    Recent advances in high-throughput structure determination and computational protein structure prediction have significantly enriched the universe of protein structure. However, there is still a large gap between the number of available protein structures and that of proteins with annotated function in high accuracy. Computational structure-based protein function prediction has emerged to reduce this knowledge gap. The identification of a ligand binding site and its structure is critical to the determination of a protein's molecular function. We present a computational methodology for predicting small molecule ligand binding site and ligand structure using G-LoSA, our protein local structure alignment and similarity measurement tool. All the computational procedures described here can be easily implemented using G-LoSA Toolkit, a package of standalone software programs and preprocessed PDB structure libraries. G-LoSA and G-LoSA Toolkit are freely available to academic users at http://compbio.lehigh.edu/GLoSA . We also illustrate a case study to show the potential of our template-based approach harnessing G-LoSA for protein function prediction.

  9. Human-based approaches to pharmacology and cardiology: an interdisciplinary and intersectorial workshop.

    PubMed

    Rodriguez, Blanca; Carusi, Annamaria; Abi-Gerges, Najah; Ariga, Rina; Britton, Oliver; Bub, Gil; Bueno-Orovio, Alfonso; Burton, Rebecca A B; Carapella, Valentina; Cardone-Noott, Louie; Daniels, Matthew J; Davies, Mark R; Dutta, Sara; Ghetti, Andre; Grau, Vicente; Harmer, Stephen; Kopljar, Ivan; Lambiase, Pier; Lu, Hua Rong; Lyon, Aurore; Minchole, Ana; Muszkiewicz, Anna; Oster, Julien; Paci, Michelangelo; Passini, Elisa; Severi, Stefano; Taggart, Peter; Tinker, Andy; Valentin, Jean-Pierre; Varro, Andras; Wallman, Mikael; Zhou, Xin

    2016-09-01

    Both biomedical research and clinical practice rely on complex datasets for the physiological and genetic characterization of human hearts in health and disease. Given the complexity and variety of approaches and recordings, there is now growing recognition of the need to embed computational methods in cardiovascular medicine and science for analysis, integration and prediction. This paper describes a Workshop on Computational Cardiovascular Science that created an international, interdisciplinary and inter-sectorial forum to define the next steps for a human-based approach to disease supported by computational methodologies. The main ideas highlighted were (i) a shift towards human-based methodologies, spurred by advances in new in silico, in vivo, in vitro, and ex vivo techniques and the increasing acknowledgement of the limitations of animal models. (ii) Computational approaches complement, expand, bridge, and integrate in vitro, in vivo, and ex vivo experimental and clinical data and methods, and as such they are an integral part of human-based methodologies in pharmacology and medicine. (iii) The effective implementation of multi- and interdisciplinary approaches, teams, and training combining and integrating computational methods with experimental and clinical approaches across academia, industry, and healthcare settings is a priority. (iv) The human-based cross-disciplinary approach requires experts in specific methodologies and domains, who also have the capacity to communicate and collaborate across disciplines and cross-sector environments. (v) This new translational domain for human-based cardiology and pharmacology requires new partnerships supported financially and institutionally across sectors. Institutional, organizational, and social barriers must be identified, understood and overcome in each specific setting. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Cardiology.

  10. Recent activities within the Aeroservoelasticity Branch at the NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Noll, Thomas E.; Perry, Boyd, III; Gilbert, Michael G.

    1989-01-01

    The objective of research in aeroservoelasticity at the NASA Langley Research Center is to enhance the modeling, analysis, and multidisciplinary design methodologies for obtaining multifunction digital control systems for application to flexible flight vehicles. Recent accomplishments are discussed, and a status report on current activities within the Aeroservoelasticity Branch is presented. In the area of modeling, improvements to the Minimum-State Method of approximating unsteady aerodynamics are shown to provide precise, low-order aeroservoelastic models for design and simulation activities. Analytical methods based on Matched Filter Theory and Random Process Theory to provide efficient and direct predictions of the critical gust profile and the time-correlated gust loads for linear structural design considerations are also discussed. Two research projects leading towards improved design methodology are summarized. The first program is developing an integrated structure/control design capability based on hierarchical problem decomposition, multilevel optimization and analytical sensitivities. The second program provides procedures for obtaining low-order, robust digital control laws for aeroelastic applications. In terms of methodology validation and application the current activities associated with the Active Flexible Wing project are reviewed.

  11. A novel tree-based algorithm to discover seismic patterns in earthquake catalogs

    NASA Astrophysics Data System (ADS)

    Florido, E.; Asencio-Cortés, G.; Aznarte, J. L.; Rubio-Escudero, C.; Martínez-Álvarez, F.

    2018-06-01

    A novel methodology is introduced in this research study to detect seismic precursors. Based on an existing approach, the new methodology searches for patterns in the historical data. Such patterns may contain statistical or soil dynamics information. It improves the original version in several aspects. First, new seismicity indicators have been used to characterize earthquakes. Second, a machine learning clustering algorithm has been applied in a very flexible way, thus allowing the discovery of new data groupings. Third, a novel search strategy is proposed in order to obtain non-overlapping patterns. And, fourth, arbitrary lengths of patterns are searched for, thus discovering long- and short-term behaviors that may influence the occurrence of medium to large earthquakes. The methodology has been applied to seven different datasets, from three different regions, namely the Iberian Peninsula, Chile and Japan. Reported results show a remarkable improvement with respect to the former version, in terms of all evaluated quality measures. In particular, the number of false positives has decreased and the positive predictive values have increased, both very markedly.
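
    A minimal sketch of the clustering step of such a methodology, assuming scikit-learn; the two seismicity indicators and the synthetic catalogue windows below are placeholders for the richer indicator set computed from the real catalogues.

      # Each time window is described by seismicity indicators and assigned a
      # cluster label; precursory patterns of arbitrary length are then
      # searched for in the resulting label sequence.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(4)
      b_value = rng.normal(1.0, 0.15, 300)       # Gutenberg-Richter b-value per window
      rate = rng.gamma(2.0, 1.5, 300)            # events per unit time per window
      indicators = np.column_stack([b_value, rate])

      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(indicators)
      print("label sequence (first 20 windows):", labels[:20])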

  12. Computational Analyses in Support of Sub-scale Diffuser Testing for the A-3 Facility. Part 3; Aero-Acoustic Analyses and Experimental Validation

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.; Graham, Jason S.; McVay, Greg P.; Langford, Lester L.

    2008-01-01

    A unique assessment of acoustic similarity scaling laws and acoustic analogy methodologies in predicting the far-field acoustic signature from a sub-scale altitude rocket test facility at the NASA Stennis Space Center was performed. A directional, point-source similarity analysis was implemented for predicting the acoustic far-field. In this approach, experimental acoustic data obtained from "similar" rocket engine tests were appropriately scaled using key geometric and dynamic parameters. The accuracy of this engineering-level method is discussed by comparing the predictions with the acoustic far-field measurements obtained. In addition, a CFD solver was coupled with Lilley's acoustic analogy formulation to determine the improvement gained by using a physics-based methodology over an experimental correlation approach. In the current work, steady-state Reynolds-averaged Navier-Stokes calculations were used to model the internal flow of the rocket engine and altitude diffuser. These internal flow simulations provided the necessary realistic input conditions for external plume simulations. The CFD plume simulations were then used to provide the spatial turbulent noise source distributions in the acoustic analogy calculations. Preliminary findings of these studies will be discussed.

  13. Parallel methodology to capture cyclic variability in motored engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ameen, Muhsin M.; Yang, Xiaofeng; Kuo, Tang-Wei

    2016-07-28

    Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flowfield, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In this study, a new methodology is proposed to dissociate this long time-scale problem into several shorter time-scale problems, which can considerably reduce the computational time without sacrificing the fidelity of the simulations. The strategy is to perform multiple single-cycle simulations in parallel by effectively perturbing the simulation parameters such as the initial and boundary conditions. It is shown that by perturbing the initial velocity field effectively based on the intensity of the in-cylinder turbulence, the mean and variance of the in-cylinder flowfield are captured reasonably well. Adding perturbations in the initial pressure field and the boundary pressure improves the predictions. It is shown that this new approach is able to give accurate predictions of the flowfield statistics in less than one-tenth of the time required for the conventional approach of simulating consecutive engine cycles.

  14. Prediction of enzyme activity with neural network models based on electronic and geometrical features of substrates.

    PubMed

    Szaleniec, Maciej

    2012-01-01

    Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R(2) > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.
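
    A minimal sketch of an ANN-based QSAR regression of the kind discussed above, assuming scikit-learn; the descriptor matrix and activity values are synthetic, not the ethylbenzene dehydrogenase dataset.

      # Descriptors are standardized and fed to a small feed-forward network
      # that regresses enzymatic activity.
      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(5)
      X = rng.normal(size=(120, 6))                                       # 6 electronic/geometric descriptors
      y = 1.5 * X[:, 0] - 0.8 * X[:, 2] ** 2 + rng.normal(0.0, 0.1, 120)  # synthetic activity values

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      qsar = make_pipeline(StandardScaler(),
                           MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
      qsar.fit(X_tr, y_tr)
      print("test R^2:", round(qsar.score(X_te, y_te), 3))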

  15. Structured prediction models for RNN based sequence labeling in clinical text.

    PubMed

    Jagannatha, Abhyuday N; Yu, Hong

    2016-11-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves the extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models combined with Recurrent Neural Networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities.

  16. Structured prediction models for RNN based sequence labeling in clinical text

    PubMed Central

    Jagannatha, Abhyuday N; Yu, Hong

    2016-01-01

    Sequence labeling is a widely used method for named entity recognition and information extraction from unstructured natural language data. In the clinical domain, one major application of sequence labeling involves the extraction of medical entities such as medication, indication, and side-effects from Electronic Health Record narratives. Sequence labeling in this domain presents its own set of challenges and objectives. In this work we experimented with various CRF-based structured learning models combined with Recurrent Neural Networks. We extend the previously studied LSTM-CRF models with explicit modeling of pairwise potentials. We also propose an approximate version of skip-chain CRF inference with RNN potentials. We use these methodologies for structured prediction in order to improve the exact phrase detection of various medical entities. PMID:28004040
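
    A minimal sketch of the recurrent emission model underlying such taggers, assuming PyTorch; it shows only a bidirectional LSTM with a per-token softmax loss on random token and tag IDs, and omits the CRF layer with pairwise potentials that the paper adds on top.

      import torch
      import torch.nn as nn

      class BiLSTMTagger(nn.Module):
          def __init__(self, vocab_size, tagset_size, emb_dim=32, hidden=64):
              super().__init__()
              self.emb = nn.Embedding(vocab_size, emb_dim)
              self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
              self.out = nn.Linear(2 * hidden, tagset_size)

          def forward(self, tokens):                  # tokens: (batch, seq_len) int64
              h, _ = self.lstm(self.emb(tokens))      # (batch, seq_len, 2 * hidden)
              return self.out(h)                      # unnormalized tag scores per token

      model = BiLSTMTagger(vocab_size=100, tagset_size=5)
      tokens = torch.randint(0, 100, (2, 12))         # two random sequences of 12 tokens
      tags = torch.randint(0, 5, (2, 12))             # random gold tags
      logits = model(tokens)
      loss = nn.CrossEntropyLoss()(logits.reshape(-1, 5), tags.reshape(-1))
      loss.backward()
      print("training loss:", float(loss))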

  17. A Theory-Based Methodology for Analyzing Domain Suitability for Expert Systems Technology Applications

    DTIC Science & Technology

    1989-06-01

    ... amount of data now available as a result of these experiments, we are still unable to explain the "why" or "how" of successful systems or to predict for ... order to better understand the potential contribution of this research, it is important to first discuss how expert systems are developed and the ... task performance through internal and external feedback. 9. Knowing how to act upon the feedback received. 10. Implementing the action based on the ...

  18. Revisiting Rossion and Pourtois with new ratings for automated complexity, familiarity, beauty, and encounter.

    PubMed

    Forsythe, Alex; Street, Nichola; Helmy, Mai

    2017-08-01

    Differences between norm ratings collected when participants are asked to consider more than one picture characteristic are contrasted with the traditional methodological approach of collecting ratings separately for each image construct. We present data suggesting that reporting normative data based on methodological procedures that ask participants to consider multiple image constructs simultaneously could potentially confound the norm data. We provide data for two new image constructs: beauty, and the extent to which participants encountered the stimuli in their everyday lives. Analysis of these data suggests that familiarity and encounter tap different image constructs. The extent to which an observer encounters an object predicts human judgments of visual complexity. Encountering an image was also found to be an important predictor of beauty, but familiarity with that image was not. Taken together, these results suggest that continuing to collect complexity measures from human judgments is a pointless exercise. Automated measures are more reliable and valid, and they are demonstrated here to predict human preferences.

  19. A Fatigue Life Prediction Model of Welded Joints under Combined Cyclic Loading

    NASA Astrophysics Data System (ADS)

    Goes, Keurrie C.; Camarao, Arnaldo F.; Pereira, Marcos Venicius S.; Ferreira Batalha, Gilmar

    2011-01-01

    A practical and robust methodology is developed to evaluate the fatigue life of seam-welded joints subjected to combined cyclic loading. The fatigue analysis was conducted in a virtual environment. The FE stress results from each loading were imported into the fatigue code FE-Fatigue and combined to perform the fatigue life prediction using the S x N (stress x life) method. The measurement or modelling of the residual stresses resulting from the welding process is not part of this work. However, the thermal and metallurgical effects, such as distortions and residual stresses, were considered indirectly through fatigue curve corrections for the samples investigated. A tube-plate specimen was subjected to combined cyclic loading (bending and torsion) with constant amplitude. The virtual durability analysis result was calibrated against these laboratory tests and design codes such as BS7608 and Eurocode 3. The feasibility and application of the proposed numerical-experimental methodology and its contributions to technical development are discussed. Major challenges associated with this modelling and proposals for improvement are finally presented.

  20. Imprecise (fuzzy) information in geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bardossy, A.; Bogardi, I.; Kelly, W.E.

    1988-05-01

    A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been the insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram, which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data points made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.

  1. Simulation of thermomechanical fatigue in solder joints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, H.E.; Porter, V.L.; Fye, R.M.

    1997-12-31

    Thermomechanical fatigue (TMF) is a very complex phenomenon in electronic component systems and has been identified as one prominent degradation mechanism for surface mount solder joints in the stockpile. In order to precisely predict the TMF-related effects on the reliability of electronic components in weapons, a multi-level simulation methodology is being developed at Sandia National Laboratories. This methodology links simulation codes of continuum mechanics (JAS3D), microstructural mechanics (GLAD), and microstructural evolution (PARGRAIN) to treat the disparate length scales that exist between the macroscopic response of the component and the microstructural changes occurring in its constituent materials. JAS3D is used to predict strain/temperature distributions in the component due to environmental variable fluctuations. GLAD identifies damage initiation and accumulation in detail based on the spatial information provided by JAS3D. PARGRAIN simulates the changes of material microstructure, such as the heterogeneous coarsening in Sn-Pb solder, when the component's service environment varies.

  2. Uncertainty Representation and Interpretation in Model-Based Prognostics Algorithms Based on Kalman Filter Estimation

    NASA Technical Reports Server (NTRS)

    Galvan, Jose Ramon; Saxena, Abhinav; Goebel, Kai Frank

    2012-01-01

    This article discusses several aspects of uncertainty representation and management for model-based prognostics methodologies, based on our experience with Kalman filters applied to prognostics for electronics components. In particular, it explores the implications of modeling remaining useful life prediction as a stochastic process, and how this relates to uncertainty representation and management and to the role of prognostics in decision-making. A distinction between two interpretations of the estimated remaining useful life probability density function is explained, and a cautionary argument is provided against mixing the two interpretations when prognostics are used to make critical decisions.

  3. Analysis of Aerospike Plume Induced Base-Heating Environment

    NASA Technical Reports Server (NTRS)

    Wang, Ten-See

    1998-01-01

    Computational analysis is conducted to study the effect of an aerospike engine plume on the X-33 base-heating environment during ascent flight. To properly account for the effects of the forebody and aftbody flowfields, such as shocks, and to allow for potential plume-induced flow separation, the thermo-flowfield is computed at trajectory points. The computational methodology is based on a three-dimensional, finite-difference, viscous-flow, chemically reacting, pressure-based computational fluid dynamics formulation, and a three-dimensional, finite-volume, spectral-line-based weighted-sum-of-gray-gases radiation absorption model for the computational heat transfer formulation. The predicted convective and radiative base-heat fluxes are presented.

  4. Best Practices for Mudweight Window Generation and Accuracy Assessment between Seismic Based Pore Pressure Prediction Methodologies for a Near-Salt Field in Mississippi Canyon, Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Mannon, Timothy Patrick, Jr.

    Improving well design has been, and always will be, the primary goal in drilling operations in the oil and gas industry. Oil and gas plays are continuing to move into increasingly hostile drilling environments, including near-salt and sub-salt settings. The ability to reduce the risk and uncertainty involved in drilling operations in unconventional geologic settings starts with improving the techniques for mudweight window modeling. To address this issue, an analysis of wellbore stability and well design improvement has been conducted. This study will show a systematic approach to well design by focusing on best practices for mudweight window projection for a field in Mississippi Canyon, Gulf of Mexico. The field includes depleted reservoirs and is in close proximity to salt intrusions. Analysis of offset wells has been conducted in the interest of developing an accurate picture of the subsurface environment by making connections between depth, non-productive time (NPT) events, and the mudweights used. Commonly practiced petrophysical methods of pore pressure, fracture pressure, and shear failure gradient prediction have been applied to key offset wells in order to enhance the well design for two proposed wells. For the first time in the literature, the commonly accepted seismic-interval-velocity-based methodology and the relatively new seismic-frequency-based methodology for pore pressure prediction are qualitatively and quantitatively compared for accuracy. Accuracy standards will be based on the agreement of the seismic outputs with pressure data obtained while drilling and with petrophysically based pore pressure outputs for each well. The results will show significantly higher accuracy for the seismic-frequency-based approach in wells in near-salt or sub-salt environments and higher overall accuracy across all of the wells in the study.

  5. Fundamental Use of Surgical Energy (FUSE) certification: validation and predictors of success.

    PubMed

    Robinson, Thomas N; Olasky, Jaisa; Young, Patricia; Feldman, Liane S; Fuchshuber, Pascal R; Jones, Stephanie B; Madani, Amin; Brunt, Michael; Mikami, Dean; Jackson, Gretchen P; Mischna, Jessica; Schwaitzberg, Steven; Jones, Daniel B

    2016-03-01

    The Fundamental Use of Surgical Energy (FUSE) program includes a Web-based didactic curriculum and a high-stakes multiple-choice question examination with the goal to provide certification of knowledge on the safe use of surgical energy-based devices. The purpose of this study was (1) to set a passing score through a psychometrically sound process and (2) to determine what pretest factors predicted passing the FUSE examination. Beta-testing of multiple-choice questions on 62 topics of importance to the safe use of surgical energy-based devices was performed. Eligible test takers were physicians with a minimum of 1 year of surgical training who were recruited by FUSE task force members. A pretest survey collected baseline information. A total of 227 individuals completed the FUSE beta-test, and 208 completed the pretest survey. The passing/cut score for the first test form of the FUSE multiple-choice examination was determined using the modified Angoff methodology and for the second test form was determined using a linear equating methodology. The overall passing rate across the two examination forms was 81.5%. Self-reported time studying the FUSE Web-based curriculum for a minimum of >2 h was associated with a passing examination score (p < 0.001). Performance was not different based on increased years of surgical practice (p = 0.363), self-reported expertise on one or more types of energy-based devices (p = 0.683), participation in the FUSE postgraduate course (p = 0.426), or having reviewed the FUSE manual (p = 0.428). Logistic regression found that studying the FUSE didactics for >2 h predicted a passing score (OR 3.61; 95% CI 1.44-9.05; p = 0.006) independent of the other baseline characteristics recorded. The development of the FUSE examination, including the passing score, followed a psychometrically sound process. Self-reported time studying the FUSE curriculum predicted a passing score independent of other pretest characteristics such as years in practice and self-reported expertise.

  6. Developing a methodology to predict oak wilt distribution using classification tree analysis

    Treesearch

    Marla C. Downing; Vernon L. Thomas; Robin M. Reich

    2006-01-01

    Oak wilt (Ceratocystis fagacearum), a fungal disease that causes some species of oak trees to wilt and die rapidly, is a threat to oak forested resources in 22 states in the United States. We developed a methodology for predicting the Potential Distribution of Oak Wilt (PDOW) using Anoka County, Minnesota as our study area. The PDOW utilizes GIS; the...

  7. Display/control requirements for automated VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hoffman, W. C.; Kleinman, D. L.; Young, L. R.

    1976-01-01

    A systematic design methodology for pilot displays in advanced commercial VTOL aircraft was developed and refined. The analyst is provided with a step-by-step procedure for conducting conceptual display/control configuration evaluations for simultaneous monitoring and control pilot tasks. The approach consists of three phases: formulation of information requirements, configuration evaluation, and system selection. Both the monitoring and control performance models are based upon the optimal control model of the human operator. Extensions to the conventional optimal control model required in the display design methodology include explicit optimization of control/monitoring attention; simultaneous monitoring and control performance predictions; and indifference threshold effects. The methodology was applied to NASA's experimental CH-47 helicopter in support of the VALT program. The CH-47 application examined system performance at six flight conditions. Four candidate configurations are suggested for evaluation in pilot-in-the-loop simulations and eventual flight tests.

  8. Semi-Empirical Prediction of Aircraft Low-Speed Aerodynamic Characteristics

    NASA Technical Reports Server (NTRS)

    Olson, Erik D.

    2015-01-01

    This paper lays out a comprehensive methodology for computing a low-speed, high-lift polar, without requiring additional details about the aircraft design beyond what is typically available at the conceptual design stage. Introducing low-order, physics-based aerodynamic analyses allows the methodology to be more applicable to unconventional aircraft concepts than traditional, fully-empirical methods. The methodology uses empirical relationships for flap lift effectiveness, chord extension, drag-coefficient increment and maximum lift coefficient of various types of flap systems as a function of flap deflection, and combines these increments with the characteristics of the unflapped airfoils. Once the aerodynamic characteristics of the flapped sections are known, a vortex-lattice analysis calculates the three-dimensional lift, drag and moment coefficients of the whole aircraft configuration. This paper details the results of two validation cases: a supercritical airfoil model with several types of flaps; and a 12-foot, full-span aircraft model with slats and double-slotted flaps.

  9. MoFvAb: Modeling the Fv region of antibodies

    PubMed Central

    Bujotzek, Alexander; Fuchs, Angelika; Qu, Changtao; Benz, Jörg; Klostermann, Stefan; Antes, Iris; Georges, Guy

    2015-01-01

    Knowledge of the 3-dimensional structure of the antigen-binding region of antibodies enables numerous useful applications regarding the design and development of antibody-based drugs. We present a knowledge-based antibody structure prediction methodology that incorporates concepts that have arisen from an applied antibody engineering environment. The protocol exploits the rich and continuously growing supply of experimentally derived antibody structures available to predict CDR loop conformations and the packing of heavy and light chain quickly and without user intervention. The homology models are refined by a novel antibody-specific approach to adapt and rearrange sidechains based on their chemical environment. The method achieves very competitive all-atom root mean square deviation values in the order of 1.5 Å on different evaluation datasets consisting of both known and previously unpublished antibody crystal structures. PMID:26176812

  10. Life prediction methodology for thermal-mechanical fatigue and elevated temperature creep design

    NASA Astrophysics Data System (ADS)

    Annigeri, Ravindra

    Nickel-based superalloys are used for hot section components of gas turbine engines. Life prediction techniques are necessary to assess service damage in superalloy components resulting from thermal-mechanical fatigue (TMF) and elevated temperature creep. A new TMF life model based on continuum damage mechanics has been developed and applied to IN 738 LC substrate material with and without coating. The model also characterizes TMF failure in bulk NiCoCrAlY overlay and NiAl aluminide coatings. The inputs to the TMF life model are the mechanical strain range, hold time, peak cycle temperatures, and maximum stress measured from the stabilized or mid-life hysteresis loops. A viscoplastic model is used to predict the stress-strain hysteresis loops. A flow rule used in the viscoplastic model characterizes the inelastic strain rate as a function of the applied stress and a set of three internal stress variables known as back stress, drag stress and limit stress. Test results show that the viscoplastic model can reasonably predict the time-dependent stress-strain response of the coated material and stress relaxation during hold times. In addition to the TMF life prediction methodology, a model has been developed to characterize the uniaxial and multiaxial creep behavior. An effective stress, defined as the applied stress minus the back stress, is used to characterize the creep recovery and primary creep behavior. The back stress has terms representing strain hardening, dynamic recovery and thermal recovery. Whenever the back stress is greater than the applied stress, the model predicts a negative creep rate, as observed during multiple-stress and multiple-temperature cyclic tests. The model also predicts the rupture time and the remaining life, which are important for life assessment. The model has been applied to IN 738 LC, Mar-M247, bulk NiCoCrAlY overlay coating and 316 austenitic stainless steel. The proposed model predicts creep response with reasonable accuracy for a wide range of loading cases such as uniaxial tension, tension-torsion and tension-internal pressure loading.

  11. Removal of Antibiotics in Biological Wastewater Treatment Systems-A Critical Assessment Using the Activated Sludge Modeling Framework for Xenobiotics (ASM-X).

    PubMed

    Polesel, Fabio; Andersen, Henrik R; Trapp, Stefan; Plósz, Benedek Gy

    2016-10-04

    Many scientific studies present removal efficiencies for pharmaceuticals in laboratory-, pilot-, and full-scale wastewater treatment plants, based on observations that may be impacted by theoretical and methodological approaches used. In this Critical Review, we evaluated factors influencing observed removal efficiencies of three antibiotics (sulfamethoxazole, ciprofloxacin, tetracycline) in pilot- and full-scale biological treatment systems. Factors assessed include (i) retransformation to parent pharmaceuticals from e.g., conjugated metabolites and analogues, (ii) solid retention time (SRT), (iii) fractions sorbed onto solids, and (iv) dynamics in influent and effluent loading. A recently developed methodology was used, relying on the comparison of removal efficiency predictions (obtained with the Activated Sludge Model for Xenobiotics (ASM-X)) with representative measured data from literature. By applying this methodology, we demonstrated that (a) the elimination of sulfamethoxazole may be significantly underestimated when not considering retransformation from conjugated metabolites, depending on the type (urban or hospital) and size of upstream catchments; (b) operation at extended SRT may enhance antibiotic removal, as shown for sulfamethoxazole; (c) not accounting for fractions sorbed in influent and effluent solids may cause slight underestimation of ciprofloxacin removal efficiency. Using tetracycline as example substance, we ultimately evaluated implications of effluent dynamics and retransformation on environmental exposure and risk prediction.

  12. An endochronic theory for transversely isotropic fibrous composites

    NASA Technical Reports Server (NTRS)

    Pindera, M. J.; Herakovich, C. T.

    1981-01-01

    A rational methodology for modelling both the nonlinear elastic and the dissipative response of transversely isotropic fibrous composites is developed and illustrated with the aid of the observed response of graphite-polyimide off-axis coupons. The methodology is based on the internal variable formalism employed within the context of classical irreversible thermodynamics and entails an extension of Valanis' endochronic theory to transversely isotropic media. Applicability of the theory to the prediction of various response characteristics of fibrous composites is illustrated by accurately modelling such often-observed phenomena as stiffening reversible behavior along the fiber direction; dissipative response in shear and transverse tension characterized by power laws with different hardening exponents; permanent strain accumulation; nonlinear unloading and reloading; and stress-interaction effects.

  13. Juncture flow improvement for wing/pylon configurations by using CFD methodology

    NASA Technical Reports Server (NTRS)

    Gea, Lie-Mine; Chyu, Wei J.; Stortz, Michael W.; Chow, Chuen-Yen

    1993-01-01

    The transonic flow field around a fighter wing/pylon configuration was simulated using an implicit upwind Navier-Stokes flow solver (F3D) and overset grid technology (Chimera). Flow separation and local shocks near the wing/pylon junction were observed in flight and predicted by the numerical calculations. A new pylon/fairing shape was proposed to improve the flow quality. Based on the numerical results, the size of the separation area is significantly reduced and the onset of separation is delayed farther downstream. A smoother pressure gradient is also obtained near the junction area. This paper demonstrates that computational fluid dynamics (CFD) methodology can be used as a practical tool for aircraft design.

  14. Determination of fragrance content in perfume by Raman spectroscopy and multivariate calibration.

    PubMed

    Godinho, Robson B; Santos, Mauricio C; Poppi, Ronei J

    2016-03-15

    An alternative methodology is herein proposed for the determination of fragrance content in perfumes and their classification according to the guidelines established by fine perfume manufacturers. The methodology is based on Raman spectroscopy associated with multivariate calibration, allowing the determination of fragrance content in a fast, nondestructive, and sustainable manner. The results were consistent with those of the conventional method, with standard errors of prediction lower than 1.0%. This result indicates that the proposed technology is a feasible analytical tool for the determination of fragrance content in a hydro-alcoholic solution, for use in manufacturing, quality control, and by regulatory agencies. Copyright © 2015 Elsevier B.V. All rights reserved.
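
    A minimal sketch of multivariate calibration with partial least squares, assuming scikit-learn; the spectra below are synthetic Gaussian bands rather than real Raman spectra of hydro-alcoholic fragrance mixtures.

      # PLS relates full spectra to reference fragrance contents and is then
      # used to predict the content of held-out samples.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(6)
      wavenumbers = np.linspace(400, 1800, 350)
      fragrance_band = np.exp(-0.5 * ((wavenumbers - 1600) / 20) ** 2)
      solvent_band = np.exp(-0.5 * ((wavenumbers - 880) / 25) ** 2)

      content = rng.uniform(5, 25, 60)                       # reference fragrance content, %
      spectra = (content[:, None] * fragrance_band
                 + rng.uniform(70, 90, 60)[:, None] * solvent_band
                 + rng.normal(0.0, 0.3, (60, wavenumbers.size)))

      pls = PLSRegression(n_components=3).fit(spectra[:45], content[:45])
      sep = np.sqrt(np.mean((pls.predict(spectra[45:]).ravel() - content[45:]) ** 2))
      print(f"standard error of prediction: {sep:.2f} %")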

  15. Paper-based and web-based intervention modeling experiments identified the same predictors of general practitioners' antibiotic-prescribing behavior.

    PubMed

    Treweek, Shaun; Bonetti, Debbie; Maclennan, Graeme; Barnett, Karen; Eccles, Martin P; Jones, Claire; Pitts, Nigel B; Ricketts, Ian W; Sullivan, Frank; Weal, Mark; Francis, Jill J

    2014-03-01

    To evaluate the robustness of the intervention modeling experiment (IME) methodology as a way of developing and testing behavioral change interventions before a full-scale trial by replicating an earlier paper-based IME. Web-based questionnaire and clinical scenario study. General practitioners across Scotland were invited to complete the questionnaire and scenarios, which were then used to identify predictors of antibiotic-prescribing behavior. These predictors were compared with the predictors identified in an earlier paper-based IME and used to develop a new intervention. Two hundred seventy general practitioners completed the questionnaires and scenarios. The constructs that predicted simulated behavior and intention were attitude, perceived behavioral control, risk perception/anticipated consequences, and self-efficacy, which match the targets identified in the earlier paper-based IME. The choice of persuasive communication as an intervention in the earlier IME was also confirmed. Additionally, a new intervention, an action plan, was developed. A web-based IME replicated the findings of an earlier paper-based IME, which provides confidence in the IME methodology. The interventions will now be evaluated in the next stage of the IME, a web-based randomized controlled trial. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Prediction of brain tissue temperature using near-infrared spectroscopy

    PubMed Central

    Holper, Lisa; Mitra, Subhabrata; Bale, Gemma; Robertson, Nicola; Tachtsidis, Ilias

    2017-01-01

    Broadband near-infrared spectroscopy (NIRS) can provide an endogenous indicator of tissue temperature based on the temperature dependence of the water absorption spectrum. We describe a first evaluation of the calibration and prediction of brain tissue temperature obtained during hypothermia in newborn piglets (animal dataset) and rewarming in newborn infants (human dataset) based on measured body (rectal) temperature. The calibration using partial least squares regression proved to be a reliable method to predict brain tissue temperature with respect to core body temperature in the wavelength interval of 720 to 880 nm, with a strong mean predictive power of R2=0.713±0.157 (animal dataset) and R2=0.798±0.087 (human dataset). In addition, we applied regression receiver operating characteristic curves for the first time to evaluate the temperature prediction, which provided an overall mean error bias between NIRS-predicted brain temperature and body temperature of 0.436±0.283°C (animal dataset) and 0.162±0.149°C (human dataset). We discuss the main methodological aspects, particularly the well-known aspect of over- versus underestimation between brain and body temperature, which is relevant for potential clinical applications. PMID:28630878

  17. T-Epitope Designer: A HLA-peptide binding prediction server.

    PubMed

    Kangueane, Pandjassarame; Sakharkar, Meena Kishore

    2005-05-15

    The current challenge in synthetic vaccine design is the development of a methodology to identify and test short antigen peptides as potential T-cell epitopes. Recently, we described a HLA-peptide binding model (using structural properties) capable of predicting peptides binding to any HLA allele. Consequently, we have developed a web server named T-EPITOPE DESIGNER to facilitate HLA-peptide binding prediction. The prediction server is based on a model that defines peptide binding pockets using information gleaned from X-ray crystal structures of HLA-peptide complexes, followed by the estimation of peptide binding to binding pockets. Thus, the prediction server enables the calculation of peptide binding to HLA alleles. This model is superior to many existing methods because of its potential application to any given HLA allele whose sequence is clearly defined. The web server finds potential application in T cell epitope vaccine design. http://www.bioinformation.net/ted/

  18. Prediction of genotoxic potential of cosmetic ingredients by an in silico battery system consisting of a combination of an expert rule-based system and a statistics-based system.

    PubMed

    Aiba née Kaneko, Maki; Hirota, Morihiko; Kouzuki, Hirokazu; Mori, Masaaki

    2015-02-01

    Genotoxicity is the most commonly used endpoint to predict the carcinogenicity of chemicals. The International Conference on Harmonization (ICH) M7 Guideline on Assessment and Control of DNA Reactive (Mutagenic) Impurities in Pharmaceuticals to Limit Potential Carcinogenic Risk offers guidance on (quantitative) structure-activity relationship ((Q)SAR) methodologies that predict the outcome of bacterial mutagenicity assay for actual and potential impurities. We examined the effectiveness of the (Q)SAR approach with the combination of DEREK NEXUS as an expert rule-based system and ADMEWorks as a statistics-based system for the prediction of not only mutagenic potential in the Ames test, but also genotoxic potential in mutagenicity and clastogenicity tests, using a data set of 342 chemicals extracted from the literature. The prediction of mutagenic potential or genotoxic potential by DEREK NEXUS or ADMEWorks showed high values of sensitivity and concordance, while prediction by the combination of DEREK NEXUS and ADMEWorks (battery system) showed the highest values of sensitivity and concordance among the three methods, but the lowest value of specificity. The number of false negatives was reduced with the battery system. We also separately predicted the mutagenic potential and genotoxic potential of 41 cosmetic ingredients listed in the International Nomenclature of Cosmetic Ingredients (INCI) among the 342 chemicals. Although specificity was low with the battery system, sensitivity and concordance were high. These results suggest that the battery system consisting of DEREK NEXUS and ADMEWorks is useful for prediction of genotoxic potential of chemicals, including cosmetic ingredients.

  19. Improving stability of prediction models based on correlated omics data by using network approaches.

    PubMed

    Tissier, Renaud; Houwing-Duistermaat, Jeanine; Rodríguez-Girondo, Mar

    2018-01-01

    Building prediction models based on complex omics datasets such as transcriptomics, proteomics, metabolomics remains a challenge in bioinformatics and biostatistics. Regularized regression techniques are typically used to deal with the high dimensionality of these datasets. However, due to the presence of correlation in the datasets, it is difficult to select the best model and application of these methods yields unstable results. We propose a novel strategy for model selection where the obtained models also perform well in terms of overall predictability. Several three step approaches are considered, where the steps are 1) network construction, 2) clustering to empirically derive modules or pathways, and 3) building a prediction model incorporating the information on the modules. For the first step, we use weighted correlation networks and Gaussian graphical modelling. Identification of groups of features is performed by hierarchical clustering. The grouping information is included in the prediction model by using group-based variable selection or group-specific penalization. We compare the performance of our new approaches with standard regularized regression via simulations. Based on these results we provide recommendations for selecting a strategy for building a prediction model given the specific goal of the analysis and the sizes of the datasets. Finally we illustrate the advantages of our approach by application of the methodology to two problems, namely prediction of body mass index in the DIetary, Lifestyle, and Genetic determinants of Obesity and Metabolic syndrome study (DILGOM) and prediction of response of each breast cancer cell line to treatment with specific drugs using a breast cancer cell lines pharmacogenomics dataset.
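
    A minimal sketch of the three-step idea, assuming scikit-learn and SciPy; synthetic correlated features stand in for omics data, and the group information enters through cluster-mean summary features with ridge regression, a simplification of the group-based variable selection and group-specific penalization discussed above.

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import squareform
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(7)
      latent = rng.normal(size=(100, 5))                                    # 5 latent modules
      X = np.repeat(latent, 10, axis=1) + rng.normal(0.0, 0.3, (100, 50))   # 50 correlated features
      y = latent[:, 0] - 0.5 * latent[:, 3] + rng.normal(0.0, 0.2, 100)

      # Steps 1-2: correlation-based network distance, hierarchical clustering into modules
      dist = 1.0 - np.abs(np.corrcoef(X, rowvar=False))
      modules = fcluster(linkage(squareform(dist, checks=False), method="average"),
                         t=5, criterion="maxclust")

      # Step 3: summarize each module and fit a penalized regression on the summaries
      summaries = np.column_stack([X[:, modules == m].mean(axis=1) for m in np.unique(modules)])
      model = Ridge(alpha=1.0).fit(summaries[:70], y[:70])
      print("hold-out R^2:", round(model.score(summaries[70:], y[70:]), 3))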

  20. Reliability Prediction Analysis: Airborne System Results and Best Practices

    NASA Astrophysics Data System (ADS)

    Silva, Nuno; Lopes, Rui

    2013-09-01

    This article presents the results of several reliability prediction analyses for aerospace components, performed with both methodologies, the 217F and the 217Plus. Supporting and complementary activities are described, as are the differences in the results and applications of the two methodologies, which are summarized in a set of lessons learned that are very useful for RAMS and safety prediction practitioners. The effort required for these activities is also an important point of discussion, as are the end results and their interpretation/impact on the system design. The article concludes by positioning these activities and methodologies in an overall certification process for space and aeronautics equipment/components and by highlighting their advantages. Some good practices are also summarized and some reuse rules laid down.
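
    The handbook-style failure-rate bookkeeping behind such predictions can be illustrated with a minimal parts-count sketch; the part names, base failure rates and adjustment factors below are invented for illustration and are not taken from 217F or 217Plus.

```python
# Minimal parts-count sketch: the system failure rate is the sum of part failure
# rates, each scaled by environment/quality factors; MTBF is its reciprocal.
parts = [
    # (name, base failure rate [per 1e6 h], pi_E, pi_Q) -- illustrative values only
    ("microcontroller", 0.045, 4.0, 1.0),
    ("dc_dc_converter", 0.120, 4.0, 1.5),
    ("connector",       0.010, 6.0, 1.0),
]
lam_system = sum(lam_b * pi_e * pi_q for _, lam_b, pi_e, pi_q in parts)  # per 1e6 h
mtbf_hours = 1e6 / lam_system
print(f"system failure rate: {lam_system:.3f}/1e6 h, MTBF about {mtbf_hours:,.0f} h")
```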

  1. Development of a Practical Methodology for Elastic-Plastic and Fully Plastic Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    McClung, R. C.; Chell, G. G.; Lee, Y. -D.; Russell, D. A.; Orient, G. E.

    1999-01-01

    A practical engineering methodology has been developed to analyze and predict fatigue crack growth rates under elastic-plastic and fully plastic conditions. The methodology employs the closure-corrected effective range of the J-integral, ΔJ_eff, as the governing parameter. The methodology contains original and literature J and ΔJ solutions for specific geometries, along with general methods for estimating J for other geometries and other loading conditions, including combined mechanical loading and combined primary and secondary loading. The methodology also contains specific practical algorithms that translate a J solution into a prediction of fatigue crack growth rate or life, including methods for determining crack opening levels, crack instability conditions, and material properties. A critical core subset of the J solutions and the practical algorithms has been implemented into independent elastic-plastic NASGRO modules. All components of the entire methodology, including the NASGRO modules, have been verified through analysis and experiment, and limits of applicability have been identified.
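
    To make the role of ΔJ_eff concrete, the sketch below integrates a Paris-type growth law da/dN = C (ΔJ_eff)^m cycle by cycle; the material constants, the elastic ΔJ estimate and the geometry are placeholders, not the NASGRO modules' J solutions.

```python
# Cycle-by-cycle crack growth driven by a Delta J_eff estimate (all values hypothetical).
import math

C, m = 1.0e-6, 1.5            # Paris-type constants (mm/cycle, Delta J in kJ/m^2)
a, a_crit = 1.0, 25.0         # initial and critical crack lengths [mm]
delta_sigma, E = 200.0, 70e3  # applied stress range [MPa], elastic modulus [MPa]

def delta_J_eff(a_mm):
    """Placeholder elastic estimate: Delta J ~ (Delta K)^2 / E with Delta K ~ ds*sqrt(pi*a)."""
    dK = delta_sigma * math.sqrt(math.pi * a_mm / 1000.0)   # MPa*sqrt(m)
    return dK ** 2 / E * 1000.0                             # kJ/m^2

cycles = 0
while a < a_crit and cycles < 10_000_000:
    a += C * delta_J_eff(a) ** m    # da/dN = C * (Delta J_eff)^m, applied one cycle at a time
    cycles += 1
print(f"predicted life: {cycles} cycles (final crack length {a:.1f} mm)")
```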

  2. Development of a Practical Methodology for Elastic-Plastic and Fully Plastic Fatigue Crack Growth

    NASA Technical Reports Server (NTRS)

    McClung, R. C.; Chell, G. G.; Lee, Y.-D.; Russell, D. A.; Orient, G. E.

    1999-01-01

    A practical engineering methodology has been developed to analyze and predict fatigue crack growth rates under elastic-plastic and fully plastic conditions. The methodology employs the closure-corrected effective range of the J-integral, ΔJ_eff, as the governing parameter. The methodology contains original and literature J and ΔJ solutions for specific geometries, along with general methods for estimating J for other geometries and other loading conditions, including combined mechanical loading and combined primary and secondary loading. The methodology also contains specific practical algorithms that translate a J solution into a prediction of fatigue crack growth rate or life, including methods for determining crack opening levels, crack instability conditions, and material properties. A critical core subset of the J solutions and the practical algorithms has been implemented into independent elastic-plastic NASGRO modules. All components of the entire methodology, including the NASGRO modules, have been verified through analysis and experiment, and limits of applicability have been identified.

  3. Mothers' Attachment Status as Determined by the Adult Attachment Interview Predicts Their 6-Year-Olds' Reunion Responses: A Study Conducted in Japan

    ERIC Educational Resources Information Center

    Behrens, Kazuko Y.; Hesse, Erik; Main, Mary

    2007-01-01

    Following a 1986 study reporting a predominance of ambivalent attachment among insecure Sapporo infants, the generalizability of attachment theory and methodologies to Japanese samples has been questioned. In this 2nd study of Sapporo mother-child dyads (N = 43), the authors examined attachment distributions for both (a) child, based on M. Main…

  4. Monitoring the effects of extreme climate disturbances on forest health in the northeast U.S.

    Treesearch

    Allan N.D. Auclair; Warren E. Heilman; Peter Busalacchi

    2002-01-01

    No methodology has been developed to date to predict when a forest population is at risk to specific climate and air pollution stressors. Yet, this information is important to natural resource managers who need frequent, updated assessments of forest health upon which to base management decisions and respond to public concerns on forest health. The USDA Forest Service...

  5. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
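
    A minimal one-dimensional illustration of the source-term idea (an assumed toy problem, not the paper's two-dimensional Navier-Stokes cases) is to fit the discrete solution with a smoothing spline and evaluate the governing operator on that differentiable surrogate:

```python
# Fit the numerical solution with a smooth curve, then use the operator residual
# as an estimate of the error source term for an ETE-style analysis.
import numpy as np
from scipy.interpolate import UnivariateSpline

# "Numerical solution" of u'' = f on a coarse grid (sampled with small perturbations)
x = np.linspace(0.0, 1.0, 21)
u_num = np.sin(np.pi * x) + 1e-3 * np.random.default_rng(1).normal(size=x.size)

spline = UnivariateSpline(x, u_num, k=4, s=1e-5)   # smooth, differentiable surrogate
f_exact = -np.pi**2 * np.sin(np.pi * x)            # known forcing for this toy case

residual = spline.derivative(2)(x) - f_exact       # ODE residual as source-term estimate
print("max |residual| used as ETE source term:", np.abs(residual).max())
```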

  6. Fuzzy Integration of Support Vector Regression Models for Anticipatory Control of Complex Energy Systems

    DOE PAGES

    Alamaniotis, Miltiadis; Agarwal, Vivek

    2014-04-01

    Anticipatory control systems are a class of systems whose decisions are based on predictions of the future state of the system under monitoring. Anticipation denotes intelligence and is an inherent property of humans, who make decisions by projecting into the future. Likewise, artificially intelligent systems equipped with predictive functions may be used to anticipate future states of complex systems and therefore facilitate automated control decisions. Anticipatory control of complex energy systems is paramount to their normal and safe operation. In this paper, a new intelligent methodology integrating fuzzy inference with support vector regression is introduced. Our proposed methodology implements an anticipatory system aimed at controlling energy systems in a robust way. Initially, a set of support vector regressors is adopted for making predictions over critical system parameters. The predicted values are then fed into a two-stage fuzzy inference system that makes decisions regarding the state of the energy system. The inference system integrates the individual predictions into a single one at its first stage, and outputs a decision together with a certainty factor computed at its second stage. The certainty factor is an index of the significance of the decision. The proposed anticipatory control system is tested on a real-world set of data obtained from a complex energy system, describing the degradation of a turbine. Results exhibit the robustness of the proposed system in controlling complex energy systems.
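
    The flavour of the approach can be sketched as follows; the SVR ensemble, the membership-style weighting and the threshold below are toy stand-ins and do not reproduce the paper's two-stage fuzzy inference system.

```python
# Several SVRs forecast a critical parameter; a crude fuzzy-style aggregation
# merges them and emits a control decision with a certainty factor.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200).reshape(-1, 1)
signal = np.exp(0.15 * t).ravel() + rng.normal(scale=0.2, size=t.size)  # degradation-like trend

ensemble = [SVR(kernel="rbf", C=c).fit(t, signal) for c in (1.0, 10.0, 100.0)]
t_next = np.array([[10.5]])
preds = np.array([m.predict(t_next)[0] for m in ensemble])

# Weight each prediction by its agreement with the ensemble median
spread = np.abs(preds - np.median(preds))
membership = 1.0 / (1.0 + spread)
fused = np.sum(membership * preds) / membership.sum()
certainty = membership.mean()                 # crude certainty factor in (0, 1]

threshold = 5.0                               # hypothetical safe operating limit
decision = "intervene" if fused > threshold else "continue"
print(f"fused prediction={fused:.2f}, certainty={certainty:.2f}, decision={decision}")
```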

  7. Adaptive inferential sensors based on evolving fuzzy models.

    PubMed

    Angelov, Plamen; Kordon, Arthur

    2010-04-01

    A new technique for the design and use of inferential sensors in the process industry is proposed in this paper, based on the recently introduced concept of evolving fuzzy models (EFMs). EFMs address the challenge that the modern process industry faces today, namely, to develop adaptive and self-calibrating online inferential sensors that reduce maintenance costs while keeping high precision and interpretability/transparency. The proposed methodology makes it possible for inferential sensors to recalibrate automatically, which significantly reduces the life-cycle effort of their maintenance. This is achieved by the adaptive and flexible open-structure EFM used. The novelty of this paper lies in the following: (1) the overall concept of inferential sensors with an evolving and self-developing structure learned from data streams; (2) the new methodology for online automatic selection of the input variables that are most relevant for the prediction; (3) the technique to automatically detect a shift in the data pattern using the age of the clusters (and fuzzy rules); (4) the online standardization technique used by the learning procedure of the evolving model; and (5) the application of this innovative approach to several real-life industrial processes from the chemical industry (evolving inferential sensors, namely, eSensors, were used for predicting the chemical properties of different products in The Dow Chemical Company, Freeport, TX). It should be noted, however, that the methodology and conclusions of this paper are valid for the broader area of chemical and process industries in general. The results demonstrate that interpretable inferential sensors with a simple structure, which predict various process variables of interest, can be designed automatically from the data stream in real time. The proposed approach can be used as a basis for the development of a new generation of adaptive and evolving inferential sensors that can address the challenges of the modern advanced process industry.

  8. Modeling Long-term Creep Performance for Welded Nickel-base Superalloy Structures for Power Generation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Chen; Gupta, Vipul; Huang, Shenyan

    The goal of this project is to model long-term creep performance for nickel-base superalloy weldments in high temperature power generation systems. The project uses physics-based modeling methodologies and algorithms for predicting alloy properties in heterogeneous material structures. The modeling methodology will be demonstrated on a gas turbine combustor liner weldment of Haynes 282 precipitate-strengthened nickel-base superalloy. The major developments are: (1) microstructure-property relationships under creep conditions and microstructure characterization, (2) modeling inhomogeneous microstructure in the superalloy weld, (3) modeling mesoscale plastic deformation in the superalloy weld, and (4) a constitutive creep model that accounts for weld and base metal microstructure and their long-term evolution. The developed modeling technology aims to provide a more efficient and accurate assessment of a material's long-term performance compared with current testing and extrapolation methods. This modeling technology will also accelerate development and qualification of new materials in advanced power generation systems. This document is a final technical report for the project, covering efforts conducted from October 2014 to December 2016.

  9. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model derived from first principles. Uncertainty cannot be avoided in prediction; therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
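
    A generic particle-filter prognostics loop of this kind can be sketched as below; the scalar wear model, noise levels and failure threshold are invented for illustration and are not the paper's physics-based pneumatic valve model.

```python
# Estimate a hidden damage state from noisy measurements with a particle filter,
# then propagate particles forward to predict remaining useful life (RUL).
import numpy as np

rng = np.random.default_rng(3)
n_particles, wear_rate, threshold = 500, 0.01, 1.0

particles = np.abs(rng.normal(0.0, 0.02, n_particles))      # initial damage hypotheses
rates = np.abs(rng.normal(wear_rate, 0.003, n_particles))    # per-particle wear rates

true_damage = 0.0
for k in range(50):                                          # filtering on 50 measurements
    true_damage += wear_rate
    z = true_damage + rng.normal(0.0, 0.01)                  # noisy measurement
    particles = particles + rates + rng.normal(0.0, 0.002, n_particles)  # propagate
    weights = np.exp(-0.5 * ((z - particles) / 0.02) ** 2) + 1e-12
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)    # multinomial resampling
    particles, rates = particles[idx], rates[idx]

# Prediction: run each particle forward until it crosses the failure threshold
eol = np.zeros(n_particles)
for i in range(n_particles):
    d, steps = particles[i], 0
    while d < threshold and steps < 10_000:
        d += rates[i]
        steps += 1
    eol[i] = steps
print(f"median RUL: {np.median(eol):.0f} steps, "
      f"90% interval: [{np.percentile(eol, 5):.0f}, {np.percentile(eol, 95):.0f}]")
```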

  10. New systematic methodology for incorporating dynamic heat transfer modelling in multi-phase biochemical reactors.

    PubMed

    Fernández-Arévalo, T; Lizarralde, I; Grau, P; Ayesa, E

    2014-09-01

    This paper presents a new modelling methodology for dynamically predicting the heat produced or consumed in the transformations of any biological reactor using Hess's law. Starting from a complete description of model components stoichiometry and formation enthalpies, the proposed modelling methodology has integrated successfully the simultaneous calculation of both the conventional mass balances and the enthalpy change of reaction in an expandable multi-phase matrix structure, which facilitates a detailed prediction of the main heat fluxes in the biochemical reactors. The methodology has been implemented in a plant-wide modelling methodology in order to facilitate the dynamic description of mass and heat throughout the plant. After validation with literature data, as illustrative examples of the capability of the methodology, two case studies have been described. In the first one, a predenitrification-nitrification dynamic process has been analysed, with the aim of demonstrating the easy integration of the methodology in any system. In the second case study, the simulation of a thermal model for an ATAD has shown the potential of the proposed methodology for analysing the effect of ventilation and influent characterization. Copyright © 2014 Elsevier Ltd. All rights reserved.
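
    The Hess's-law bookkeeping at the core of the approach can be sketched in a few lines; the stoichiometry and formation enthalpies below are illustrative placeholders, not the paper's component matrix.

```python
# Enthalpy change of one transformation as the stoichiometry-weighted sum of
# component formation enthalpies (Hess's law), with hypothetical values.
formation_enthalpy = {        # kJ/mol, illustrative values only
    "substrate": -1500.0,
    "O2": 0.0,
    "CO2": -393.5,
    "H2O": -285.8,
    "biomass": -900.0,
}
# {component: stoichiometric coefficient}; negative = consumed, positive = produced
reaction = {"substrate": -1.0, "O2": -3.0, "CO2": 2.0, "H2O": 3.0, "biomass": 0.5}

dH = sum(nu * formation_enthalpy[c] for c, nu in reaction.items())  # kJ per mol substrate
print(f"enthalpy change of reaction: {dH:.1f} kJ (negative = heat released)")
```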

  11. Temperature - Emissivity Separation Assessment in a Sub-Urban Scenario

    NASA Astrophysics Data System (ADS)

    Moscadelli, M.; Diani, M.; Corsini, G.

    2017-10-01

    In this paper, a methodology for evaluating the effectiveness of different temperature-emissivity separation (TES) strategies is presented. The methodology takes into account the specific material of interest in the monitored scenario, the sensor characteristics, and errors in the atmospheric compensation step. It is intended to predict and analyse algorithm performance while planning a remote sensing mission aimed at discovering specific materials of interest in the monitored scenario. As a case study, the proposed methodology is applied to a real airborne data set of a suburban scenario. To solve the TES problem, three state-of-the-art algorithms and a recently proposed one are investigated: the Temperature-Emissivity Separation '98 (TES-98) algorithm, the Stepwise Refining TES (SRTES) algorithm, the Linear Piecewise TES (LTES) algorithm, and the Optimized Smoothing TES (OSTES) algorithm. Finally, the accuracies obtained with real data and those predicted by means of the proposed methodology are compared and discussed.

  12. Gyrokinetic modelling of the quasilinear particle flux for plasmas with neutral-beam fuelling

    NASA Astrophysics Data System (ADS)

    Narita, E.; Honda, M.; Nakata, M.; Yoshida, M.; Takenaga, H.; Hayashi, N.

    2018-02-01

    A quasilinear particle flux is modelled based on gyrokinetic calculations. The particle flux is estimated by determining factors, namely, the coefficients of the off-diagonal terms and a particle diffusivity. In this paper, the methodology to estimate these factors is presented using a subset of JT-60U plasmas. First, the coefficients of the off-diagonal terms are estimated by linear gyrokinetic calculations. Next, to obtain the particle diffusivity, a semi-empirical approach is taken. Most experimental analyses of particle transport have assumed that turbulent particle fluxes are zero in the core region. On the other hand, even in the stationary state, the plasmas in question have a finite turbulent particle flux due to neutral-beam fuelling. By combining estimates of the experimental turbulent particle flux and the coefficients of the off-diagonal terms calculated earlier, the particle diffusivity is obtained. The particle diffusivity should reflect the saturation amplitude of instabilities. The particle diffusivity is investigated in terms of the effects of the linear instability and the linear zonal flow response, and it is found that a formula including these effects roughly reproduces the particle diffusivity. The developed framework for prediction of the particle flux is flexible enough to accommodate terms neglected in the current model. The methodology to estimate the quasilinear particle flux requires such a low computational cost that a database consisting of the resultant coefficients of off-diagonal terms and particle diffusivities can be constructed to train a neural network. The development of this methodology is the first step towards a neural-network-based particle transport model for fast prediction of the particle flux.

  13. A Model-Free Machine Learning Method for Risk Classification and Survival Probability Prediction.

    PubMed

    Geng, Yuan; Lu, Wenbin; Zhang, Hao Helen

    2014-01-01

    Risk classification and survival probability prediction are two major goals in survival data analysis, since they play an important role in patients' risk stratification, long-term diagnosis, and treatment selection. In this article, we propose a new model-free machine learning framework for risk classification and survival probability prediction based on weighted support vector machines. The new procedure does not require any specific parametric or semiparametric model assumption on the data, and is therefore capable of capturing nonlinear covariate effects. We use numerous simulation examples to demonstrate the finite sample performance of the proposed method under various settings. Applications to glioma tumor data and breast cancer gene expression survival data illustrate the new methodology in real data analysis.
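
    A stand-in for the weighted support vector machine idea is sketched below using a standard SVM with per-sample weights; the data and the weighting scheme are placeholders (in a real survival setting the weights would typically encode censoring information), not the authors' exact formulation.

```python
# Weighted large-margin risk classification with a nonlinear kernel (toy data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 10))
risk_score = X[:, 0] ** 2 + np.sin(X[:, 1])            # nonlinear covariate effect
y = (risk_score > np.median(risk_score)).astype(int)   # high-risk vs low-risk class
weights = rng.uniform(0.5, 1.5, size=y.size)           # placeholder per-sample weights

clf = SVC(kernel="rbf", probability=True)
clf.fit(X, y, sample_weight=weights)                   # weighted SVM fit
print("predicted high-risk probability:", clf.predict_proba(X[:1])[0, 1].round(3))
```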

  14. Spam comments prediction using stacking with ensemble learning

    NASA Astrophysics Data System (ADS)

    Mehmood, Arif; On, Byung-Won; Lee, Ingyu; Ashraf, Imran; Choi, Gyu Sang

    2018-01-01

    Deceptive comments about products or services mislead people during decision making. Current methodologies for predicting deceptive comments focus on feature design with a single training model. Indigenous (hand-crafted) features can capture some linguistic phenomena but are hard-pressed to reveal the latent semantic meaning of the comments. We propose a prediction model built on general document features using stacking with ensemble learning. Term Frequency/Inverse Document Frequency (TF/IDF) features are the inputs to a stacking of Random Forest and Gradient Boosted Trees, and the outputs of these base learners are combined by a decision tree that performs the final training of the model. The results show that our approach achieves an accuracy of 92.19%, which outperforms the state-of-the-art method.
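
    The described stacking setup maps closely onto standard tooling; a compact sketch with toy comments (a real labelled spam corpus and tuned hyperparameters are assumed in practice) is:

```python
# TF-IDF features feed Random Forest and Gradient Boosted Trees base learners;
# a decision tree meta-learner combines their outputs.
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

comments = ["great product, works as described",
            "click here to win a free prize now",
            "honest review after two weeks of use",
            "cheap pills visit our site today",
            "battery life matches the specification",
            "limited offer, send your details to claim"]
labels = [0, 1, 0, 1, 0, 1]   # 1 = deceptive/spam, 0 = genuine (toy labels)

stack = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=100)),
                    ("gbt", GradientBoostingClassifier())],
        final_estimator=DecisionTreeClassifier(max_depth=3),
        cv=2,
    ),
)
stack.fit(comments, labels)
print(stack.predict(["win a free prize, visit our site"]))
```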

  15. A Progressive Damage Methodology for Residual Strength Predictions of Center-Crack Tension Composite Panels

    NASA Technical Reports Server (NTRS)

    Coats, Timothy William

    1996-01-01

    An investigation of translaminate fracture and a progressive damage methodology was conducted to evaluate and develop a residual strength prediction capability for laminated composites with through-penetration notches. This is relevant to the damage tolerance of an aircraft fuselage that might suffer an in-flight accident such as an uncontained engine failure. An experimental characterization of several composite material systems revealed an R-curve type of behavior. Fractographic examinations led to the postulate that this crack growth resistance could be due to fiber bridging, defined here as fractured fibers of one ply bridged by intact fibers of an adjacent ply. The progressive damage methodology is currently capable of predicting the initiation and growth of matrix cracks and fiber fracture. Using two different fiber failure criteria, residual strength was predicted for different panel widths and notch lengths. A ply-discount fiber failure criterion yielded extremely conservative results, while an elastic-perfectly plastic fiber failure criterion showed that the fiber bridging concept is valid for predicting residual strength for tension-dominated failure loads. Furthermore, the R-curves predicted by the model using the elastic-perfectly plastic fiber criterion compared very well with the experimental R-curves.

  16. Structure and thermal properties of salicylate-based-protic ionic liquids as new heat storage media. COSMO-RS structure characterization and modeling of heat capacities.

    PubMed

    Jacquemin, Johan; Feder-Kubis, Joanna; Zorębski, Michał; Grzybowska, Katarzyna; Chorążewski, Mirosław; Hensel-Bielówka, Stella; Zorębski, Edward; Paluch, Marian; Dzida, Marzena

    2014-02-28

    In this work, we present a study of the thermal properties, namely the melting, cold crystallization, and glass transition temperatures as well as the heat capacities from 293.15 K to 323.15 K, of nine in-house synthesized protic ionic liquids based on 3-(alkoxymethyl)-1H-imidazol-3-ium salicylate ([H-Im-C1OC(n)][Sal]) with n = 3-11. The 3D structures, surface charge distributions, and COSMO volumes of all investigated ions are obtained by combining DFT calculations with the COSMO-RS methodology. The heat capacity data sets as a function of temperature of the 3-(alkoxymethyl)-1H-imidazol-3-ium salicylates are then predicted using the methodology originally proposed for ionic liquids by Ge et al. The 3-(alkoxymethyl)-1H-imidazol-3-ium salicylate based ionic liquids present specific heat capacities that are in many cases higher than those of other ionic liquids, which makes them suitable as heat storage media and for heat transfer processes. It was found experimentally that the heat capacity increases linearly with increasing alkyl chain length of the alkoxymethyl group, as expected and as predicted using the Ge et al. method with an overall relative absolute deviation close to 3.2% for temperatures up to 323.15 K.

  17. Utilizing sensory prediction errors for movement intention decoding: A new methodology

    PubMed Central

    Nakamura, Keigo; Ando, Hideyuki

    2018-01-01

    We propose a new methodology for decoding movement intentions of humans. This methodology is motivated by the well-documented ability of the brain to predict sensory outcomes of self-generated and imagined actions using so-called forward models. We propose to subliminally stimulate the sensory modality corresponding to a user’s intended movement, and decode a user’s movement intention from his electroencephalography (EEG), by decoding for prediction errors—whether the sensory prediction corresponding to a user’s intended movement matches the subliminal sensory stimulation we induce. We tested our proposal in a binary wheelchair turning task in which users thought of turning their wheelchair either left or right. We stimulated their vestibular system subliminally, toward either the left or the right direction, using a galvanic vestibular stimulator and show that the decoding for prediction errors from the EEG can radically improve movement intention decoding performance. We observed an 87.2% median single-trial decoding accuracy across tested participants, with zero user training, within 96 ms of the stimulation, and with no additional cognitive load on the users because the stimulation was subliminal. PMID:29750195

  18. Predicting Great Lakes fish yields: tools and constraints

    USGS Publications Warehouse

    Lewis, C.A.; Schupp, D.H.; Taylor, W.W.; Collins, J.J.; Hatch, Richard W.

    1987-01-01

    Prediction of yield is a critical component of fisheries management. The development of sound yield prediction methodology and the application of the results of yield prediction are central to the evolution of strategies to achieve stated goals for Great Lakes fisheries and to the measurement of progress toward those goals. Despite general availability of species yield models, yield prediction for many Great Lakes fisheries has been poor due to the instability of the fish communities and the inadequacy of available data. A host of biological, institutional, and societal factors constrain both the development of sound predictions and their application to management. Improved predictive capability requires increased stability of Great Lakes fisheries through rehabilitation of well-integrated communities, improvement of data collection, data standardization and information-sharing mechanisms, and further development of the methodology for yield prediction. Most important is the creation of a better-informed public that will in turn establish the political will to do what is required.

  19. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    PubMed

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks.

  20. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection

    PubMed Central

    Yang, Su; Shi, Shixiong; Hu, Xiaobing; Wang, Minjie

    2015-01-01

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context to adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to the data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method has good scalability and is applicable to large-scale networks. PMID:26496370
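
    The sparse-representation-style selection of a prediction task's spatial context can be sketched with an L1-penalized regression on lagged readings from all sensors; the synthetic flows and sensor indices below are illustrative, not the paper's city-scale data.

```python
# For one target sensor, lag-1 readings from all sensors are candidate inputs;
# an L1 penalty reveals which few sensors actually form the spatial context.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
n_t, n_sensors, lag = 500, 30, 1                      # finding (1): lag-1 inputs suffice
flows = rng.normal(size=(n_t, n_sensors))
target = 12                                           # sensor undergoing prediction
# Make the target depend on a handful of (possibly distant) sensors at lag 1
flows[1:, target] += 0.8 * flows[:-1, 3] - 0.5 * flows[:-1, 27]

X = flows[:-lag, :]                                   # all sensors at time t-1
y = flows[lag:, target]                               # target sensor at time t
model = LassoCV(cv=5).fit(X, y)
print("sensors selected as spatial context:", np.flatnonzero(model.coef_))
```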

  1. Oligomerization of G protein-coupled receptors: computational methods.

    PubMed

    Selent, J; Kaczor, A A

    2011-01-01

    Recent research has unveiled the complexity of the mechanisms involved in G protein-coupled receptor (GPCR) functioning, in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role in predicting accurate GPCR complexes. This review outlines computational approaches, focusing on sequence- and structure-based methodologies, and discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but did not always yield consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method, especially when guided by experimental constraints. Some disadvantages, such as limited receptor flexibility and non-consideration of the membrane environment, have to be taken into account. Molecular dynamics simulation can overcome these drawbacks, giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches, combined with experimental efforts, may help to understand the role of dimeric/oligomeric GPCR complexes in fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling the molecular determinants of dimerization/oligomerization can have important implications for drug discovery.

  2. Modeling and life prediction methodology for Titanium Matrix Composites subjected to mission profiles

    NASA Technical Reports Server (NTRS)

    Mirdamadi, M.; Johnson, W. S.

    1994-01-01

    Titanium matrix composites (TMC) are being evaluated as structural materials for elevated temperature applications in future generation hypersonic vehicles. In such applications, TMC components are subjected to complex thermomechanical loading profiles at various elevated temperatures. Therefore, thermomechanical fatigue (TMF) testing, using a simulated mission profile, is essential for evaluation and development of life prediction methodologies. The objective of the research presented in this paper was to evaluate the TMF response of the (0/90)2s SCS-6/Timetal-21S subjected to a generic hypersonic flight profile and its portions with a temperature ranging from -130 C to 816 C. It was found that the composite modulus, prior to rapid degradation, had consistent values for all the profiles tested. A micromechanics based analysis was used to predict the stress-strain response of the laminate and of the constituents in each ply during thermomechanical loading conditions by using only constituent properties as input. The fiber was modeled as elastic with transverse orthotropic and temperature dependent properties. The matrix was modeled using a thermoviscoplastic constitutive relation. In the analysis, the composite modulus degradation was assumed to result from matrix cracking and was modeled by reducing the matrix modulus. Fatigue lives of the composite subjected to the complex generic hypersonic flight profile were well correlated using the predicted stress in 0 degree fibers.

  3. Phylogenomics of plant genomes: a methodology for genome-wide searches for orthologs in plants

    PubMed Central

    Conte, Matthieu G; Gaillard, Sylvain; Droc, Gaetan; Perin, Christophe

    2008-01-01

    Background Gene ortholog identification is now a major objective for mining the increasing amount of sequence data generated by complete or partial genome sequencing projects. Comparative and functional genomics urgently need a method for ortholog detection to reduce gene function inference and to aid in the identification of conserved or divergent genetic pathways between several species. As gene functions change during evolution, reconstructing the evolutionary history of genes should be a more accurate way to differentiate orthologs from paralogs. Phylogenomics takes into account phylogenetic information from high-throughput genome annotation and is the most straightforward way to infer orthologs. However, procedures for automatic detection of orthologs are still scarce and suffer from several limitations. Results We developed a procedure for ortholog prediction between Oryza sativa and Arabidopsis thaliana. Firstly, we established an efficient method to cluster A. thaliana and O. sativa full proteomes into gene families. Then, we developed an optimized phylogenomics pipeline for ortholog inference. We validated the full procedure using test sets of orthologs and paralogs to demonstrate that our method outperforms pairwise methods for ortholog predictions. Conclusion Our procedure achieved a high level of accuracy in predicting ortholog and paralog relationships. Phylogenomic predictions for all validated gene families in both species were easily achieved and we can conclude that our methodology outperforms similarly based methods. PMID:18426584

  4. Long-term ensemble forecast of snowmelt inflow into the Cheboksary Reservoir under two different weather scenarios

    NASA Astrophysics Data System (ADS)

    Gelfan, Alexander; Moreydo, Vsevolod; Motovilov, Yury; Solomatine, Dimitri P.

    2018-04-01

    A long-term forecasting ensemble methodology, applied to water inflows into the Cheboksary Reservoir (Russia), is presented. The methodology is based on a version of the semi-distributed hydrological model ECOMAG (ECOlogical Model for Applied Geophysics) that allows for the calculation of an ensemble of inflow hydrographs using two different sets of weather ensembles for the lead time period: observed weather data, constructed on the basis of the Ensemble Streamflow Prediction methodology (ESP-based forecast), and synthetic weather data, simulated by a multi-site weather generator (WG-based forecast). We have studied the following: (1) whether there is any advantage of the developed ensemble forecasts in comparison with the currently issued operational forecasts of water inflow into the Cheboksary Reservoir, and (2) whether there is any noticeable improvement in probabilistic forecasts when using the WG-simulated ensemble compared to the ESP-based ensemble. We have found that for a 35-year period beginning from the reservoir filling in 1982, both continuous and binary model-based ensemble forecasts (issued in the deterministic form) outperform the operational forecasts of the April-June inflow volume actually used and, additionally, provide acceptable forecasts of additional water regime characteristics besides the inflow volume. We have also demonstrated that the model performance measures (in the verification period) obtained from the WG-based probabilistic forecasts, which are based on a large number of possible weather scenarios, appeared to be more statistically reliable than the corresponding measures calculated from the ESP-based forecasts based on the observed weather scenarios.

  5. New insights into soil temperature time series modeling: linear or nonlinear?

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Moeeni, Hamid; Ebtehaj, Isa; Zeynoddin, Mohammad; Mahoammadian, Abdolmajid; Gharabaghi, Bahram

    2018-03-01

    Soil temperature (ST) is an important dynamic parameter whose prediction is a major research topic in various fields, including agriculture, because ST has a critical role in hydrological processes at the soil surface. In this study, a new linear methodology based on stochastic methods is proposed for modeling daily soil temperature (DST). With this approach, the ST series components are determined to carry out modeling and spectral analysis. The results of this process are compared with two linear methods based on seasonal standardization and seasonal differencing in terms of four DST series. The series used in this study were measured at two stations, Champaign and Springfield, at depths of 10 and 20 cm. The results indicate that in all ST series reviewed, the periodic term is the most robust among all components. According to a comparison of the three methods applied to analyze the various series components, it appears that spectral analysis combined with stochastic methods outperformed the seasonal standardization and seasonal differencing methods. In addition to comparing the proposed methodology with linear methods, the ST modeling results were compared with two nonlinear methods in two forms: considering hydrological variables (HV) as input variables, and DST modeling as a time series. In a previous study at the same sites, Kim and Singh (Theor Appl Climatol 118:465-479, 2014) applied the popular Multilayer Perceptron (MLP) neural network and Adaptive Neuro-Fuzzy Inference System (ANFIS) nonlinear methods and considered HV as input variables. The comparison shows that the relative error in estimating DST with the proposed methodology was about 6%, while this value with MLP and ANFIS was over 15%. Moreover, MLP and ANFIS models were employed for DST time series modeling. Because these models performed worse than the proposed methodology, two hybrid models were implemented: the weights and membership functions of MLP and ANFIS (respectively) were optimized with the particle swarm optimization (PSO) algorithm in conjunction with the wavelet transform and the nonlinear methods (Wavelet-MLP & Wavelet-ANFIS). A comparison of the proposed methodology with the individual and hybrid nonlinear models in predicting the DST time series shows that it yields the lowest Akaike Information Criterion (AIC) value, which considers model simplicity and accuracy simultaneously, at different depths and stations. The methodology presented in this study can thus serve as an excellent alternative to the complex nonlinear methods that are normally employed to examine DST.
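
    A minimal sketch of the linear "periodic term plus stochastic residual" idea (one annual harmonic and a low-order ARIMA on synthetic data; not the authors' exact spectral-analysis procedure) is:

```python
# Remove the dominant annual periodic term by harmonic regression, then fit a
# low-order stochastic (ARIMA) model to the residual daily soil temperature.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
days = np.arange(3 * 365)
st = 12 + 10 * np.sin(2 * np.pi * days / 365.25 - 1.8) + rng.normal(0, 1.0, days.size)

H = np.column_stack([np.ones_like(days, dtype=float),
                     np.sin(2 * np.pi * days / 365.25),
                     np.cos(2 * np.pi * days / 365.25)])
coef, *_ = np.linalg.lstsq(H, st, rcond=None)          # periodic term
residual = st - H @ coef

arima = ARIMA(residual, order=(2, 0, 1)).fit()          # stochastic component

future = np.arange(days.size, days.size + 7)
H_future = np.column_stack([np.ones(7),
                            np.sin(2 * np.pi * future / 365.25),
                            np.cos(2 * np.pi * future / 365.25)])
forecast = arima.forecast(steps=7) + H_future @ coef    # one-week-ahead DST
print(np.round(forecast, 2))
```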

  6. Protein secondary structure and stability determined by combining exoproteolysis and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry.

    PubMed

    Villanueva, Josep; Villegas, Virtudes; Querol, Enrique; Avilés, Francesc X; Serrano, Luis

    2002-09-01

    In the post-genomic era, several projects focused on the massive experimental resolution of the three-dimensional structures of all the proteins of different organisms have been initiated. Simultaneously, significant progress has been made in the ab initio prediction of protein three-dimensional structure. One of the keys to the success of such a prediction is the use of local information (i.e. secondary structure). Here we describe a new limited proteolysis methodology, based on the use of unspecific exoproteases coupled with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS), to map quickly secondary structure elements of a protein from both ends, the N- and C-termini. We show that the proteolytic patterns (mass spectra series) obtained can be interpreted in the light of the conformation and local stability of the analyzed proteins, a direct correlation being observed between the predicted and the experimentally derived protein secondary structure. Further, this methodology can be easily applied to check rapidly the folding state of a protein and characterize mutational effects on protein conformation and stability. Moreover, given global stability information, this methodology allows one to locate the protein regions of increased or decreased conformational stability. All of this can be done with a small fraction of the amount of protein required by most of the other methods for conformational analysis. Thus limited exoproteolysis, together with MALDI-TOF MS, can be a useful tool to achieve quickly the elucidation of protein structure and stability. Copyright 2002 John Wiley & Sons, Ltd.

  7. Prediction of Composite Pressure Vessel Failure Location using Fiber Bragg Grating Sensors

    NASA Technical Reports Server (NTRS)

    Kreger, Steven T.; Taylor, F. Tad; Ortyl, Nicholas E.; Grant, Joseph

    2006-01-01

    Ten composite pressure vessels were instrumented with fiber Bragg grating sensors in order to assess the strain levels of the vessels under various loading conditions. This paper and presentation discuss the testing methodology and the test results, compare the test results to the analytical model, and present a possible methodology for predicting the failure location and strain level of composite pressure vessels.

  8. Knowledge-based computational intelligence development for predicting protein secondary structures from sequences.

    PubMed

    Shen, Hong-Bin; Yi, Dong-Liang; Yao, Li-Xiu; Yang, Jie; Chou, Kuo-Chen

    2008-10-01

    In the postgenomic age, with the avalanche of protein sequences generated and relatively slow progress in determining their structures by experiments, it is important to develop automated methods to predict the structure of a protein from its sequence. The membrane proteins are a special group in the protein family that accounts for approximately 30% of all proteins; however, solved membrane protein structures only represent less than 1% of known protein structures to date. Although a great success has been achieved for developing computational intelligence techniques to predict secondary structures in both globular and membrane proteins, there is still much challenging work in this regard. In this review article, we firstly summarize the recent progress of automation methodology development in predicting protein secondary structures, especially in membrane proteins; we will then give some future directions in this research field.

  9. Real-Time Prognostics of a Rotary Valve Actuator

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew

    2015-01-01

    Valves are used in many domains and often have system-critical functions. As such, it is important to monitor the health of valves and their actuators and to predict remaining useful life. In this work, we develop a model-based prognostics approach for a rotary valve actuator. Due to limited observability of the component with multiple failure modes, a lumped damage approach is proposed for estimation and prediction of damage progression. In order to support the goal of real-time prognostics, an approach to prediction is developed that does not require online simulation to compute remaining life; rather, a function mapping the damage state to remaining useful life is found offline so that predictions can be made quickly online with a single function evaluation. Simulation results demonstrate the overall methodology, validating the lumped damage approach and demonstrating real-time prognostics.
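
    The offline-mapping idea can be sketched with a toy scalar damage model (the wear rate, threshold and grid below are hypothetical, not the rotary valve actuator model): remaining life is tabulated offline over a grid of damage states and answered online by a single interpolation.

```python
# Offline: simulate damage growth to failure over a grid of damage states and
# store the damage -> remaining-life table. Online: one interpolation call.
import numpy as np

threshold, wear_rate, dt = 1.0, 0.004, 1.0     # hypothetical degradation parameters

def simulate_rul(d0):
    """Offline simulation: steps until the damage state reaches the threshold."""
    d, steps = d0, 0
    while d < threshold:
        d += wear_rate * dt
        steps += 1
    return steps

damage_grid = np.linspace(0.0, threshold, 101)
rul_table = np.array([simulate_rul(d) for d in damage_grid])

def predict_rul(damage_estimate):
    """Online prediction with a single function evaluation."""
    return float(np.interp(damage_estimate, damage_grid, rul_table))

print(predict_rul(0.37))   # e.g. current damage estimate from the state estimator
```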

  10. Orbital and maxillofacial computer aided surgery: patient-specific finite element models to predict surgical outcomes.

    PubMed

    Luboz, Vincent; Chabanas, Matthieu; Swider, Pascal; Payan, Yohan

    2005-08-01

    This paper addresses an important issue raised for the clinical relevance of Computer-Assisted Surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. From this perspective, a method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform related to the FE reference element and the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery, and more precisely, to the FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven FE patient-specific models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computer tomography scan. Then, our methodology is applied to computer-assisted orbital surgery. It is, therefore, evaluated for the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of the surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model. This law links the size of the osteotomy (i.e. the surgical gesture) and the backward displacement of the eyeball (the consequence of the surgical gesture).

  11. A prototype methodology combining surface-enhanced laser desorption/ionization protein chip technology and artificial neural network algorithms to predict the chemoresponsiveness of breast cancer cell lines exposed to Paclitaxel and Doxorubicin under in vitro conditions.

    PubMed

    Mian, Shahid; Ball, Graham; Hornbuckle, Jo; Holding, Finn; Carmichael, James; Ellis, Ian; Ali, Selman; Li, Geng; McArdle, Stephanie; Creaser, Colin; Rees, Robert

    2003-09-01

    An ability to predict the likelihood of cellular response towards particular chemotherapeutic agents based upon protein expression patterns could facilitate the identification of biological molecules with previously undefined roles in the process of chemoresistance/chemosensitivity, and if robust enough these patterns might also be exploited towards the development of novel predictive assays. To ascertain whether proteomic-based molecular profiling in conjunction with artificial neural network (ANN) algorithms could be applied towards the specific recognition of phenotypic patterns between either control or drug-treated and chemosensitive or chemoresistant cellular populations, a combined approach involving matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry, Ciphergen protein chip technology, and ANN algorithms has been applied to specifically identify proteomic 'fingerprints' indicative of treatment regimen for chemosensitive (MCF-7, T47D) and chemoresistant (MCF-7/ADR) breast cancer cell lines following exposure to Doxorubicin or Paclitaxel. The results indicate that proteomic patterns can be identified by ANN algorithms to correctly assign 'class' for treatment regimen (e.g. control/drug treated or chemosensitive/chemoresistant) with a high degree of accuracy using bootstrap statistical validation techniques, and that biomarker ion patterns indicative of response/non-response phenotypes are associated with MCF-7 and MCF-7/ADR cells exposed to Doxorubicin. We have also examined the predictive capability of this approach towards MCF-7 and T47D cells to ascertain whether prediction could be made based upon treatment regimen irrespective of cell lineage. Models were identified that could correctly assign class (control or Paclitaxel treatment) for 35/38 samples of an independent dataset. A similar level of predictive capability was also found (> 92%; n = 28) when proteomic patterns derived from the drug-resistant cell line MCF-7/ADR were compared against those derived from MCF-7 and T47D as a model system of drug-resistant and drug-sensitive phenotypes. This approach might offer a potential methodology for predicting the biological behaviour of cancer cells towards particular chemotherapeutics and, through protein isolation and sequence identification, could result in the identification of biological molecules associated with chemosensitive/chemoresistant tumour phenotypes.

  12. Some methodological aspects of tectonic stress reconstruction based on geological indicators

    NASA Astrophysics Data System (ADS)

    Sim, Lidiya A.

    2012-03-01

    The impact of the initial heterogeneity of rocks on the distribution of slickensides (slip planes), which are used to reconstruct the local stress state, is discussed. The stress state was reconstructed by a graphic variant of the kinematic method. A new type of stress state, the Variation of Stress-State Type (VSST), is introduced. The VSST is characterized by dislocation vectors produced by both uniaxial compression and uniaxial tension in the same rock volume, i.e. the coefficient μσ = 1 - 2R varies from +1 to -1. The reasons why observed slip planes differ from those predicted by the model are discussed. The distribution of slip planes depends on the stress-state type (the μσ coefficient) and on the initial heterogeneity of the rocks. A criterion for identifying the rank of tectonic stresses is suggested; it is based on models of tectonic stress distribution in the vicinity of fractures. Examples of results of tectonic stress studies are given. Studies of tectonic stresses based on geological indicators are of methodological and practical importance.

  13. Response Surface Methodology for the Optimization of Preparation of Biocomposites Based on Poly(lactic acid) and Durian Peel Cellulose

    PubMed Central

    Penjumras, Patpen; Abdul Rahman, Russly; Talib, Rosnita A.; Abdan, Khalina

    2015-01-01

    Response surface methodology was used to optimize the preparation of biocomposites based on poly(lactic acid) and durian peel cellulose. The effects of cellulose loading, mixing temperature, and mixing time on tensile strength and impact strength were investigated. A central composite design was employed to determine the optimum preparation condition of the biocomposites that yields the highest tensile strength and impact strength. A second-order polynomial model was developed for predicting the tensile strength and impact strength based on the composite design. It was found that the composites were best fit by a quadratic regression model with a high coefficient of determination (R²). The selected optimum condition was 35 wt.% cellulose loading at 165°C and 15 min of mixing, leading to a desirability of 94.6%. Under the optimum condition, the tensile strength and impact strength of the biocomposites were 46.207 MPa and 2.931 kJ/m², respectively. PMID:26167523
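
    A generic second-order response-surface fit of this kind can be sketched as below; the design points and the toy response are synthetic, not the measured biocomposite data.

```python
# Fit a quadratic response surface in cellulose loading, temperature and mixing
# time, then evaluate it at a candidate optimum.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(7)
# Columns: cellulose loading [wt.%], temperature [C], mixing time [min]
X = rng.uniform([20, 150, 5], [50, 180, 25], size=(20, 3))
tensile = 30 + 0.5 * X[:, 0] - 0.01 * (X[:, 0] - 35) ** 2 + rng.normal(0, 1, 20)  # toy response

rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, tensile)
print("R^2 on design points:", round(rsm.score(X, tensile), 3))
print("predicted tensile strength at (35 wt.%, 165 C, 15 min):",
      rsm.predict([[35, 165, 15]]).round(2))
```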

  14. Risk prediction model: Statistical and artificial neural network approach

    NASA Astrophysics Data System (ADS)

    Paiman, Nuur Azreen; Hariri, Azian; Masood, Ibrahim

    2017-04-01

    Prediction models are gaining popularity and have been used in numerous areas of study to complement clinical reasoning and decision making. The adoption of such models assists physicians' decision making and individuals' behaviour, and consequently improves individual outcomes and the cost-effectiveness of care. The objective of this paper is to review articles related to risk prediction models in order to understand suitable approaches to their development and validation. A qualitative review of the aims, methods, and main outcomes of nineteen published articles that developed risk prediction models in numerous fields was carried out. The paper also reviews how researchers develop and validate risk prediction models based on statistical and artificial neural network approaches. From this review, some methodological recommendations for developing and validating prediction models are highlighted. According to the studies reviewed, artificial neural network approaches to developing prediction models were more accurate than statistical approaches. However, only limited published literature currently discusses which approach is more accurate for risk prediction model development.

  15. Predicting survival in critical patients by use of body temperature regularity measurement based on approximate entropy.

    PubMed

    Cuesta, D; Varela, M; Miró, P; Galdós, P; Abásolo, D; Hornero, R; Aboy, M

    2007-07-01

    Body temperature is a classical diagnostic tool for a number of diseases. However, it is usually employed as a plain binary classification function (febrile or not febrile), and therefore its diagnostic power has not been fully developed. In this paper, we describe how body temperature regularity can be used for diagnosis. Our proposed methodology is based on obtaining accurate long-term temperature recordings at high sampling frequencies and analyzing the temperature signal using a regularity metric (approximate entropy). In this study, we assessed our methodology using temperature registers acquired from patients with multiple organ failure admitted to an intensive care unit. Our results indicate there is a correlation between the patient's condition and the regularity of the body temperature. This finding enabled us to design a classifier for two outcomes (survival or death) and test it on a dataset including 36 subjects. The classifier achieved an accuracy of 72%.
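
    A compact approximate entropy implementation in the standard textbook form is sketched below; the embedding dimension and tolerance follow common defaults and are not necessarily those used in the cited study.

```python
# Approximate entropy (ApEn) of a signal: lower values indicate a more regular series.
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()                                   # tolerance

    def phi(m):
        n = len(x) - m + 1
        emb = np.array([x[i:i + m] for i in range(n)])       # embedded vectors
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)  # Chebyshev
        counts = np.sum(dist <= r, axis=1) / n               # includes self-match
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(8)
regular = np.sin(np.linspace(0, 20 * np.pi, 600))            # highly regular signal
irregular = regular + rng.normal(0, 0.5, 600)                # noisier, less regular
print("ApEn regular  :", round(approximate_entropy(regular), 3))
print("ApEn irregular:", round(approximate_entropy(irregular), 3))
```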

  16. Control and communication co-design: analysis and practice on performance improvement in distributed measurement and control system based on fieldbus and Ethernet.

    PubMed

    Liang, Geng

    2015-01-01

    In this paper, improving the control performance of a networked control system by reducing DTD is investigated from a different perspective. Two different network architectures for system implementation are presented. Analysis of, and improvements to, DTD in the experimental control system are expounded. The effects of control scheme configuration on DTD in the form of function blocks (FBs) are investigated, and corresponding improvements through reallocation of FBs and rearrangement of the schedule table are proposed. Issues of DTD in a hybrid network are investigated, and corresponding approaches to improve performance, including (1) reducing DTD in a PLC or PAC by way of IEC 61499 and (2) cascade Smith predictive control with BPNN-based identification, are proposed and examined. Control effects under the proposed methodologies are also given. Experimental and field practice validated these methodologies. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Stereotype Threat and College Academic Performance: A Latent Variables Approach*

    PubMed Central

    Owens, Jayanti; Massey, Douglas S.

    2013-01-01

    Stereotype threat theory has gained experimental and survey-based support in helping explain the academic underperformance of minority students at selective colleges and universities. Stereotype threat theory states that minority students underperform because of pressures created by negative stereotypes about their racial group. Past survey-based studies, however, are characterized by methodological inefficiencies and potential biases: key theoretical constructs have only been measured using summed indicators and predicted relationships modeled using ordinary least squares. Using the National Longitudinal Survey of Freshman, this study overcomes previous methodological shortcomings by developing a latent construct model of stereotype threat. Theoretical constructs and equations are estimated simultaneously from multiple indicators, yielding a more reliable, valid, and parsimonious test of key propositions. Findings additionally support the view that social stigma can indeed have strong negative effects on the academic performance of pejoratively stereotyped racial-minority group members, not only in laboratory settings, but also in the real world. PMID:23950616

  18. In-line monitoring of the coffee roasting process with near infrared spectroscopy: Measurement of sucrose and colour.

    PubMed

    Santos, João Rodrigo; Viegas, Olga; Páscoa, Ricardo N M J; Ferreira, Isabel M P L V O; Rangel, António O S S; Lopes, João Almeida

    2016-10-01

    In this work, a real-time, in-situ analytical tool based on near infrared spectroscopy is proposed to predict two of the most relevant coffee parameters during the roasting process: sucrose content and colour. The methodology was developed taking into consideration different coffee varieties (Arabica and Robusta), origins (Brazil, East Timor, India and Uganda) and roasting procedures (slow and fast). All near infrared spectroscopy calibrations were developed using partial least squares regression. The results demonstrate the suitability of the methodology, with a range-error-ratio above 10 and a coefficient of determination above 0.85 for all modelled parameters. The relationship between sucrose and colour development during roasting is further discussed, with a view to designing, in real time, coffee products with similar visual appearance but distinct organoleptic profiles. Copyright © 2016 Elsevier Ltd. All rights reserved.
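
    A minimal sketch of a PLS calibration of this kind can be written with scikit-learn. The spectra file, reference values and number of latent variables below are placeholders rather than the study's data or settings.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error, r2_score
        from sklearn.model_selection import train_test_split

        X = np.load("nir_spectra.npy")        # hypothetical (n_samples, n_wavelengths)
        y = np.load("sucrose_reference.npy")  # hypothetical reference values

        X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

        pls = PLSRegression(n_components=8)   # number of latent variables is illustrative
        pls.fit(X_cal, y_cal)
        y_pred = pls.predict(X_val).ravel()

        rmsep = np.sqrt(mean_squared_error(y_val, y_pred))
        rer = (y_val.max() - y_val.min()) / rmsep          # range-error-ratio
        print(f"R2 = {r2_score(y_val, y_pred):.2f}, RER = {rer:.1f}")

    The range-error-ratio is taken here as the span of the reference values divided by the prediction error, so values above 10 indicate that the calibration error is small relative to the measured range.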

  19. Challenges in Rotorcraft Acoustic Flight Prediction and Validation

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.

    2003-01-01

    Challenges associated with rotorcraft acoustic flight prediction and validation are examined. First, an outline of a state-of-the-art rotorcraft aeroacoustic prediction methodology is presented, and its components, including rotorcraft aeromechanics, high-resolution reconstruction, and rotorcraft acoustic prediction, are discussed. Next, to illustrate the challenges and issues involved, a case study is presented in which flight data from a specific XV-15 tiltrotor acoustic flight test are analyzed in detail. Issues related to validating methodologies against flight test data are addressed. Primary flight parameters such as velocity, altitude, and attitude are compared for repeated flight conditions, and other measured steady-state flight conditions are examined for consistency and steadiness. A representative example prediction is presented, and suggestions are made for future research.

  20. Design of experiments-based monitoring of critical quality attributes for the spray-drying process of insulin by NIR spectroscopy.

    PubMed

    Maltesen, Morten Jonas; van de Weert, Marco; Grohganz, Holger

    2012-09-01

    Moisture content and aerodynamic particle size are critical quality attributes for spray-dried protein formulations. In this study, spray-dried insulin powders intended for pulmonary delivery were produced applying a design-of-experiments methodology. Near infrared spectroscopy (NIR), in combination with preprocessing and multivariate analysis in the form of partial least squares projections to latent structures (PLS), was used to correlate the spectral data with moisture content and with aerodynamic particle size measured by a time-of-flight principle. The PLS models predicting the moisture content were based on the chemical information of the water molecules in the NIR spectrum and yielded prediction errors (RMSEP) between 0.39% and 0.48%, with thermogravimetric analysis as the reference method. The PLS models predicting the aerodynamic particle size were based on the baseline offset in the NIR spectra and yielded prediction errors between 0.27 and 0.48 μm. The morphology of the spray-dried particles had a significant impact on the predictive ability of the models: good predictive models could be obtained for spherical particles, with a calibration error (RMSECV) of 0.22 μm, whereas wrinkled particles resulted in much less robust models with a Q² of 0.69. Based on these results, NIR is a suitable tool for process analysis of the spray-drying process and for control of moisture content and particle size, in particular for smooth, spherical particles.
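
    The two error measures quoted above can be reproduced in outline as follows; scikit-learn stands in for whatever chemometrics software was actually used, and the data files, component count and cross-validation scheme are assumptions made for the sketch.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import cross_val_predict

        X_cal = np.load("nir_calibration_spectra.npy")    # hypothetical calibration spectra
        y_cal = np.load("moisture_reference_cal.npy")     # e.g. reference moisture content
        X_test = np.load("nir_test_spectra.npy")          # hypothetical independent test set
        y_test = np.load("moisture_reference_test.npy")

        pls = PLSRegression(n_components=4)               # component count is illustrative

        # RMSECV: error of cross-validated predictions within the calibration set.
        y_cv = cross_val_predict(pls, X_cal, y_cal, cv=10).ravel()
        rmsecv = np.sqrt(mean_squared_error(y_cal, y_cv))

        # RMSEP: error on samples never used for calibration.
        pls.fit(X_cal, y_cal)
        y_pred = pls.predict(X_test).ravel()
        rmsep = np.sqrt(mean_squared_error(y_test, y_pred))

        print(f"RMSECV = {rmsecv:.3f}, RMSEP = {rmsep:.3f}")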
