Some considerations on the use of ecological models to predict species' geographic distributions
Peterjohn, B.G.
2001-01-01
Peterson (2001) used Genetic Algorithm for Rule-set Prediction (GARP) models to predict distribution patterns from Breeding Bird Survey (BBS) data. Evaluations of these models should consider inherent limitations of BBS data: (1) BBS methods may not sample species and habitats equally; (2) using BBS data for both model development and testing may overlook poor fit of some models; and (3) BBS data may not provide the desired spatial resolution or capture temporal changes in species distributions. The predictive value of GARP models requires additional study, especially comparisons with distribution patterns from independent data sets. When employed at appropriate temporal and geographic scales, GARP models show considerable promise for conservation biology applications but provide limited inferences concerning processes responsible for the observed patterns.
Considerations of the Use of 3-D Geophysical Models to Predict Test Ban Monitoring Observables
2007-09-01
predict first P arrival times. Since this is a 3-D model, the travel times are predicted with a 3-D finite-difference code solving the eikonal equations...for the eikonal wave equation should provide more accurate predictions of travel-time from 3D models. These techniques and others are being
Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.
2009-01-01
An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was obvious. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method based on a trimmed mean resulted in a single somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
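As a rough illustration of the deterministic trimmed-mean combination described above, the sketch below combines hypothetical ensemble predictions in Python; the values and the 10% trim fraction are invented for the example, not taken from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical change in mean annual discharge (%) predicted by 10 models
# for one land use scenario and catchment.
preds = np.array([4.1, 5.3, 2.8, 6.0, 5.1, 3.9, 12.5, 4.7, 5.6, -1.2])

# Deterministic combination: the trimmed mean discards the most extreme
# members (here 10% from each tail) before averaging.
combined = stats.trim_mean(preds, proportiontocut=0.1)

# Ensemble spread as a crude indicator of model-structure uncertainty;
# the REA method would weight members by reliability instead.
spread = preds.max() - preds.min()
print(f"trimmed-mean prediction: {combined:.2f}%, ensemble range: {spread:.1f}%")
```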
Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...
Mathematical model for predicting human vertebral fracture
NASA Technical Reports Server (NTRS)
Benedict, J. V.
1973-01-01
Mathematical model has been constructed to predict dynamic response of tapered, curved beam columns, inasmuch as human spine closely resembles this form. Model takes into consideration effects of impact force, mass distribution, and material properties. Solutions were verified by dynamic tests on curved, tapered, elastic polyethylene beam.
Experimental Evaluation of Balance Prediction Models for Sit-to-Stand Movement in the Sagittal Plane
Pena Cabra, Oscar David; Watanabe, Takashi
2013-01-01
Evaluation of balance control ability would become important in rehabilitation training. In this paper, in order to clarify the usefulness and limitations of a traditional simple inverted pendulum model for balance prediction in sit-to-stand movements, the traditional simple model was compared to an inertia (rotational radius) variable inverted pendulum model that includes multiple-joint influence in the balance predictions. The predictions were tested in experiments with six healthy subjects. The evaluation showed that the multiple-joint influence model is more accurate in predicting balance under demanding sit-to-stand conditions. On the other hand, the evaluation also showed that the traditionally used simple inverted pendulum model is still reliable in predicting balance during sit-to-stand movement under non-demanding (normal) conditions. In particular, the simple model was shown to be effective for sit-to-stand movements with low center of mass velocity at seat-off. Moreover, almost all trajectories under the normal condition seemed to follow the same control strategy, in which the subjects used more energy than the minimum necessary for standing up. This suggests that safety considerations take precedence over energy-efficiency considerations during sit-to-stand, since the most energy-efficient trajectory is close to the backward-fall boundary. PMID:24187580
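The backward-fall boundary and the minimum-energy idea mentioned above can be illustrated with an energy criterion for a simple (fixed-inertia) inverted pendulum; the sketch below uses assumed parameters (segment length, seat-off states) and is not the authors' model.

```python
import numpy as np

G, L = 9.81, 1.0   # gravity (m/s^2) and an assumed ankle-to-CoM length (m)

def reaches_upright(theta, omega):
    """Energy criterion for a point-mass inverted pendulum at the ankle.

    The centre of mass reaches the vertical if kinetic energy at seat-off
    covers the potential-energy gap to upright:
        0.5 * (L * omega)**2 >= G * L * (1 - cos(theta))
    theta: lean from vertical (rad); omega: angular velocity toward upright.
    """
    return 0.5 * (L * omega) ** 2 >= G * L * (1.0 - np.cos(theta))

# Two hypothetical seat-off states: same lean, different CoM velocities
for theta, omega in [(np.radians(20.0), 1.2), (np.radians(20.0), 0.8)]:
    verdict = "stands up" if reaches_upright(theta, omega) else "falls back"
    print(f"lean {np.degrees(theta):.0f} deg, omega {omega:.1f} rad/s: {verdict}")
```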
The dynamics of learning about a climate threshold
NASA Astrophysics Data System (ADS)
Keller, Klaus; McInerney, David
2008-02-01
Anthropogenic greenhouse gas emissions may trigger threshold responses of the climate system. One relevant example of such a potential threshold response is a shutdown of the North Atlantic meridional overturning circulation (MOC). Numerous studies have analyzed the problem of early MOC change detection (i.e., detection before the forcing has committed the system to a threshold response). Here we analyze the early MOC prediction problem. To this end, we virtually deploy an MOC observation system into a simple model that mimics potential future MOC responses and analyze the timing of confident detection and prediction. Our analysis suggests that a confident prediction of a potential threshold response can require century time scales, considerably longer than the time required for confident detection. The signal enabling early prediction of an approaching MOC threshold in our model study is associated with the rate at which the MOC intensity decreases for a given forcing. A faster MOC weakening implies a higher MOC sensitivity to forcing. An MOC sensitivity exceeding a critical level results in a threshold response. Determining whether an observed MOC trend in our model differs in a statistically significant way from an unforced scenario (the detection problem) imposes lower requirements on an observation system than determining whether the MOC will shut down in the future (the prediction problem). As a result, the virtual observation systems designed in our model for early detection of MOC changes might well fail at the task of early and confident prediction. Transferring this conclusion to the real world requires a considerably refined MOC model, as well as a more complete consideration of relevant observational constraints.
Biomechanics of injury prediction for anthropomorphic manikins - preliminary design considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engin, A.E.
1996-12-31
Anthropomorphic manikins are used in automobile safety research as well as in aerospace-related applications. There is now a strong need to advance biomechanics knowledge to determine appropriate criteria for predicting injury likelihood as functions of manikin-measured responses. In this paper, three regions of a manikin, namely the head, knee joint, and lumbar spine, are taken as examples to introduce preliminary design considerations for injury prediction by means of responses of theoretical models and strategically placed sensing devices.
Comparison of time series models for predicting campylobacteriosis risk in New Zealand.
Al-Sakkaf, A; Jones, G
2014-05-01
Predicting campylobacteriosis cases is a matter of considerable concern in New Zealand, after the number of notified cases became the highest among developed countries in 2006. Thus, there is a need for a model or tool that accurately predicts the number of campylobacteriosis cases, as the Microbial Risk Assessment Model previously used for this purpose failed to predict the actual case numbers accurately. We explore the appropriateness of classical time series modelling approaches for predicting campylobacteriosis. Finding the most appropriate time series model for New Zealand data has additional practical considerations given a possible structural change, that is, a specific and sudden change in response to the implemented interventions. A univariate methodological approach was used to predict monthly disease cases using New Zealand surveillance data of campylobacteriosis incidence from 1998 to 2009. The data from the years 1998 to 2008 were used to model the time series, with the year 2009 held out of the data set for model validation. The best two models were then fitted to the full 1998-2009 data and used to predict for each month of 2010. The Holt-Winters (multiplicative) and ARIMA (additive) intervention models were considered the best models for predicting campylobacteriosis in New Zealand. The prediction by the additive ARIMA model with intervention was slightly better than that of the Holt-Winters multiplicative method for the annual total in 2010, the former predicting only 23 cases fewer than the actual reported cases. It is confirmed that classical time series techniques such as ARIMA with intervention and Holt-Winters can provide good prediction performance for campylobacteriosis risk in New Zealand. The results reported by this study are useful to the New Zealand Health and Safety Authority's efforts in addressing the problem of the campylobacteriosis epidemic. © 2013 Blackwell Verlag GmbH.
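A minimal sketch of fitting the two model families named above to a hypothetical monthly series is shown below, using statsmodels; the synthetic series, model orders, and seasonal settings are illustrative assumptions, and a real intervention would enter the ARIMA model as an exogenous step dummy dated at the control measures.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical monthly case counts standing in for 1998-2008 surveillance data
rng = np.random.default_rng(0)
months = pd.date_range("1998-01", periods=132, freq="MS")
y = pd.Series(200 + 50*np.sin(2*np.pi*np.arange(132)/12)
              + rng.normal(0, 20, 132), index=months).clip(lower=1)

# Holt-Winters with multiplicative seasonality
hw = ExponentialSmoothing(y, trend="add", seasonal="mul",
                          seasonal_periods=12).fit()
hw_forecast = hw.forecast(12)              # hold-out year

# Seasonal ARIMA; an intervention dummy would be passed via `exog`
arima = ARIMA(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit()
arima_forecast = arima.forecast(12)
```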
SVM-Based System for Prediction of Epileptic Seizures from iEEG Signal
Cherkassky, Vladimir; Lee, Jieun; Veber, Brandon; Patterson, Edward E.; Brinkmann, Benjamin H.; Worrell, Gregory A.
2017-01-01
Objective: This paper describes a data-analytic modeling approach for prediction of epileptic seizures from intracranial electroencephalogram (iEEG) recordings of brain activity. Even though it is widely accepted that statistical characteristics of the iEEG signal change prior to seizures, robust seizure prediction remains a challenging problem due to the subject-specific nature of data-analytic modeling. Methods: Our work emphasizes understanding of clinical considerations important for iEEG-based seizure prediction, and proper translation of these clinical considerations into data-analytic modeling assumptions. Several design choices during pre-processing and post-processing are considered and investigated for their effect on seizure prediction accuracy. Results: Our empirical results show that the proposed SVM-based seizure prediction system can achieve robust prediction of preictal and interictal iEEG segments from dogs with epilepsy. The sensitivity is about 90–100%, and the false-positive rate is about 0–0.3 times per day. The results also suggest good prediction is subject-specific (dog or human), in agreement with earlier studies. Conclusion: Good prediction performance is possible only if the training data contain sufficiently many seizure episodes, i.e., at least 5–7 seizures. Significance: The proposed system uses subject-specific modeling and unbalanced training data. This system also utilizes three different time scales during training and testing stages. PMID:27362758
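As a sketch of the kind of classifier such a system might build on (not the authors' implementation), the following trains an RBF-kernel SVM on hypothetical preictal/interictal feature vectors, using class weighting as one common way of coping with unbalanced training data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: rows are iEEG segments, columns are spectral-power
# features; label 1 = preictal, 0 = interictal (rare positives).
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 24))
y = (rng.random(500) < 0.1).astype(int)

# class_weight="balanced" reweights errors to compensate for the rarity
# of preictal segments in the training data.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", C=1.0, gamma="scale",
                        class_weight="balanced"))
clf.fit(X, y)

# Post-processing (e.g. smoothing decisions over time windows) would
# operate on these continuous scores rather than raw labels.
scores = clf.decision_function(X)
```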
Evaluation of a habitat capability model for nongame birds in the Black Hills, South Dakota
Todd R. Mills; Mark A. Rumble; Lester D. Flake
1996-01-01
Habitat models, used to predict consequences of land management decisions on wildlife, can have considerable economic effect on management decisions. The Black Hills National Forest uses such a habitat capability model (HABCAP), but its accuracy is largely unknown. We tested this model's predictive accuracy for nongame birds in 13 vegetative structural stages of...
Comparing predictions of extinction risk using models and subjective judgement
NASA Astrophysics Data System (ADS)
McCarthy, Michael A.; Keith, David; Tietjen, Justine; Burgman, Mark A.; Maunder, Mark; Master, Larry; Brook, Barry W.; Mace, Georgina; Possingham, Hugh P.; Medellin, Rodrigo; Andelman, Sandy; Regan, Helen; Regan, Tracey; Ruckelshaus, Mary
2004-10-01
Models of population dynamics are commonly used to predict risks in ecology, particularly risks of population decline. There is often considerable uncertainty associated with these predictions. However, alternatives to predictions based on population models have not been assessed. We used simulation models of hypothetical species to generate the kinds of data that might typically be available to ecologists and then invited other researchers to predict risks of population declines using these data. The accuracy of the predictions was assessed by comparison with the forecasts of the original model. The researchers used either population models or subjective judgement to make their predictions. Predictions made using models were only slightly more accurate than subjective judgements of risk. However, predictions using models tended to be unbiased, while subjective judgements were biased towards over-estimation. Psychology literature suggests that the bias of subjective judgements is likely to vary somewhat unpredictably among people, depending on their stake in the outcome. This will make subjective predictions more uncertain and less transparent than those based on models.
Prediction in processing is a by-product of language learning.
Chang, Franklin; Kidd, Evan; Rowland, Caroline F
2013-08-01
Both children and adults predict the content of upcoming language, suggesting that prediction is useful for learning as well as processing. We present an alternative model which can explain prediction behaviour as a by-product of language learning. We suggest that a consideration of language acquisition places important constraints on Pickering & Garrod's (P&G's) theory.
Mulhearn, Tyler J; Watts, Logan L; Todd, E Michelle; Medeiros, Kelsey E; Connelly, Shane; Mumford, Michael D
2017-01-01
Although recent evidence suggests ethics education can be effective, the nature of specific training programs, and their effectiveness, varies considerably. Building on a recent path modeling effort, the present study developed and validated a predictive modeling tool for responsible conduct of research education. The predictive modeling tool allows users to enter ratings in relation to a given ethics training program and receive instantaneous evaluative information for course refinement. Validation work suggests the tool's predicted outcomes correlate strongly (r = 0.46) with objective course outcomes. Implications for training program development and refinement are discussed.
Achieving Maximum Crack Remediation Effect from Optimized Hydrotesting
DOT National Transportation Integrated Search
2011-06-15
This project developed and validated models that will allow the industry to predict the overall benefits of hydrotests. Such a prediction is made with a consideration of various characteristics of a pipeline including the type of operation, stage of ...
Application of an Integrated HPC Reliability Prediction Framework to HMMWV Suspension System
2010-09-13
model number M966 (TOW Missile Carrier, Basic Armor without weapons), since they were available. Tires used for all simulations were the bias-type...vehicle fleet, including consideration of all kinds of uncertainty, especially including model uncertainty. The end result will be a tool to use...building an adequate vehicle reliability prediction framework for military vehicles is the accurate modeling of the integration of various types of
Perotte, Adler; Ranganath, Rajesh; Hirsch, Jamie S; Blei, David; Elhadad, Noémie
2015-07-01
As adoption of electronic health records continues to increase, there is an opportunity to incorporate clinical documentation as well as laboratory values and demographics into risk prediction modeling. The authors develop a risk prediction model for chronic kidney disease (CKD) progression from stage III to stage IV that includes longitudinal data and features drawn from clinical documentation. The study cohort consisted of 2908 primary-care clinic patients who had at least three visits prior to January 1, 2013 and developed CKD stage III during their documented history. Development and validation cohorts were randomly selected from this cohort and the study datasets included longitudinal inpatient and outpatient data from these populations. Time series analysis (Kalman filter) and survival analysis (Cox proportional hazards) were combined to produce a range of risk models. These models were evaluated using concordance, a discriminatory statistic. A risk model incorporating longitudinal data on clinical documentation and laboratory test results (concordance 0.849) predicts progression from stage III CKD to stage IV CKD more accurately when compared to a similar model without laboratory test results (concordance 0.733, P < .001), a model that only considers the most recent laboratory test results (concordance 0.819, P < .031) and a model based on estimated glomerular filtration rate (concordance 0.779, P < .001). A risk prediction model that takes longitudinal laboratory test results and clinical documentation into consideration can predict CKD progression from stage III to stage IV more accurately than three models that do not take all of these variables into consideration. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
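A minimal sketch of the survival-analysis half of such a pipeline is shown below, assuming the lifelines library; the feature names (egfr_smoothed, note_topic) and data are invented, and the Kalman-filter step that would produce the smoothed longitudinal feature is omitted.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Synthetic cohort: follow-up time (months) to CKD stage IV or censoring,
# an event flag, a smoothed eGFR level, and a note-derived topic score.
rng = np.random.default_rng(42)
n = 200
egfr = rng.normal(50, 6, n)
topic = rng.random(n)
risk = np.exp(-0.05 * (egfr - 50) + 1.0 * topic)
time = rng.exponential(36.0 / risk)
df = pd.DataFrame({"time": np.minimum(time, 60.0),
                   "event": (time < 60).astype(int),   # censored at 60 months
                   "egfr_smoothed": egfr,
                   "note_topic": topic})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Concordance: how often the patient with higher predicted risk progresses first
c = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
```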
Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures
NASA Astrophysics Data System (ADS)
Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin
2018-07-01
Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed with a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonably accurate results, except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
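Empirical formulas of the kind described can be fitted by ordinary least squares in log space; the sketch below assumes a power-law form Qp = a·Hw^b·Vw^c and uses a handful of invented records rather than the study's 40-dam database.

```python
import numpy as np

# Invented dam-failure records: Hw = water depth above breach invert (m),
# Vw = volume stored above breach invert (10^6 m^3), Qp = peak outflow (m^3/s).
Hw = np.array([9.5, 12.2, 6.1, 28.0, 15.5])
Vw = np.array([3.0, 22.0, 0.7, 310.0, 38.0])
Qp = np.array([650.0, 4200.0, 180.0, 65000.0, 7900.0])

# Fit log Qp = log a + b log Hw + c log Vw by least squares
A = np.column_stack([np.ones_like(Hw), np.log(Hw), np.log(Vw)])
coef, *_ = np.linalg.lstsq(A, np.log(Qp), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]

def peak_outflow(hw, vw):
    """Predicted peak outflow (m^3/s) from the fitted power law."""
    return a * hw**b * vw**c
```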
Eco-genetic modeling of contemporary life-history evolution.
Dunlop, Erin S; Heino, Mikko; Dieckmann, Ulf
2009-10-01
We present eco-genetic modeling as a flexible tool for exploring the course and rates of multi-trait life-history evolution in natural populations. We build on existing modeling approaches by combining features that facilitate studying the ecological and evolutionary dynamics of realistically structured populations. In particular, the joint consideration of age and size structure enables the analysis of phenotypically plastic populations with more than a single growth trajectory, and ecological feedback is readily included in the form of density dependence and frequency dependence. Stochasticity and life-history trade-offs can also be implemented. Critically, eco-genetic models permit the incorporation of salient genetic detail such as a population's genetic variances and covariances and the corresponding heritabilities, as well as the probabilistic inheritance and phenotypic expression of quantitative traits. These inclusions are crucial for predicting rates of evolutionary change on both contemporary and longer timescales. An eco-genetic model can be tightly coupled with empirical data and therefore may have considerable practical relevance, in terms of generating testable predictions and evaluating alternative management measures. To illustrate the utility of these models, we present as an example an eco-genetic model used to study harvest-induced evolution of multiple traits in Atlantic cod. The predictions of our model (most notably that harvesting induces a genetic reduction in age and size at maturation, an increase or decrease in growth capacity depending on the minimum-length limit, and an increase in reproductive investment) are corroborated by patterns observed in wild populations. The predicted genetic changes occur together with plastic changes that could phenotypically mask the former. Importantly, our analysis predicts that evolutionary changes show few signs of reversal following a harvest moratorium. This illustrates how predictions offered by eco-genetic models can enable and guide evolutionarily sustainable resource management.
DOT National Transportation Integrated Search
1976-04-30
A simple and a more detailed mathematical model for the simulation of train collisions are presented. The study provides considerable insight into the causes and consequences of train motions on impact. Comparison of model predictions with two full ...
PSO-MISMO modeling strategy for multistep-ahead time series prediction.
Bao, Yukun; Xiong, Tao; Hu, Zhongyi
2014-05-01
Multistep-ahead time series prediction is one of the most challenging research topics in the field of time series modeling and prediction, and is continually under research. Recently, the multiple-input several multiple-outputs (MISMO) modeling strategy has been proposed as a promising alternative for multistep-ahead time series prediction, exhibiting advantages compared with the two currently dominating strategies, the iterated and the direct strategies. Built on the established MISMO strategy, this paper proposes a particle swarm optimization (PSO)-based MISMO modeling strategy, which is capable of determining the number of sub-models in a self-adaptive mode, with varying prediction horizons. Rather than deriving crisp divides with equal-sized prediction horizons as in the established MISMO, the proposed PSO-MISMO strategy, implemented with neural networks, employs a heuristic to create flexible divides with varying sizes of prediction horizons and to generate corresponding sub-models, providing considerable flexibility in model construction, which has been validated with simulated and real datasets.
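A minimal sketch of the MISMO idea with flexible divides follows: a 12-step horizon is split into unequal sub-horizons, each handled by its own multi-output neural sub-model. The divide sizes are fixed by hand here; in the paper, the PSO search chooses them self-adaptively.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
series = np.sin(np.arange(400) / 8.0) + rng.normal(0, 0.1, 400)

lag, H = 24, 12            # input window and total prediction horizon
divides = [4, 3, 5]        # flexible sub-horizon sizes (sum to H); a PSO
                           # search would choose these instead of equal sizes

# Build (input window -> next H values) training pairs
n = len(series) - lag - H
X = np.array([series[i:i + lag] for i in range(n)])
Y = np.array([series[i + lag:i + lag + H] for i in range(n)])

# One multi-output sub-model per divide
models, start = [], 0
for size in divides:
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    m.fit(X, Y[:, start:start + size])
    models.append(m)
    start += size

# Forecast: concatenate the sub-models' outputs
last_window = series[-lag:].reshape(1, -1)
forecast = np.concatenate([m.predict(last_window).ravel() for m in models])
```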
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more...
Prediction of porosity of food materials during drying: Current challenges and directions.
Joardder, Mohammad U H; Kumar, C; Karim, M A
2017-07-18
Pore formation in food samples is a common physical phenomenon observed during dehydration processes. The pore evolution during drying significantly affects the physical properties and quality of dried foods. Therefore, it should be taken into consideration when predicting transport processes in the drying sample. Characteristics of pore formation depend on the drying process parameters, product properties and processing time. Understanding the physics of pore formation and evolution during drying will assist in accurately predicting the drying kinetics and quality of food materials. Researchers have been trying to develop mathematical models to describe pore formation and evolution during drying. In this study, existing porosity models are critically analysed and their limitations are identified. Better insight into the factors affecting porosity is provided, and suggestions are proposed to overcome the limitations. These include consideration of process parameters such as glass transition temperature, sample temperature, and variable material properties in the porosity models. Several researchers have proposed models for porosity prediction of food materials during drying. However, these models are either very simplistic or empirical in nature and fail to consider significant factors that influence porosity. In-depth understanding of pore characteristics is required to develop a generic porosity model. A micro-level analysis of pore formation is presented for better understanding, which will help in developing an accurate and generic porosity model.
Cañadas, P; Laurent, V M; Chabrand, P; Isabey, D; Wendling-Mansuy, S
2003-11-01
The visco-elastic properties of living cells, measured to date by various authors, vary considerably, depending on the experimental methods and/or on the theoretical models used. In the present study, two mechanisms thought to be involved in cellular visco-elastic responses were analysed, based on the idea that the cytoskeleton plays a fundamental role in cellular mechanical responses. For this purpose, the predictions of an open unit-cell model and a 30-element visco-elastic tensegrity model were tested, taking into consideration similar properties of the constitutive F-actin. The quantitative predictions of the time constant and viscosity modulus obtained by both models were compared with previously published experimental data obtained from living cells. The small viscosity modulus values (10^0-10^3 Pa·s) predicted by the tensegrity model may reflect the combined contributions of the spatially rearranged constitutive filaments and the internal tension to the overall cytoskeleton response to external loading. In contrast, the high viscosity modulus values (10^3-10^5 Pa·s) predicted by the unit-cell model may rather reflect the mechanical response of the cytoskeleton to the bending of the constitutive filaments and/or to the deformation of internal components. The present results suggest the existence of a close link between the overall visco-elastic response of micromanipulated cells and the underlying architecture.
Kast, Karin; Schmutzler, Rita K; Rhiem, Kerstin; Kiechle, Marion; Fischer, Christine; Niederacher, Dieter; Arnold, Norbert; Grimm, Tiemo; Speiser, Dorothee; Schlegelberger, Brigitte; Varga, Dominic; Horvath, Judit; Beer, Marit; Briest, Susanne; Meindl, Alfons; Engel, Christoph
2014-11-15
The Manchester scoring system (MSS) allows the calculation of the probability for the presence of mutations in BRCA1 or BRCA2 genes in families suspected of having hereditary breast and ovarian cancer. In 9,390 families, we determined the predictive performance of the MSS without (MSS-2004) and with (MSS-2009) consideration of pathology parameters. Moreover, we validated a recalibrated version of the MSS-2009 (MSS-recal). Families were included in the registry of the German Consortium for Hereditary Breast and Ovarian Cancer, using defined clinical criteria. Receiver operating characteristic (ROC) analysis was used to determine the predictive performance. The recalibrated model was developed using logistic regression analysis and tested using an independent random validation sample. The area under the ROC curves regarding a mutation in any of the two BRCA genes was 0.77 (95%CI 0.75-0.79) for MSS-2004, 0.80 (95%CI 0.78-0.82) for MSS-2009, and 0.82 (95%CI 0.80-0.83) for MSS-recal. Sensitivity at the 10% mutation probability cutoff was similar for all three models (MSS-2004 92.2%, MSS-2009 92.2%, and MSS-recal 90.3%), but the specificity of MSS-recal (46.0%) was considerably higher than that of MSS-2004 (25.4%) and MSS-2009 (32.3%). In the MSS-recal model, almost all predictors of the original MSS were significantly predictive. However, the score values of some predictors, for example, high-grade triple-negative breast cancers, differed considerably from the originally proposed score values. The original MSS performed well in our sample of high-risk families. The use of pathological parameters increased the predictive performance significantly. Recalibration improved the specificity considerably without losing much sensitivity. © 2014 UICC.
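Recalibrating a scoring system against registry data can be sketched as refitting a logistic model and evaluating the area under the ROC curve; the example below uses synthetic data and only illustrates the procedure, not the MSS-recal model itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic per-family data: predictor scores standing in for MSS items
# (cancer counts by age band, pathology flags) and observed mutation status.
rng = np.random.default_rng(2)
X = rng.normal(size=(400, 6))
y_true = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 400) > 1).astype(int)

# Recalibration: refit the weights on registry data instead of keeping
# the originally published score values.
model = LogisticRegression().fit(X, y_true)
p = model.predict_proba(X)[:, 1]

auc = roc_auc_score(y_true, p)     # area under the ROC curve
flagged = p >= 0.10                # the 10% mutation-probability cutoff
```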
Stochastic Modeling and Global Warming Trend Extraction For Ocean Acoustic Travel Times.
1995-01-06
consideration and that these models cannot currently be relied upon by themselves to predict global warming. Experimental data is most certainly needed, not only to measure global warming itself, but also to help improve the ocean models themselves. (AN)
Comparing flood loss models of different complexity
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2013-04-01
Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from post-event surveys. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explanatory variables.
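For illustration, the sketch below contrasts a depth-only stage-damage curve with a multivariable regression tree on synthetic damage records; the variable names and data are invented, and the Bayesian-network variant is omitted.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Synthetic damage records: depth (m), duration (h), building area (m^2),
# precaution indicator -> relative loss in [0, 1].
rng = np.random.default_rng(3)
X = np.column_stack([rng.uniform(0, 3, 300),      # water depth
                     rng.uniform(1, 96, 300),     # inundation duration
                     rng.uniform(80, 400, 300),   # building area
                     rng.integers(0, 2, 300)])    # precautionary measures
rloss = np.clip(0.2 * X[:, 0] + 0.001 * X[:, 1] - 0.1 * X[:, 3]
                + rng.normal(0, 0.05, 300), 0, 1)

# Simple stage-damage curve: loss as a function of water depth only
stage_damage = np.polyfit(X[:, 0], rloss, deg=2)

# More complex model: a regression tree over all four variables;
# transferability would be judged on records from a different event/region.
tree = DecisionTreeRegressor(max_depth=4).fit(X, rloss)
```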
NASA Astrophysics Data System (ADS)
Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.
2016-01-01
In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance compared with the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
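A minimal sketch of the combined model follows: a quadratic polynomial with one periodic component is fitted by least squares, an ARIMA model is fitted to the residuals, and the two forecasts are summed. The synthetic series, the 12 h period, and the ARIMA order are illustrative assumptions, not values from the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic SCB series (ns), sampled every 15 minutes
rng = np.random.default_rng(4)
t = np.arange(96) * 0.25        # hours
scb = (5.0 + 0.8*t + 0.001*t**2 + 0.4*np.sin(2*np.pi*t/12)
       + rng.normal(0, 0.05, t.size))

def design(tt):
    """Quadratic trend plus one periodic component (period 12 h assumed)."""
    return np.column_stack([np.ones_like(tt), tt, tt**2,
                            np.sin(2*np.pi*tt/12), np.cos(2*np.pi*tt/12)])

beta, *_ = np.linalg.lstsq(design(t), scb, rcond=None)
resid = scb - design(t) @ beta

# ARIMA model for the stochastic residual part
arima = ARIMA(resid, order=(1, 0, 1)).fit()

# Combined prediction = deterministic part + residual forecast
t_new = t[-1] + 0.25 * np.arange(1, 13)
pred = design(t_new) @ beta + arima.forecast(12)
```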
Mechanistic modeling of developmental defects through computational embryology (WC10th)
Abstract: An important consideration for 3Rs is to identify developmental hazards utilizing mechanism-based in vitro assays (e.g., ToxCast) and in silico predictive models. Steady progress has been made with agent-based models that recapitulate morphogenetic drivers for angiogen...
NASA Astrophysics Data System (ADS)
Hustim, M.; Arifin, Z.; Aly, S. H.; Ramli, M. I.; Zakaria, R.; Liputo, A.
2018-04-01
This research aimed to predict the noise produced by traffic in the road network of Makassar City using the ASJ-RTN Model 2008, taking horn sounds into account. Observations were taken at 37 roadside survey points, conducted from 06.00 to 18.00 and from 06.00 to 21.00; the vehicle classes observed were motorcycles (MC), light vehicles (LV) and heavy vehicles (HV). The observed data were traffic volume, vehicle speed, number of horns and traffic noise, measured using a Tenmars TM-103 sound level meter. The results indicate that the noise prediction model incorporating horn sounds produces an average noise level of 78.5 dB, with a Pearson correlation of 0.95 and an RMSE of 0.87. Therefore, the ASJ-RTN Model 2008 with horn sounds included is sufficiently good for predicting noise levels.
Tree-based flood damage modeling of companies: Damage processes and model performance
NASA Astrophysics Data System (ADS)
Sieg, Tobias; Vogel, Kristin; Merz, Bruno; Kreibich, Heidi
2017-07-01
Reliable flood risk analyses, including the estimation of damage, are an important prerequisite for efficient risk management. However, not much is known about flood damage processes affecting companies. Thus, we conduct a flood damage assessment of companies in Germany with regard to two aspects. First, we identify relevant damage-influencing variables. Second, we assess the prediction performance of the developed damage models with respect to the gain by using an increasing amount of training data and a sector-specific evaluation of the data. Random forests are trained with data from two postevent surveys after flood events occurring in the years 2002 and 2013. For a sector-specific consideration, the data set is split into four subsets corresponding to the manufacturing, commercial, financial, and service sectors. Further, separate models are derived for three different company assets: buildings, equipment, and goods and stock. Calculated variable importance values reveal different variable sets relevant for the damage estimation, indicating significant differences in the damage process for various company sectors and assets. With an increasing number of data used to build the models, prediction errors decrease. Yet the effect is rather small and seems to saturate for a data set size of several hundred observations. In contrast, the prediction improvement achieved by a sector-specific consideration is more distinct, especially for damage to equipment and goods and stock. Consequently, sector-specific data acquisition and a consideration of sector-specific company characteristics in future flood damage assessments is expected to improve the model performance more than a mere increase in data.
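The variable-importance step can be sketched with scikit-learn's random forest on synthetic records for one sector and asset type; the feature names below are invented stand-ins for the survey variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
features = ["water_depth", "duration", "company_size",
            "precaution", "contamination"]

# Synthetic survey records for one sector/asset (e.g. manufacturing, equipment)
X = rng.normal(size=(250, len(features)))
loss = 0.6 * X[:, 0] + 0.3 * X[:, 4] + rng.normal(0, 0.2, 250)

rf = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X, loss)

# Importance values indicate which damage-influencing variables matter for
# this sector; a separate forest would be trained per sector and asset.
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda p: -p[1]):
    print(f"{name:14s} {imp:.3f}")
```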
Statistical considerations on prognostic models for glioma
Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.
2016-01-01
Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835
Monte Carlo Simulation of Microscopic Stock Market Models
NASA Astrophysics Data System (ADS)
Stauffer, Dietrich
Computer simulations with random numbers, that is, Monte Carlo methods, have been applied extensively in recent years to model the fluctuations of stock market or currency exchange rates. Here we concentrate on the percolation model of Cont and Bouchaud, to simulate, not to predict, the market behavior.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
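The Monte Carlo step can be sketched as follows: sample parameters from their stated extreme ranges, push them through the model, and take quantiles of the outputs, adding dependent-variable noise for the prediction interval. The model function and ranges below are invented placeholders, not the paper's ground-water model.

```python
import numpy as np

rng = np.random.default_rng(6)

def model_output(k, recharge):
    """Stand-in for the calibrated model's predicted head (m)."""
    return 12.0 + 3.0 * np.log(k) + 0.5 * recharge

# Sample parameters uniformly over assumed extreme ranges
n = 20000
k = rng.uniform(0.5, 5.0, n)          # hydraulic conductivity
recharge = rng.uniform(0.0, 2.0, n)   # recharge rate
outputs = model_output(k, recharge)

# Confidence interval: parameter uncertainty only
ci = np.quantile(outputs, [0.025, 0.975])

# Prediction interval: add random error in the dependent variable
pi = np.quantile(outputs + rng.normal(0, 0.8, n), [0.025, 0.975])
```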
Predicting Market Impact Costs Using Nonparametric Machine Learning Models.
Park, Saerom; Lee, Jaewook; Son, Youngdoo
2016-01-01
Market impact cost is the most significant portion of implicit transaction costs that can reduce the overall transaction cost, although it cannot be measured directly. In this paper, we employed state-of-the-art nonparametric machine learning models: neural networks, Bayesian neural network, Gaussian process, and support vector regression, to predict market impact cost accurately and to provide the predictive model that is versatile in the number of variables. We collected a large amount of real single transaction data of the US stock market from Bloomberg Terminal and generated three independent input variables. As a result, most nonparametric machine learning models outperformed a state-of-the-art benchmark parametric model, the I-star model, in four error measures. Although these models encounter certain difficulties in separating the permanent and temporary cost directly, nonparametric machine learning models can be good alternatives in reducing transaction costs by considerably improving prediction performance.
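A rough sketch of comparing two of the named nonparametric regressors on synthetic transaction features follows; the data-generating formula is invented, and the benchmark I-star model is not implemented here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

# Synthetic inputs: normalized order size, volatility, inverse turnover;
# target: realized market impact cost (bps).
rng = np.random.default_rng(7)
X = rng.lognormal(size=(1000, 3))
impact = 0.3 * np.sqrt(X[:, 0]) * X[:, 1] + rng.normal(0, 0.05, 1000)

for name, est in [("SVR", SVR(C=10.0, epsilon=0.01)),
                  ("neural net", MLPRegressor(hidden_layer_sizes=(32, 32),
                                              max_iter=2000, random_state=0))]:
    mae = -cross_val_score(est, X, impact, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name}: MAE = {mae:.4f}")
```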
Benchmarking novel approaches for modelling species range dynamics
Zurell, Damaris; Thuiller, Wilfried; Pagel, Jörn; Cabral, Juliano S; Münkemüller, Tamara; Gravel, Dominique; Dullinger, Stefan; Normand, Signe; Schiffers, Katja H.; Moore, Kara A.; Zimmermann, Niklaus E.
2016-01-01
Increasing biodiversity loss due to climate change is one of the most vital challenges of the 21st century. To anticipate and mitigate biodiversity loss, models are needed that reliably project species’ range dynamics and extinction risks. Recently, several new approaches to model range dynamics have been developed to supplement correlative species distribution models (SDMs), but applications clearly lag behind model development. Indeed, no comparative analysis has been performed to evaluate their performance. Here, we build on process-based, simulated data for benchmarking five range (dynamic) models of varying complexity including classical SDMs, SDMs coupled with simple dispersal or more complex population dynamic models (SDM hybrids), and a hierarchical Bayesian process-based dynamic range model (DRM). We specifically test the effects of demographic and community processes on model predictive performance. Under current climate, DRMs performed best, although only marginally. Under climate change, predictive performance varied considerably, with no clear winners. Yet, all range dynamic models improved predictions under climate change substantially compared to purely correlative SDMs, and the population dynamic models also predicted reasonable extinction risks for most scenarios. When benchmarking data were simulated with more complex demographic and community processes, simple SDM hybrids including only dispersal often proved most reliable. Finally, we found that structural decisions during model building can have great impact on model accuracy, but prior system knowledge on important processes can reduce these uncertainties considerably. Our results reassure the clear merit in using dynamic approaches for modelling species’ response to climate change but also emphasise several needs for further model and data improvement. We propose and discuss perspectives for improving range projections through combination of multiple models and for making these approaches operational for large numbers of species. PMID:26872305
NASA Astrophysics Data System (ADS)
Ogawa, Tatsuhiko; Hashimoto, Shintaro; Sato, Tatsuhiko; Niita, Koji
2014-06-01
A new nuclear de-excitation model, intended for accurate simulation of the isomeric transitions of excited nuclei, was incorporated into PHITS and applied to various situations to clarify the impact of the model. The case studies show that precise treatment of gamma de-excitation and consideration of isomer production are important for various applications such as detector performance prediction, radiation shielding calculations and the estimation of radioactive inventory including isomers.
FPGA implementation of predictive degradation model for engine oil lifetime
NASA Astrophysics Data System (ADS)
Idros, M. F. M.; Razak, A. H. A.; Junid, S. A. M. Al; Suliman, S. I.; Halim, A. K.
2018-03-01
This paper presents the implementation of a linear regression model for degradation prediction at the Register Transfer Logic (RTL) level using Quartus II. A stationary model was identified in the time series of the engine-oil degradation trend of a vehicle. For the RTL implementation, the degradation model is written in Verilog HDL, and the input data are sampled at fixed times. A clock divider was designed to support the timing sequence of the input data. For every five data points, a regression analysis is applied to determine the slope variation and compute the prediction. Only negative slope values are considered for prediction purposes, which reduces the number of logic gates. The least-squares method is adopted to obtain the best linear model based on the mean values of the time series data. The coded algorithm has been implemented on an FPGA for validation purposes. The result is the predicted time at which the engine oil should be changed.
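For a five-sample window, the least-squares slope reduces to a dot product with small fixed integer weights, which is what makes the computation cheap in RTL. The Python sketch below mirrors that arithmetic; the readings and the depletion rule are illustrative assumptions.

```python
import numpy as np

def ls_slope(window):
    """Least-squares slope of 5 equally spaced samples.

    For x = 0..4 the slope is sum((x - 2) * y) / 10, i.e. a dot product
    with integer weights followed by one constant scaling, so no general
    division is needed in hardware.
    """
    w = np.array([-2, -1, 0, 1, 2])
    return float(w @ np.asarray(window)) / 10.0

# Hypothetical oil-quality readings arriving at fixed intervals
readings = [100.0, 99.2, 98.1, 97.5, 96.4]
slope = ls_slope(readings)
if slope < 0:                           # only degradation (negative slope)
    remaining = readings[-1] / -slope   # triggers the lifetime prediction
    print(f"estimated samples until depletion: {remaining:.0f}")
```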
Contaminant dispersal in bounded turbulent shear flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, J.M.; Bernard, P.S.; Chiang, K.F.
The dispersion of smoke downstream of a line source at the wall and at y⁺ = 30 in a turbulent boundary layer has been predicted with a non-local model of the scalar fluxes ūc and v̄c. The predicted plume from the wall source has been compared to high Schmidt number experimental measurements using a combination of hot-wire anemometry to obtain velocity component data synchronously with concentration data obtained optically. The predicted plumes from the source at y⁺ = 30 and at the wall also have been compared to a low Schmidt number direct numerical simulation. Near the source, the non-local flux models give considerably better predictions than models which account solely for mean gradient transport. At a sufficient distance downstream, the gradient models give reasonably good predictions.
Consideration of VT5 etch-based OPC modeling
NASA Astrophysics Data System (ADS)
Lim, ChinTeong; Temchenko, Vlad; Kaiser, Dieter; Meusel, Ingo; Schmidt, Sebastian; Schneider, Jens; Niehoff, Martin
2008-03-01
Including etch-based empirical data during OPC model calibration is a desired yet controversial decision for OPC modeling, especially for a process with large litho-to-etch biasing. While many OPC software tools provide this functionality nowadays, few etch models have been implemented in manufacturing due to various risk considerations, such as compromises in resist and optical effects prediction, etch model accuracy, or even runtime. The conventional method of applying rule-based corrections alongside a resist model is popular but requires lengthy code generation to provide a leaner OPC input. This work discusses the risk factors and their considerations, together with an introduction of the techniques used for Mentor Calibre VT5 etch-based modeling at the sub-90 nm technology node. Various strategies are discussed with the aim of better handling large etch-bias offsets without adding complexity to the final OPC package. Finally, results are presented to assess the advantages and limitations of the final method chosen.
Predicting lettuce canopy photosynthesis with statistical and neural network models
NASA Technical Reports Server (NTRS)
Frick, J.; Precetti, C.; Mitchell, C. A.
1998-01-01
An artificial neural network (NN) and a statistical regression model were developed to predict canopy photosynthetic rates (Pn) for 'Waldman's Green' leaf lettuce (Lactuca sativa L.). All data used to develop and test the models were collected for crop stands grown hydroponically and under controlled-environment conditions. In the NN and regression models, canopy Pn was predicted as a function of three independent variables: shoot-zone CO2 concentration (600 to 1500 µmol·mol⁻¹), photosynthetic photon flux (PPF) (600 to 1100 µmol·m⁻²·s⁻¹), and canopy age (10 to 20 days after planting). The models were used to determine the combinations of CO2 and PPF setpoints required each day to maintain maximum canopy Pn. The statistical model (a third-order polynomial) predicted Pn more accurately than the simple NN (a three-layer, fully connected net). Over an 11-day validation period, the average percent difference between predicted and actual Pn was 12.3% and 24.6% for the statistical and NN models, respectively. Both models lost considerable accuracy when used to determine relatively long-range Pn predictions (≥ 6 days into the future).
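The structure of the statistical model, a third-order polynomial in the three inputs that is scanned over candidate setpoints to maximise predicted Pn, can be sketched as follows; the training data here are synthetic, not the original chamber measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Synthetic training data: CO2, PPF, canopy age -> canopy Pn
rng = np.random.default_rng(8)
X = np.column_stack([rng.uniform(600, 1500, 200),   # CO2 (umol/mol)
                     rng.uniform(600, 1100, 200),   # PPF (umol m^-2 s^-1)
                     rng.uniform(10, 20, 200)])     # age (days)
pn = (5 + 0.004 * X[:, 0] + 0.01 * X[:, 1]
      - 0.002 * (X[:, 2] - 15) ** 2 + rng.normal(0, 0.5, 200))

# Third-order polynomial regression in the three inputs
poly = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
poly.fit(X, pn)

# Scan CO2/PPF setpoints for a given canopy age; pick the combination
# with the maximum predicted Pn.
co2, ppf = np.meshgrid(np.linspace(600, 1500, 30), np.linspace(600, 1100, 30))
grid = np.column_stack([co2.ravel(), ppf.ravel(), np.full(co2.size, 15.0)])
best_setpoints = grid[np.argmax(poly.predict(grid))]
```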
Spatial predictive mapping using artificial neural networks
NASA Astrophysics Data System (ADS)
Noack, S.; Knobloch, A.; Etzold, S. H.; Barth, A.; Kallmeier, E.
2014-11-01
The modelling or prediction of complex geospatial phenomena (like the formation of geo-hazards) is one of the most important tasks for geoscientists. But in practice it faces various difficulties, caused mainly by the complexity of the relationships between the phenomena themselves and the controlling parameters, as well as by limitations of our knowledge about the underlying physical/mathematical relationships and by restrictions on the accuracy and availability of data. In this situation, methods of artificial intelligence such as artificial neural networks (ANNs) offer a meaningful alternative modelling approach to exact mathematical modelling. In the past, the application of ANN technologies in geosciences was primarily limited by the difficulty of integrating them into geo-data processing algorithms. Against this background, the software advangeo® was developed to provide GIS users with a powerful tool for using ANNs for prediction mapping and data preparation within their standard ESRI ArcGIS environment. In many case studies, such as land use planning, geo-hazards analysis and prevention, mineral potential mapping, and agriculture and forestry, advangeo® has shown its capabilities and strengths. The approach is able to add considerable value to existing data.
Comparison of simulator fidelity model predictions with in-simulator evaluation data
NASA Technical Reports Server (NTRS)
Parrish, R. V.; Mckissick, B. T.; Ashworth, B. R.
1983-01-01
A full-factorial in-simulator experiment on a single-axis, multiloop, compensatory pitch-tracking task is described. The experiment was conducted to provide data to validate extensions to an analytic, closed-loop model of a real-time digital simulation facility. The results of the experiment, encompassing various simulation fidelity factors such as visual delay, digital integration algorithms, computer iteration rates, control loading bandwidths and proprioceptive cues, and g-seat kinesthetic cues, are compared with predictions obtained from the analytic model incorporating an optimal control model of the human pilot. The in-simulator results demonstrate more sensitivity to the g-seat and to the control loader conditions than was predicted by the model. However, the model predictions are generally upheld, although the predicted magnitudes of the states and of the error terms are sometimes off considerably. Of particular concern is the large sensitivity difference for one control loader condition, as well as the model/in-simulator mismatch in the magnitude of the plant states when the other states match.
Systems Toxicology of Embryo Development (9th Copenhagen Workshop)
An important consideration for predictive toxicology is to identify developmental hazards utilizing mechanism-based in vitro assays (e.g., ToxCast) and in silico multiscale models. Steady progress has been made with agent-based models that recapitulate morphogenetic drivers for a...
PREDICTING ER BINDING AFFINITY FOR EDC RANKING AND PRIORITIZATION: MODEL I
A Common Reactivity Pattern (COREPA) model, based on consideration of multiple energetically reasonable conformations of flexible chemicals was developed using a training set of 232 rat estrogen receptor (rER) relative binding affinity (RBA) measurements. The training set include...
Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim
2017-06-01
As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpies from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent sigmoid architecture to predict biomasses' higher heating values from only their proximate analyses, requiring minimal specificity as compared to models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square (0.375), mean absolute (0.328), and mean bias errors (0.010) than other models presented in the literature which, at least when applied to the present data set, tend to under-predict the combustion enthalpy. Copyright © 2017 Elsevier Ltd. All rights reserved.
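The 3-3-1 tangent-sigmoid architecture maps directly onto a single-hidden-layer network; the sketch below uses scikit-learn with synthetic proximate-analysis data (the coefficients generating the targets are invented, not fitted literature values).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic proximate analyses (wt%): fixed carbon, ash, moisture -> HHV (MJ/kg)
rng = np.random.default_rng(9)
X = np.column_stack([rng.uniform(10, 25, 150),   # fixed carbon
                     rng.uniform(1, 20, 150),    # ash
                     rng.uniform(4, 12, 150)])   # moisture
hhv = 16 + 0.35*X[:, 0] - 0.12*X[:, 1] - 0.05*X[:, 2] + rng.normal(0, 0.4, 150)

# 3-3-1 network: 3 inputs, one hidden layer of 3 tanh units, 1 output
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                                 max_iter=5000, random_state=0))
ann.fit(X, hhv)
rmse = float(np.sqrt(np.mean((ann.predict(X) - hhv) ** 2)))
```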
A Method for Predicting Manning Factors in Post Year 2000 Ships
1975-12-01
the automated condition. Related to the problem of model validity is the consideration of the accuracy of the predictions. Linus Pauling ... described his use of the word "stochastic" in the April 1955 American Scientist. According to Pauling, the word is derived from a Greek stem which
Use of model-predicted "transference ratios" is currently under consideration by the US EPA in the formulation of a Secondary National Ambient Air Quality Standard for oxidized nitrogen and oxidized sulfur. This term is an empirical parameter defined for oxidized sulfur (TS) as th...
Predictive modeling of neuroanatomic structures for brain atrophy detection
NASA Astrophysics Data System (ADS)
Hu, Xintao; Guo, Lei; Nie, Jingxin; Li, Kaiming; Liu, Tianming
2010-03-01
In this paper, we present an approach of predictive modeling of neuroanatomic structures for the detection of brain atrophy based on cross-sectional MRI images. The underlying premise of applying predictive modeling for atrophy detection is that brain atrophy is defined as significant deviation of part of the anatomy from what the remaining normal anatomy predicts for that part. The steps of predictive modeling are as follows. The central cortical surface under consideration is reconstructed from the brain tissue map, and Regions of Interest (ROIs) on it are predicted from other reliable anatomies. The vertex pair-wise distance between the predicted vertex and the true one within the abnormal region is expected to be larger than that of vertices in normal brain regions. The change of the white matter/gray matter ratio within a spherical region is used to identify the direction of vertex displacement. In this way, the severity of brain atrophy can be defined quantitatively by the displacements of those vertices. The proposed predictive modeling method has been evaluated by using both simulated atrophies and MRI images of Alzheimer's disease.
Buckling Imperfection Sensitivity of Axially Compressed Orthotropic Cylinders
NASA Technical Reports Server (NTRS)
Schultz, Marc R.; Nemeth, Michael P.
2010-01-01
Structural stability is a major consideration in the design of lightweight shell structures. However, theoretical predictions for geometrically perfect structures often considerably overpredict the buckling loads of inherently imperfect real structures. It is reasonably well understood how shell geometry affects the imperfection sensitivity of axially compressed cylindrical shells; however, the effects of shell anisotropy on imperfection sensitivity are less well understood. In the present paper, the development of an analytical model for assessing the imperfection sensitivity of axially compressed orthotropic cylinders is discussed. Results from the analytical model for four shell designs are compared with those from a general-purpose finite-element code, and good qualitative agreement is found. Reasons for discrepancies are discussed, as are potential design implications of this line of research.
Rethinking Indian monsoon rainfall prediction in the context of recent global warming
NASA Astrophysics Data System (ADS)
Wang, Bin; Xiang, Baoqiang; Li, Juan; Webster, Peter J.; Rajeevan, Madhavan N.; Liu, Jian; Ha, Kyung-Ja
2015-05-01
Prediction of Indian summer monsoon rainfall (ISMR) is at the heart of tropical climate prediction. Despite enormous progress having been made in predicting ISMR since 1886, the operational forecasts during recent decades (1989-2012) have little skill. Here we show, with both dynamical and physical-empirical models, that this recent failure is largely due to the models' inability to capture new predictability sources emerging during recent global warming, that is, the development of the central-Pacific El Nino-Southern Oscillation (CP-ENSO), the rapid deepening of the Asian Low and the strengthening of North and South Pacific Highs during boreal spring. A physical-empirical model that captures these new predictors can produce an independent forecast skill of 0.51 for 1989-2012 and a 92-year retrospective forecast skill of 0.64 for 1921-2012. The recent low skills of the dynamical models are attributed to deficiencies in capturing the developing CP-ENSO and anomalous Asian Low. The results reveal a considerable gap between ISMR prediction skill and predictability.
Modelling of pore coarsening in the high burn-up structure of UO2 fuel
NASA Astrophysics Data System (ADS)
Veshchunov, M. S.; Tarasov, V. I.
2017-05-01
The model for coalescence of randomly distributed immobile pores owing to their growth and impingement, applied earlier by the authors to the porosity evolution in the high burn-up structure (HBS) at the UO2 fuel pellet periphery (rim zone), was further developed and validated. Predictions of the original model, which takes into consideration only binary impingements of growing immobile pores, qualitatively correctly describe the decrease of the pore number density with increasing fractional porosity, but notably underestimate the coalescence rate at the high burn-ups attained in the outermost region of the rim zone. In order to overcome this discrepancy, the next approximation of the model, taking into consideration triple impingements of growing pores, was developed. The advanced model provides reasonable agreement with experimental data, thus demonstrating the validity of the proposed pore coarsening mechanism in the HBS.
NASA Astrophysics Data System (ADS)
Rasul, H.; Wu, M.; Olofsson, B.
2017-12-01
Modelling moisture and heat changes in road layers is very important for understanding road hydrology and for constructing and maintaining roads in a sustainable manner. In cold regions, the freezing/thawing process in the partially saturated material of roads makes the modelling task more complicated than a simple model of flow through porous media without consideration of pore freezing and thawing. This study presents a 2-D model simulation of a highway section that considers freezing/thawing and vapor changes. Partial differential equations (PDEs) are used in the formulation of the model. Parameters are optimized from modelling results based on measured data from a test station on the E18 highway near Stockholm. The impact of considering phase change in the modelling is assessed by comparing the modelled soil moisture with TDR-measured data. The results show that the model can be used to predict water and ice content in different layers of the road in different seasons. Parameter sensitivities are analyzed by implementing a calibration strategy. In addition, the phase change consideration is evaluated by comparing the PDE model with an alternative model that ignores freezing/thawing in roads. The PDE model shows high potential for understanding moisture dynamics in the road system.
Language acquisition is model-based rather than model-free.
Wang, Felix Hao; Mintz, Toben H
2016-01-01
Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.
2009-01-01
Background Feed composition has a large impact on the growth of animals, particularly marine fish. We have developed a quantitative dynamic model that can predict the growth and body composition of marine fish for a given feed composition over a timespan of several months. The model takes into consideration the effects of environmental factors, particularly temperature, on growth, and it incorporates detailed kinetics describing the main metabolic processes (protein, lipid, and central metabolism) known to play major roles in growth and body composition. Results For validation, we compared our model's predictions with the results of several experimental studies. We showed that the model gives reliable predictions of growth, nutrient utilization (including amino acid retention), and body composition over a timespan of several months, longer than most of the previously developed predictive models. Conclusion We demonstrate that, despite the difficulties involved, multiscale models in biology can yield reasonable and useful results. The model predictions are reliable over several timescales and in the presence of strong temperature fluctuations, which are crucial factors for modeling marine organism growth. The model provides important improvements over existing models. PMID:19903354
Beck, J D; Weintraub, J A; Disney, J A; Graves, R C; Stamm, J W; Kaste, L M; Bohannan, H M
1992-12-01
The purpose of this analysis is to compare three different statistical models for predicting which children are likely to be at risk of developing dental caries over a 3-yr period. Data are based on 4117 children who participated in the University of North Carolina Caries Risk Assessment Study, a longitudinal study conducted in the Aiken, South Carolina, and Portland, Maine areas. The three models differed with respect to either the types of variables included or the definition of the disease outcome. The two "Prediction" models included both risk factor variables thought to cause dental caries and indicator variables that are associated with dental caries but are not thought to be causal for the disease. The "Etiologic" model included only etiologic factors as variables. A dichotomous outcome measure--none versus any 3-yr increment--was used in the "Any Risk Etiologic Model" and the "Any Risk Prediction Model". Another outcome, based on a gradient measure of disease, was used in the "High Risk Prediction Model". The variables that are significant in these models vary across grades and sites, but are more consistent in the Etiologic models than in the Prediction models. However, among the three sets of models, the Any Risk Prediction Models have the highest sensitivity and positive predictive values, whereas the High Risk Prediction Models have the highest specificity and negative predictive values. Considerations in determining model preference are discussed.
Statistical Modeling and Prediction for Tourism Economy Using Dendritic Neural Network
Yu, Ying; Wang, Yirui; Gao, Shangce; Tang, Zheng
2017-01-01
With the impact of global internationalization, the tourism economy has also developed rapidly. The increasing interest aroused by more advanced forecasting methods led us to innovate in forecasting methodology. In this paper, the seasonal trend autoregressive integrated moving averages with dendritic neural network model (SA-D model) is proposed to perform tourism demand forecasting. First, we use the seasonal trend autoregressive integrated moving averages model (SARIMA model) to exclude the long-term linear trend, and then train the residual data with the dendritic neural network model to make a short-term prediction. As the results in this paper show, the SA-D model can achieve considerably better predictive performance. In order to demonstrate the effectiveness of the SA-D model, we also use the data that other authors used with other models and compare the results. This also proved that the SA-D model achieves good predictive performance in terms of the normalized mean square error, absolute percentage of error, and correlation coefficient. PMID:28246527
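A hedged sketch of the two-stage scheme described above: fit a SARIMA model, then train a neural network on the residuals. The paper's residual learner is a dendritic neuron model; an ordinary MLP is substituted here purely for illustration, and the monthly series is synthetic.

```python
# SARIMA for trend/seasonality, neural network for the residuals.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(120)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)

sarima = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
resid = y - sarima.fittedvalues

# Residual learner on lagged residuals (order-3 autoregression).
lags = 3
Xr = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
yr = resid[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(Xr, yr)

# One-step-ahead forecast = SARIMA forecast + predicted residual correction.
forecast = sarima.forecast(1)[0] + nn.predict(resid[-lags:].reshape(1, -1))[0]
print(forecast)
```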
Development of wavelet-ANN models to predict water quality parameters in Hilo Bay, Pacific Ocean.
Alizadeh, Mohamad Javad; Kavianpour, Mohamad Reza
2015-09-15
The main objective of this study is to apply artificial neural network (ANN) and wavelet-neural network (WNN) models to predicting a variety of ocean water quality parameters. In this regard, several water quality parameters in Hilo Bay, Pacific Ocean, are taken under consideration. Different combinations of water quality parameters are applied as input variables to predict daily values of salinity, temperature and DO, as well as hourly values of DO. The results demonstrate that the WNN models are superior to the ANN models. Also, the hourly models developed for DO prediction outperform the daily DO models. For the daily models, the most accurate model has an R of 0.96, while for the hourly model it reaches 0.98. Overall, the results show the ability of the models to monitor the ocean parameters when data are missing, or when regular measurement and monitoring are impossible. Copyright © 2015 Elsevier Ltd. All rights reserved.
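One common wavelet-neural-network arrangement, sketched under invented data: each input window is decomposed with a discrete wavelet transform and the coefficients become features for an ANN predicting the next value of the target. This is an illustration of the general WNN idea, not the authors' configuration.

```python
# Wavelet coefficients of a sliding window as ANN features.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
series = np.sin(np.arange(500) * 0.1) + rng.normal(0, 0.1, 500)  # stand-in for DO

window = 64
X, y = [], []
for i in range(len(series) - window):
    coeffs = pywt.wavedec(series[i:i + window], "db4", level=3)
    X.append(np.concatenate(coeffs))      # wavelet coefficients as features
    y.append(series[i + window])          # next observation as target
X, y = np.array(X), np.array(y)

wnn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
wnn.fit(X[:350], y[:350])
print("R^2 on held-out data:", wnn.score(X[350:], y[350:]))
```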
Large-area forest inventory regression modeling: spatial scale considerations
James A. Westfall
2015-01-01
In many forest inventories, statistical models are employed to predict values for attributes that are difficult and/or time-consuming to measure. In some applications, models are applied across a large geographic area, which assumes the relationship between the response variable and predictors is ubiquitously invariable within the area. The extent to which this...
NASA Astrophysics Data System (ADS)
Andoh, Masayoshi; Wada, Hiroshi
2004-07-01
The aim of this study was to predict the characteristics of two types of cochlear pressure waves, the so-called fast and slow waves. A two-dimensional finite-element model of the organ of Corti (OC), including fluid-structure interaction with the surrounding lymph fluid, was constructed. The geometry of the OC at the basal turn was determined from morphological measurements made by others in the gerbil hemicochlea. For the mechanical properties of the materials within the OC, previously determined values were adopted, and unknown mechanical features were determined from published measurements of static stiffness. Time advance of the fluid-structure scheme was achieved by a staggered approach. Using the model, the magnitude and phase of the fast and slow waves were predicted by fitting the numerically obtained pressure distribution in the scala tympani to what is known from intracochlear pressure measurements. When the predicted pressure waves were applied to the model, the numerical result for the velocity of the basilar membrane showed good agreement with the experimentally obtained velocity documented by others. Thus, the predicted pressure waves appear to be reliable. Moreover, it was found that the fluid-structure interaction considerably influences the dynamic behavior of the OC at frequencies near the characteristic frequency.
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of the empirical distribution of the errors associated with all instances belonging to the cluster under consideration, and is propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built on in-sample data using the computed prediction limits as targets, and finally this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods for estimating the prediction interval. A new method for evaluating the performance of prediction interval estimation is proposed as well.
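A minimal sketch of the clustering step above, on an invented one-dimensional input: fuzzy c-means partitions the input space, empirical error quantiles are computed per cluster, and each example's interval is the membership-weighted combination of cluster quantiles. The regression step that generalizes the limits out-of-sample is omitted for brevity.

```python
# Fuzzy c-means over the input space, then per-cluster error quantiles.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.abs(X[:, None] - centers[None, :]) + 1e-12   # point-center distances
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)                   # membership grades
        centers = (u ** m * X[:, None]).sum(0) / (u ** m).sum(0)
    return centers, u

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 500)                  # model input
errors = rng.normal(0, 0.2 + 0.1 * x)        # heteroscedastic model errors (invented)

centers, u = fuzzy_cmeans(x)
labels = u.argmax(axis=1)                    # hard assignment, for brevity
qlo = np.array([np.percentile(errors[labels == k], 5) for k in range(3)])
qhi = np.array([np.percentile(errors[labels == k], 95) for k in range(3)])
# Propagate cluster limits to each example via its membership grades.
lower, upper = u @ qlo, u @ qhi
print(lower[:3], upper[:3])
```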
NASA Astrophysics Data System (ADS)
Bombosch, Annette; Zitterbart, Daniel P.; Van Opzeeland, Ilse; Frickenhaus, Stephan; Burkhardt, Elke; Wisz, Mary S.; Boebel, Olaf
2014-09-01
Seismic surveys are frequently a matter of concern regarding their potentially negative impacts on marine mammals. In the Southern Ocean, which provides a critical habitat for several endangered cetacean species, seismic research activities are undertaken at a circumpolar scale. In order to minimize impacts of these surveys, pre-cruise planning requires detailed, spatio-temporally resolved knowledge on the likelihood of encountering these species in the survey area. In this publication we present predictive habitat modelling as a potential tool to support decisions for survey planning. We associated opportunistic sightings (2005-2011) of humpback (Megaptera novaeangliae, N=93) and Antarctic minke whales (Balaenoptera bonaerensis, N=139) with a range of static and dynamic environmental variables. A maximum entropy algorithm (Maxent) was used to develop habitat models and to calculate daily basinwide/circumpolar prediction maps to evaluate how species-specific habitat conditions evolved throughout the spring and summer months. For both species, prediction maps revealed considerable changes in habitat suitability throughout the season. Suitable humpback whale habitat occurred predominantly in ice-free areas, expanding southwards with the retreating sea ice edge, whereas suitable Antarctic minke whale habitat was consistently predicted within sea ice covered areas. Daily, large-scale prediction maps provide a valuable tool to design layout and timing of seismic surveys as they allow the identification and consideration of potential spatio-temporal hotspots to minimize potential impacts of seismic surveys on Antarctic cetacean species.
Extensions to the visual predictive check to facilitate model performance evaluation.
Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert
2008-04-01
The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is addressed by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of, and the information residing in, missing data at each time point are not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage above and below the predicted median at each time point, regardless of the density of the data, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
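A rough numeric sketch of the bootstrap idea behind the BVPC, with synthetic data and without the missing-data accounting: at a single time point the observed median is bootstrapped, and the model-predicted median is weighed against the resulting 5th/50th/95th percentiles.

```python
# Bootstrap the observed median at one time point; compare to prediction.
import numpy as np

rng = np.random.default_rng(4)
observed = rng.lognormal(mean=1.0, sigma=0.4, size=40)   # observations at time t
predicted_median = 2.6                                   # model-predicted median (invented)

boot_medians = np.array([
    np.median(rng.choice(observed, size=len(observed), replace=True))
    for _ in range(2000)
])
p5, p50, p95 = np.percentile(boot_medians, [5, 50, 95])
inside = p5 <= predicted_median <= p95
print(f"bootstrap medians 5/50/95: {p5:.2f}/{p50:.2f}/{p95:.2f}; "
      f"prediction inside 90% band: {inside}")
```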
Predicting bifurcation angle effect on blood flow in the microvasculature.
Yang, Jiho; Pak, Y Eugene; Lee, Tae-Rin
2016-11-01
Since blood viscosity is a basic parameter for understanding hemodynamics in human physiology, a great amount of research has been done in order to accurately predict this highly non-Newtonian flow property. However, previous works lacked consideration of the hemodynamic changes induced by heterogeneous vessel networks. In this paper, the effect of bifurcation on hemodynamics in a microvasculature is quantitatively predicted. The flow resistance in a single bifurcating microvessel was calculated by combining a new simple mathematical model with 3-dimensional flow simulation for varying bifurcation angles under physiological flow conditions. Interestingly, the results indicate that the flow resistance induced by vessel bifurcation holds a constant value of approximately 0.44 over the whole single bifurcation model below a diameter of 60 μm, regardless of geometric parameters including bifurcation angle. Flow solutions computed from this new model showed a substantial decrement in flow velocity relative to other mathematical models that do not include vessel bifurcation effects, while pressure remained the same. Furthermore, when the bifurcation angle effect was applied to the entire microvascular network, the simulation results gave better agreement with recent in vivo experimental measurements. This finding suggests a new paradigm in microvascular blood flow properties: that vessel bifurcation itself, regardless of its angle, holds considerable influence on blood viscosity, and this phenomenon will help to develop new predictive tools in microvascular research. Copyright © 2016 Elsevier Inc. All rights reserved.
DNA methylation-based age prediction from various tissues and body fluids
Jung, Sang-Eun; Shin, Kyoung-Jin; Lee, Hwan Young
2017-01-01
Aging is a natural and gradual process in human life. It is influenced by heredity, environment, lifestyle, and disease. DNA methylation varies with age, and the ability to predict the age of a donor using DNA from evidence materials at a crime scene is of considerable value in forensic investigations. Recently, many studies have reported age prediction models based on DNA methylation from various tissues and body fluids. Those models seem to be very promising because of their high prediction accuracies. In this review, the changes in age-associated DNA methylation and the age prediction models for various tissues and body fluids were examined, and the applicability of the DNA methylation-based age prediction method to forensic investigations was discussed. This will improve understanding of DNA methylation markers and their potential to be used as biomarkers in the forensic field, as well as the clinical field. PMID:28946940
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi; Foudriat, E. C.
1991-01-01
A modeling tool for both analysis and design of distributed systems is discussed. Since many research institutions have access to networks of workstations, the researchers decided to build a tool running on top of the workstations to function as a prototype as well as a distributed simulator for a computing system. The effects of system modeling on performance prediction in distributed systems and the effect of static locking and deadlocks on the performance predictions of distributed transactions are also discussed. While the probability of deadlock is considerably small, its effects on performance could be significant.
Kopp-Schneider, Annette; Prieto, Pilar; Kinsner-Ovaskainen, Agnieszka; Stanzel, Sven
2013-06-01
In the framework of toxicology, a testing strategy can be viewed as a series of steps which are taken to come to a final prediction about a characteristic of a compound under study. The testing strategy is performed either as a single-step procedure, usually called a test battery, which simultaneously uses all information collected on different endpoints, or as a tiered approach in which a decision tree is followed. Design of a testing strategy involves statistical considerations, such as the development of a statistical prediction model. During the EU FP6 ACuteTox project, several prediction models were proposed on the basis of statistical classification algorithms, which we illustrate here. The final choice of testing strategies was not based on statistical considerations alone. However, without thorough statistical evaluations a testing strategy cannot be identified. We present here a number of observations from the statistical viewpoint which relate to the development of testing strategies. The points we make were derived from problems we had to deal with during the evaluation of this large research project. A central issue during the development of a prediction model is the danger of overfitting. Procedures to deal with this challenge are presented. Copyright © 2012 Elsevier Ltd. All rights reserved.
Compaction of North-sea chalk by pore-failure and pressure solution in a producing reservoir
NASA Astrophysics Data System (ADS)
Keszthelyi, Daniel; Dysthe, Dag; Jamtveit, Bjorn
2016-02-01
The Ekofisk field, Norwegian North Sea, is an example of a compacting chalk reservoir with considerable subsequent seafloor subsidence due to petroleum production. Previously, a number of models were created to predict the compaction using different phenomenological approaches. Here we present a different approach: we use a new creep model based on microscopic mechanisms, with no fitting parameters, to predict strain rate at core scale and at reservoir scale. The model is able to reproduce creep experiments and the magnitude of the observed subsidence, making it the first microstructural model which can explain the Ekofisk compaction.
Greg C. Liknes; Christopher W. Woodall; Charles H. Perry
2009-01-01
Climate information frequently is included in geospatial modeling efforts to improve the predictive capability of other data sources. The selection of an appropriate climate data source requires consideration given the number of choices available. With regard to climate data, there are a variety of parameters (e.g., temperature, humidity, precipitation), time intervals...
Liley, Helen; Zhang, Ju; Firth, Elwyn; Fernandez, Justin; Besier, Thor
2017-11-01
Population variance in bone shape is an important consideration when applying the results of subject-specific computational models to a population. In this letter, we demonstrate the ability of partial least squares regression to provide an improved shape prediction of the equine third metacarpal epiphysis, using two easily obtained measurements.
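A hedged sketch of the approach in that letter: partial least squares regression maps a small number of easily obtained measurements to a high-dimensional shape representation (here, flattened landmark coordinates). All data below are synthetic placeholders, not the equine data set.

```python
# PLS regression from two scalar measurements to a landmark-based shape.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n_subjects, n_landmarks = 40, 50
measurements = rng.normal(size=(n_subjects, 2))          # two easy measurements
# Shape = measurement-driven variation + noise (entirely synthetic).
basis = rng.normal(size=(2, n_landmarks * 3))
shapes = measurements @ basis + rng.normal(0, 0.1, (n_subjects, n_landmarks * 3))

pls = PLSRegression(n_components=2).fit(measurements, shapes)
predicted_shape = pls.predict(measurements[:1]).reshape(n_landmarks, 3)
print(predicted_shape.shape)
```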
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of non-coarsening at random (NCAR). In addition, we extend the IPCW approach, as well as the sensitivity analysis, to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, and we thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
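A sketch of one common IPCW c-statistic form on synthetic censored data (not the paper's estimator or its asymptotics): the censoring survival function G(t) is estimated by Kaplan-Meier on the censoring indicator, and comparable pairs (i an observed event, T_i < T_j) are weighted by 1/G(T_i)^2.

```python
# IPCW-weighted concordance on synthetic right-censored data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(6)
n = 300
risk = rng.normal(size=n)                        # higher score = higher risk
event_time = rng.exponential(np.exp(-risk))      # higher risk -> shorter time
censor_time = rng.exponential(1.5, n)
T = np.minimum(event_time, censor_time)
E = (event_time <= censor_time).astype(int)

kmf = KaplanMeierFitter().fit(T, event_observed=1 - E)   # censoring distribution
G = kmf.survival_function_at_times(T).to_numpy()

num = den = 0.0
for i in range(n):
    if E[i] == 0 or G[i] <= 0:                   # only observed events anchor pairs
        continue
    w = 1.0 / G[i] ** 2                          # IPCW weight
    comparable = T > T[i]
    num += w * np.sum(comparable & (risk < risk[i]))     # concordant pairs
    den += w * np.sum(comparable)
print("IPCW c-statistic:", num / den)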
Vibration Response Models of a Stiffened Aluminum Plate Excited by a Shaker
NASA Technical Reports Server (NTRS)
Cabell, Randolph H.
2008-01-01
Numerical models of structural-acoustic interactions are of interest to aircraft designers and the space program. This paper describes a comparison between two energy finite element codes, a statistical energy analysis code, a structural finite element code, and the experimentally measured response of a stiffened aluminum plate excited by a shaker. Different methods for modeling the stiffeners and the power input from the shaker are discussed. The results show that the energy codes (energy finite element and statistical energy analysis) accurately predicted the measured mean square velocity of the plate. In addition, predictions from an energy finite element code had the best spatial correlation with measured velocities. However, predictions from a considerably simpler, single subsystem, statistical energy analysis model also correlated well with the spatial velocity distribution. The results highlight a need for further work to understand the relationship between modeling assumptions and the prediction results.
Marchitti, Satori A.; Fenton, Suzanne E.; Mendola, Pauline; Kenneke, John F.; Hines, Erin P.
2016-01-01
Background: Serum concentrations of polybrominated diphenyl ethers (PBDEs) in U.S. women are believed to be among the world’s highest; however, little information exists on the partitioning of PBDEs between serum and breast milk and how this may affect infant exposure. Objectives: Paired milk and serum samples were measured for PBDE concentrations in 34 women who participated in the U.S. EPA MAMA Study. Computational models for predicting milk PBDE concentrations from serum were evaluated. Methods: Samples were analyzed using gas chromatography isotope-dilution high-resolution mass spectrometry. Observed milk PBDE concentrations were compared with model predictions, and models were applied to NHANES serum data to predict milk PBDE concentrations and infant intakes for the U.S. population. Results: Serum and milk samples had detectable concentrations of most PBDEs. BDE-47 was found in the highest concentrations (median serum: 18.6; milk: 31.5 ng/g lipid) and BDE-28 had the highest milk:serum partitioning ratio (2.1 ± 0.2). No evidence of depuration was found. Models demonstrated high reliability and, as of 2007–2008, predicted U.S. milk concentrations of BDE-47, BDE-99, and BDE-100 appear to be declining but BDE-153 may be rising. Predicted infant intakes (ng/kg/day) were below threshold reference doses (RfDs) for BDE-99 and BDE-153 but above the suggested RfD for BDE-47. Conclusions: Concentrations and partitioning ratios of PBDEs in milk and serum from women in the U.S. EPA MAMA Study are presented for the first time; modeled predictions of milk PBDE concentrations using serum concentrations appear to be a valid method for estimating PBDE exposure in U.S. infants. Citation: Marchitti SA, Fenton SE, Mendola P, Kenneke JF, Hines EP. 2017. Polybrominated diphenyl ethers in human milk and serum from the U.S. EPA MAMA Study: modeled predictions of infant exposure and considerations for risk assessment. Environ Health Perspect 125:706–713; http://dx.doi.org/10.1289/EHP332 PMID:27405099
SEC proton prediction model: verification and analysis.
Balch, C C
1999-06-01
This paper describes a model that has been used at the NOAA Space Environment Center since the early 1970s as a guide for the prediction of solar energetic particle events. The algorithms for proton event probability, peak flux, and rise time are described. The predictions are compared with observations. The current model shows some ability to distinguish between proton event associated flares and flares that are not associated with proton events. The comparisons of predicted and observed peak flux show considerable scatter, with an rms error of almost an order of magnitude. Rise time comparisons also show scatter, with an rms error of approximately 28 h. The model algorithms are analyzed using historical data and improvements are suggested. Implementation of the algorithm modifications reduces the rms error in the log10 of the flux prediction by 21%, and the rise time rms error by 31%. Improvements are also realized in the probability prediction by deriving the conditional climatology for proton event occurrence given flare characteristics.
Reynolds, Gavin K; Campbell, Jacqueline I; Roberts, Ron J
2017-10-05
A new model to predict the compressibility and compactability of mixtures of pharmaceutical powders has been developed. The key aspect of the model is consideration of the volumetric occupancy of each powder under an applied compaction pressure and the respective contribution it then makes to the mixture properties. The compressibility and compactability of three pharmaceutical powders: microcrystalline cellulose, mannitol and anhydrous dicalcium phosphate have been characterised. Binary and ternary mixtures of these excipients have been tested and used to demonstrate the predictive capability of the model. Furthermore, the model is shown to be uniquely able to capture a broad range of mixture behaviours, including neutral, negative and positive deviations, illustrating its utility for formulation design. Copyright © 2017 Elsevier B.V. All rights reserved.
Advances in modeling trait-based plant community assembly.
Laughlin, Daniel C; Laughlin, David E
2013-10-01
In this review, we examine two new trait-based models of community assembly that predict the relative abundance of species from a regional species pool. The models use fundamentally different mathematical approaches and the predictions can differ considerably. Maxent obtains the most even probability distribution subject to community-weighted mean trait constraints. Traitspace predicts low probabilities for any species whose trait distribution does not pass through the environmental filter. Neither model maximizes functional diversity because of the emphasis on environmental filtering over limiting similarity. Traitspace can test for the effects of limiting similarity by explicitly incorporating intraspecific trait variation. The range of solutions in both models could be used to define the range of natural variability of community composition in restoration projects. Copyright © 2013 Elsevier Ltd. All rights reserved.
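A worked numeric sketch of the Maxent idea described in that review: find the most even relative-abundance distribution over a species pool subject to a community-weighted mean (CWM) trait constraint. Trait values and the CWM target are invented for illustration.

```python
# Maximize entropy of relative abundances subject to a CWM trait constraint.
import numpy as np
from scipy.optimize import minimize

traits = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # one trait value per species
cwm_target = 3.6                               # observed community-weighted mean

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return np.sum(p * np.log(p))               # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},        # abundances sum to 1
    {"type": "eq", "fun": lambda p: p @ traits - cwm_target},
]
res = minimize(neg_entropy, x0=np.full(5, 0.2), bounds=[(0, 1)] * 5,
               constraints=constraints, method="SLSQP")
print("predicted relative abundances:", res.x.round(3))
```

The solution tilts abundance toward species whose traits match the constraint, which is exactly the environmental-filtering emphasis the review discusses.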
Nuclear Quadrupole Moments and Nuclear Shell Structure
DOE R&D Accomplishments Database
Townes, C. H.; Foley, H. M.; Low, W.
1950-06-23
Describes a simple model, based on nuclear shell considerations, which leads to the proper behavior of known nuclear quadrupole moments, although predictions of the magnitudes of some quadrupole moments are seriously in error.
Automated Predictive Big Data Analytics Using Ontology Based Semantics.
Nural, Mustafa V; Cotterell, Michael E; Peng, Hao; Xie, Rui; Ma, Ping; Miller, John A
2015-10-01
Predictive analytics in the big data era is taking on an ever more important role. Issues related to the choice of modeling technique, estimation procedure (or algorithm), and efficient execution can present significant challenges. For example, selection of appropriate and optimal models for big data analytics often requires careful investigation and considerable expertise which might not always be readily available. In this paper, we propose to use semantic technology to assist data analysts and data scientists in selecting appropriate modeling techniques and building specific models, as well as the rationale for the techniques and models selected. To formally describe the modeling techniques, models and results, we developed the Analytics Ontology, which supports inferencing for semi-automated model selection. The SCALATION framework, which currently supports over thirty modeling techniques for predictive big data analytics, is used as a testbed for evaluating the use of semantic technology.
A Two-Process Model of Paragraph Development.
ERIC Educational Resources Information Center
Woodson, Linda
Paragraph writing mediated by imagery is richer, more flexible, and more creative than that produced by the somewhat impoverished, predictable, one-process model usually taught in composition classes. Since the writing advice given students differs considerably from the practice of professional writers, students should be given exercises that not…
Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)
Park, J.; Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.
2017-01-01
This paper describes Gaussian process regression (GPR) models presented in the predictive model markup language (PMML). PMML is an extensible-markup-language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
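A brief sketch of the GPR feature the PMML extension exposes: a Gaussian process regressor returning both a mean prediction and a standard deviation, from which confidence bounds follow. The data are synthetic, and the sketch does not generate PMML itself.

```python
# GPR prediction with uncertainty: mean plus 95% confidence bounds.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
X = rng.uniform(0, 10, (30, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 30)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
Xq = np.linspace(0, 10, 5).reshape(-1, 1)
mean, std = gpr.predict(Xq, return_std=True)
print(np.c_[mean, mean - 1.96 * std, mean + 1.96 * std])  # mean, lower, upper
```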
Modelling of a holographic interferometry based calorimeter for radiation dosimetry
NASA Astrophysics Data System (ADS)
Beigzadeh, A. M.; Vaziri, M. R. Rashidian; Ziaie, F.
2017-08-01
In this research work, a model for predicting the behaviour of holographic interferometry based calorimeters for radiation dosimetry is introduced. Using this technique for radiation dosimetry, by measuring the variations of refractive index due to the energy deposited by radiation, has several considerable advantages, such as extreme sensitivity and the ability to work without the normally used temperature sensors that disturb the radiation field. We have shown that the results of our model are in good agreement with the experiments performed by other researchers under the same conditions. This model also reveals that these types of calorimeters have the additional and considerable merit of transforming the dose distribution into a set of discernible interference fringes.
Rapid ionization of the environment of SN 1987A
NASA Technical Reports Server (NTRS)
Raga, A. C.
1987-01-01
It has been suggested by some authors that IUE observations of the supernova SN 1987A show the presence of a strong component of the interstellar C IV 1550 and Si IV 1393 absorption lines at a velocity that approximately corresponds to the velocity of the LMC. It is possible that this component comes from originally neutral (or at least not very highly ionized) gas which has been photoionized by the initially very strong ionizing radiation field of the supernova. Theoretical considerations of this scenario lead to the study of fast (with velocities of about c) ionization fronts. It is shown that for reasonable model parameters it is possible to obtain considerable C IV column densities, in agreement with the IUE observations. On the other hand, the models do not so easily predict the large Si IV column densities that are also obtained from the IUE observations. It is found that only models in which the interstellar medium surrounding SN 1987A is initially composed of already ionized hydrogen and helium predict substantial Si IV column densities. This result provides an interesting prediction of the ionization state of the environment of the presupernova star.
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
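A heavily hedged sketch of one simple way to combine a process-model prediction with statistical covariates; the paper's actual combination and calibration procedure is not reproduced here, and all data, including the extreme-heat term, are invented placeholders.

```python
# Combine a process-model yield prediction with an extreme-heat covariate
# in a linear regression against actual yields (all data synthetic).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(8)
n = 200
process_pred = rng.normal(10, 1.5, n)      # process-model yield prediction (t/ha)
extreme_heat = rng.gamma(2.0, 5.0, n)      # hypothetical extreme-heat exposure term
actual_yield = process_pred - 0.05 * extreme_heat + rng.normal(0, 0.5, n)

X = np.column_stack([process_pred, extreme_heat])
combined = LinearRegression().fit(X, actual_yield)
print("coefficients:", combined.coef_, "R^2:", combined.score(X, actual_yield))
```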
Paulke, Alexander; Proschak, Ewgenij; Sommer, Kai; Achenbach, Janosch; Wunder, Cora; Toennes, Stefan W
2016-03-14
The number of new synthetic psychoactive compounds increases steadily. Among these psychoactive compounds, the synthetic cannabinoids (SCBs) are the most popular and serve as a substitute for herbal cannabis. More than 600 of these substances already exist. For some SCBs the in vitro cannabinoid receptor 1 (CB1) affinity is known, but for the majority it is unknown. A quantitative structure-activity relationship (QSAR) model was developed which allows the determination of an SCB's affinity for CB1 (expressed as the binding constant, Ki) without reference substances. The chemically advanced template search descriptor was used for vector representation of the compound structures. The similarity between two molecules was calculated using the Feature-Pair Distribution Similarity. The Ki values were calculated using the Inverse Distance Weighting method. The prediction model was validated using a cross-validation procedure. The predicted Ki values of some new SCBs ranged from 20 (considerably higher affinity for CB1 than THC) to 468 (considerably lower affinity for CB1 than THC). The present QSAR model can serve as a simple, fast and cheap tool to get a first hint of the biological activity of new synthetic cannabinoids or of other new psychoactive compounds. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
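A minimal sketch of the prediction step described above: the Ki of a query compound is the inverse-distance-weighted mean of training-set Ki values, with descriptor distances standing in for (dis)similarity. Descriptors and Ki values are invented, and this is not the paper's descriptor or similarity measure.

```python
# Inverse distance weighting over descriptor space to predict Ki.
import numpy as np

rng = np.random.default_rng(9)
train_desc = rng.normal(size=(50, 16))     # placeholder descriptor vectors
train_ki = rng.lognormal(3.0, 1.0, 50)     # placeholder training Ki values (nM)
query = rng.normal(size=16)

d = np.linalg.norm(train_desc - query, axis=1) + 1e-12
w = 1.0 / d ** 2                           # inverse-distance weights (power p=2)
ki_pred = np.sum(w * train_ki) / np.sum(w)
print(f"predicted Ki: {ki_pred:.1f} nM")
```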
Measurement and prediction of post-fire erosion at the hillslope scale, Colorado Front Range
Juan de Dios Benavides-Solorio; Lee H. MacDonald
2005-01-01
Post-fire soil erosion is of considerable concern because of the potential decline in site productivity and adverse effects on downstream resources. For the Colorado Front Range there is a paucity of post-fire erosion data and a corresponding lack of predictive models. This study measured hillslope-scale sediment production rates and site characteristics for three wild...
Coupling Radar Rainfall to Hydrological Models for Water Abstraction Management
NASA Astrophysics Data System (ADS)
Asfaw, Alemayehu; Shucksmith, James; Smith, Andrea; MacDonald, Ken
2015-04-01
The impacts of climate change and growing water use are likely to put considerable pressure on water resources and the environment. In the UK, a reform of surface water abstraction policy has recently been proposed which aims to increase the efficiency of using available water resources whilst minimising impacts on the aquatic environment. Key aspects of this reform include the consideration of dynamic rather than static abstraction licensing, as well as the introduction of water trading concepts. Dynamic licensing will permit varying levels of abstraction depending on environmental conditions (i.e. river flow and quality). The practical implementation of an effective dynamic abstraction strategy requires suitable flow forecasting techniques to inform abstraction asset management. Potentially, the predicted availability of water resources within a catchment can be coupled to predicted demand and current storage to inform a cost-effective water resource management strategy which minimises environmental impacts. The aim of this work is to use a historical analysis of a UK case study catchment to compare the potential water resource availability of a modelled dynamic abstraction scenario, informed by a flow forecasting model, against observed abstraction under a conventional abstraction regime. The work also demonstrates the impacts of modelling uncertainties on the accuracy of predicted water availability over a range of forecast lead times. The study utilised the PDM (Probability-Distributed Model) conceptual rainfall-runoff model developed by the Centre for Ecology & Hydrology, set up in the Dove River catchment (UK) using 1 km2 resolution radar rainfall as input and 15 min resolution gauged flow data for calibration and validation. Data assimilation procedures are implemented to improve flow predictions using observed flow data. Uncertainties in the radar rainfall data used in the model are quantified using an artificial statistical error model described by a Gaussian distribution and propagated through the model to assess their influence on the forecast flow uncertainty. Furthermore, the effects of uncertainties at different forecast lead times on potential abstraction strategies are assessed. The results show that over a 10 year period, an average of approximately 70 ML/d of potentially available water was missed in the study catchment under the conventional abstraction regime. This indicates considerable potential for the use of flow forecasting models to effectively implement advanced abstraction management and more efficiently utilise available water resources in the study catchment.
Numerical Calculation Method for Prediction of Ground-borne Vibration near Subway Tunnel
NASA Astrophysics Data System (ADS)
Tsuno, Kiwamu; Furuta, Masaru; Abe, Kazuhisa
This paper describes the development of a prediction method for ground-borne vibration from railway tunnels. Field measurements were carried out in a subway shield tunnel, in the ground, and on the ground surface. The vibration generated in the tunnel was calculated by means of a train/track/tunnel interaction model and was compared with the measurement results. Wave propagation in the ground, in turn, was calculated using an empirical model based on the relationship between frequency and the material damping coefficient α, proposed in order to predict attenuation in the ground with consideration of frequency characteristics. Numerical calculation using 2-dimensional FE analysis was also carried out in this research. The comparison between calculated and measured results shows that the prediction method, comprising the model for train/track/tunnel interaction and that for wave propagation, is applicable to the prediction of train-induced vibration propagating from railway tunnels.
Prediction of aircraft handling qualities using analytical models of the human pilot
NASA Technical Reports Server (NTRS)
Hess, R. A.
1982-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot-induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Initialization and Predictability of a Coupled ENSO Forecast Model
NASA Technical Reports Server (NTRS)
Chen, Dake; Zebiak, Stephen E.; Cane, Mark A.; Busalacchi, Antonio J.
1997-01-01
The skill of a coupled ocean-atmosphere model in predicting ENSO has recently been improved using a new initialization procedure in which initial conditions are obtained from the coupled model, nudged toward observations of wind stress. The previous procedure involved direct insertion of wind stress observations, ignoring model feedback from ocean to atmosphere. The success of the new scheme is attributed to its explicit consideration of ocean-atmosphere coupling and the associated reduction of "initialization shock" and random noise. The so-called spring predictability barrier is eliminated, suggesting that such a barrier is not intrinsic to the real climate system. Initial attempts to generalize the nudging procedure to include SST were not successful; possible explanations are offered. In all experiments forecast skill is found to be much higher for the 1980s than for the 1970s and 1990s, suggesting decadal variations in predictability.
Mrozek, Piotr
2011-08-01
A numerical model explicitly considering the space-charge density evolved both under the mask and in the region of optical structure formation was used to predict the profiles of Ag concentration during field-assisted Ag(+)-Na(+) ion exchange channel waveguide fabrication. The influence of the unequal values of diffusion constants and mobilities of incoming and outgoing ions, the value of a correlation factor (Haven ratio), and particularly space-charge density induced during the ion exchange, on the resulting profiles of Ag concentration was analyzed and discussed. It was shown that the incorporation into the numerical model of a small quantity of highly mobile ions other than exclusively Ag(+) and Na(+) may considerably affect the range and shape of calculated Ag profiles in the multicomponent glass. The Poisson equation was used to predict the electric field spread evolution in the glass substrate. The results of the numerical analysis were verified by the experimental data of Ag concentration in a channel waveguide fabricated using a field-assisted process.
Modeling the impact of spatial relationships on horizontal curve safety.
Findley, Daniel J; Hummer, Joseph E; Rasdorf, William; Zegeer, Charles V; Fowler, Tyler J
2012-03-01
The curved segments of roadways are more hazardous because of the additional centripetal forces exerted on a vehicle, driver expectations, and other factors. The safety of a curve is dependent on various factors, most notably geometric factors, but the location of a curve in relation to other curves is also thought to influence its safety because of a driver's expectation of encountering additional curves. The link between an individual curve's geometric characteristics and its safety performance has been established, but spatial considerations are typically not included in a safety analysis. The spatial considerations included in this research consisted of four components: the distance to adjacent curves, the direction of turn of the adjacent curves, and the radius and length of the adjacent curves. The primary objective of this paper is to quantify the spatial relationship between adjacent horizontal curves and horizontal curve safety using a crash modification factor. Doing so enables a safety professional to more accurately estimate safety, to allocate funding to reduce or prevent future collisions, and to more efficiently design new roadway sections to minimize crash risk where there will be a series of curves along a route. The most important finding from this research is the statistical significance of spatial considerations for the prediction of horizontal curve safety. The distances to adjacent curves were found to be a reliable predictor of observed collisions. This research recommends a model which utilizes spatial considerations for horizontal curve safety prediction in addition to the current Highway Safety Manual prediction capabilities based on individual curve geometric features. Copyright © 2011 Elsevier Ltd. All rights reserved.
Control of Systems With Slow Actuators Using Time Scale Separation
NASA Technical Reports Server (NTRS)
Stepanyan, Vehram; Nguyen, Nhan
2009-01-01
This paper addresses the problem of controlling a nonlinear plant with a slow actuator using singular perturbation method. For the known plant-actuator cascaded system the proposed scheme achieves tracking of a given reference model with considerably less control demand than would otherwise result when using conventional design techniques. This is the consequence of excluding the small parameter from the actuator dynamics via time scale separation. The resulting tracking error is within the order of this small parameter. For the unknown system the adaptive counterpart is developed based on the prediction model, which is driven towards the reference model by the control design. It is proven that the prediction model tracks the reference model with an error proportional to the small parameter, while the prediction error converges to zero. The resulting closed-loop system with all prediction models and adaptive laws remains stable. The benefits of the approach are demonstrated in simulation studies and compared to conventional control approaches.
NASA Astrophysics Data System (ADS)
Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza
2015-08-01
To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s-1). In the theoretical consideration, three moisture transfer models, including the Dincer and Dost model, the Bi-G correlation approach, and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predicted results were then compared with the experimental data, and the highest degree of prediction accuracy was achieved by the Dincer and Dost model.
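An illustrative sketch, not the paper's code: the dimensionless moisture ratio is fitted to the one-term exponential form MR = G exp(-S t) (lag factor G and drying coefficient S) used in Dincer-and-Dost-type analyses. The drying data below are invented.

```python
# Fit lag factor G and drying coefficient S to a synthetic drying curve.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 10 * 3600, 3600.0)   # drying time, s
mr = np.array([1.0, 0.82, 0.66, 0.55, 0.45, 0.37, 0.30, 0.25, 0.21, 0.17])

def model(t, G, S):
    return G * np.exp(-S * t)          # MR = G * exp(-S t)

(G, S), _ = curve_fit(model, t, mr, p0=(1.0, 1e-4))
print(f"lag factor G = {G:.3f}, drying coefficient S = {S:.2e} 1/s")
```

From S and the sample geometry, a moisture diffusivity can then be derived in the Dincer and Dost framework; that step is omitted here.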
Roque, Carlos; Cardoso, João Lourenço
2014-02-01
Crash prediction models play a major role in highway safety analysis. These models can be used for various purposes, such as predicting the number of road crashes or establishing relationships between these crashes and different covariates. However, the appropriate choice for the functional form of these models is generally not discussed in research literature on road safety. In case of run-off-the-road crashes, empirical evidence and logical considerations lead to conclusion that the relationship between expected frequency and traffic flow is not monotonously increasing. Copyright © 2013 Elsevier Ltd. All rights reserved.
Evaluation of Inelastic Constitutive Models for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Kaufman, A.
1983-01-01
The influence of inelastic material models on computed stress-strain states, and therefore predicted lives, was studied for thermomechanically loaded structures. Nonlinear structural analyses were performed on a fatigue specimen which was subjected to thermal cycling in fluidized beds and on a mechanically load cycled benchmark notch specimen. Four incremental plasticity creep models (isotropic, kinematic, combined isotropic-kinematic, combined plus transient creep) were exercised. Of the plasticity models, kinematic hardening gave results most consistent with experimental observations. Life predictions using the computed strain histories at the critical location with a Strainrange Partitioning approach considerably overpredicted the crack initiation life of the thermal fatigue specimen.
Modeling Physiological Processes That Relate Toxicant Exposure and Bacterial Population Dynamics
Klanjscek, Tin; Nisbet, Roger M.; Priester, John H.; Holden, Patricia A.
2012-01-01
Quantifying effects of toxicant exposure on metabolic processes is crucial to predicting microbial growth patterns in different environments. Mechanistic models, such as those based on Dynamic Energy Budget (DEB) theory, can link physiological processes to microbial growth. Here we expand the DEB framework to include explicit consideration of the role of reactive oxygen species (ROS). Extensions considered are: (i) additional terms in the equation for the “hazard rate” that quantifies mortality risk; (ii) a variable representing environmental degradation; (iii) a mechanistic description of toxic effects linked to increase in ROS production and aging acceleration, and to non-competitive inhibition of transport channels; (iv) a new representation of the “lag time” based on energy required for acclimation. We estimate model parameters using calibrated Pseudomonas aeruginosa optical density growth data for seven levels of cadmium exposure. The model reproduces growth patterns for all treatments with a single common parameter set, and bacterial growth for treatments of up to 150 mg(Cd)/L can be predicted reasonably well using parameters estimated from cadmium treatments of 20 mg(Cd)/L and lower. Our approach is an important step towards connecting levels of biological organization in ecotoxicology. The presented model reveals possible connections between processes that are not obvious from purely empirical considerations, enables validation and hypothesis testing by creating testable predictions, and identifies research required to further develop the theory. PMID:22328915
Electrical conductivity modeling and experimental study of densely packed SWCNT networks.
Jack, D A; Yeh, C-S; Liang, Z; Li, S; Park, J G; Fielding, J C
2010-05-14
Single-walled carbon nanotube (SWCNT) networks have become a subject of interest due to their ability to support structural, thermal and electrical loadings, but to date their application has been hindered due, in large part, to the inability to model macroscopic responses in an industrial product with any reasonable confidence. This paper seeks to address the relationship between macroscale electrical conductivity and the nanostructure of a dense network composed of SWCNTs and presents a uniquely formulated physics-based computational model for electrical conductivity predictions. The proposed model constructs the nanostructure from physics-based stochastic parameters for the individual nanotubes: an experimentally obtained orientation distribution function, experimentally derived length and diameter distributions, and assumed distributions of chirality and registry of individual CNTs. Case studies are presented to investigate the relationship between macroscale conductivity and nanostructured variations in the bulk stochastic length, diameter and orientation distributions. Simulation results correspond well with those available in the literature for case studies of conductivity versus length and conductivity versus diameter. In addition, predictions for the increasing anisotropy of the bulk conductivity as a function of the tube orientation distribution are in reasonable agreement with our experimental results. Examples are presented to demonstrate the importance of incorporating various stochastic characteristics in bulk conductivity predictions. Finally, a design consideration for industrial applications is discussed based on localized network power emission considerations; it may lend insight to help the design engineer better predict network failure under high current loading applications.
Comparative Analysis of Hybrid Models for Prediction of BP Reactivity to Crossed Legs.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2017-01-01
Crossing the legs at the knees during BP measurement is one of several physiological stimuli that considerably influence the accuracy of BP measurements. Therefore, it is paramount to develop an appropriate prediction model for interpreting the influence of crossed legs on BP. This research work describes the use of principal component analysis- (PCA-) fused forward stepwise regression (FSWR), artificial neural network (ANN), adaptive neuro fuzzy inference system (ANFIS), and least squares support vector machine (LS-SVM) models for prediction of BP reactivity to crossed legs among normotensive and hypertensive participants. The evaluation of the performance of the proposed prediction models using appropriate statistical indices showed that the PCA-based LS-SVM (PCA-LS-SVM) model has the highest prediction accuracy, with coefficient of determination (R2) = 93.16%, root mean square error (RMSE) = 0.27, and mean absolute percentage error (MAPE) = 5.71 for SBP prediction in normotensive subjects. Furthermore, R2 = 96.46%, RMSE = 0.19, and MAPE = 1.76 for SBP prediction and R2 = 95.44%, RMSE = 0.21, and MAPE = 2.78 for DBP prediction in hypertensive subjects using the PCA-LS-SVM model. This assessment presents the importance and advantages posed by hybrid computing models for the prediction of variables in biomedical research studies.
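A rough sketch of the PCA-fused modelling pipeline is shown below on synthetic data. Scikit-learn has no LS-SVM estimator, so kernel ridge regression, which is closely related to LS-SVM regression, stands in for it; all data, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))  # stand-in physiological/anthropometric features
y = 3*X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.3, size=120)  # stand-in BP reactivity

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=5),
                      KernelRidge(kernel="rbf", alpha=0.1, gamma=0.1))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = mean_squared_error(y_te, pred) ** 0.5
print(f"R2 = {r2_score(y_te, pred):.3f}, RMSE = {rmse:.3f}")
```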
NASA Technical Reports Server (NTRS)
Coats, Timothy W.; Harris, Charles E.
1995-01-01
The durability and damage tolerance of laminated composites are critical design considerations for airframe composite structures. Therefore, the ability to model damage initiation and growth and predict the life of laminated composites is necessary to achieve structurally efficient and economical designs. The purpose of this research is to experimentally verify the application of a continuum damage model to predict progressive damage development in a toughened material system. Damage due to monotonic and tension-tension fatigue was documented for IM7/5260 graphite/bismaleimide laminates. Crack density and delamination surface area were used to calculate matrix cracking and delamination internal state variables to predict stiffness loss in unnotched laminates. A damage dependent finite element code predicted the stiffness loss for notched laminates with good agreement to experimental data. It was concluded that the continuum damage model can adequately predict matrix damage progression in notched and unnotched laminates as a function of loading history and laminate stacking sequence.
NASA Astrophysics Data System (ADS)
Ge, Honghao; Ren, Fengli; Li, Jun; Han, Xiujun; Xia, Mingxu; Li, Jianguo
2017-03-01
A four-phase dendritic model was developed to predict macrosegregation, shrinkage cavity, and porosity during solidification. In this four-phase dendritic model, several important factors, including the dendritic structure of equiaxed crystals, melt convection, crystal sedimentation, nucleation, growth, and shrinkage of solidified phases, were taken into consideration. Furthermore, a modified shrinkage criterion was established to predict the shrinkage porosity (microporosity) of a 55-ton industrial Fe-3.3 wt pct C ingot. The predicted macrosegregation pattern and shrinkage cavity shape are in good agreement with experimental results. The shrinkage cavity has a significant effect on the formation of positive segregation in the hot top region, which generally forms during the last stage of ingot casting. The dendritic equiaxed grains also play an important role in the formation of A-segregation. A three-dimensional laminar structure of A-segregation in an industrial ingot was predicted for the first time by using a 3D simulation.
A constant radius of curvature model for the organization of DNA in toroidal condensates.
Hud, N V; Downing, K H; Balhorn, R
1995-01-01
Toroidal DNA condensates have received considerable attention for their possible relationship to the packaging of DNA in viruses and in general as a model of ordered DNA condensation. A spool-like model has primarily been supported for DNA organization within toroids. However, our observations suggest that the actual organization may be considerably different. We present an alternate model in which DNA for a given toroid is organized within a series of equally sized contiguous loops that precess about the toroid axis. A related model for the toroid formation process is also presented. This kinetic model predicts a distribution of toroid sizes for DNA condensed from solution that is in good agreement with experimental data. PMID:7724602
Low Speed Analysis of Mission Adaptive Flaps on a High Speed Civil Transport Configuration
NASA Technical Reports Server (NTRS)
Lessard, Victor R.
1999-01-01
Thin-layer Navier-Stokes analyses were performed on a high speed civil transport configuration with mission adaptive leading-edge flaps. The flow conditions simulated were Mach = 0.22 and a Reynolds number of 4.27 million for angles-of-attack ranging from 0 to 18 degrees. Two turbulence closure models were used. Analyses were done exclusively with the Baldwin-Lomax turbulence model at low angle-of-attack conditions. At high angles-of-attack, where considerable flow separation and vortices occurred, the Spalart-Allmaras turbulence model was also considered. The effects of flow transition were studied. Predicted aerodynamic forces, moment, and pressure are compared to experimental data obtained in the 14- by 22-Foot Subsonic Tunnel at NASA Langley. The forces and moments correlated well with experimental data in terms of trends. Drag and pitching moment were consistently underpredicted. Predicted surface pressures compared well with experiment at low angles-of-attack. Above 10 degrees angle-of-attack the pressure comparisons were not as favorable. The two turbulence models affected the pressures on the flap considerably, and neither produced correct results at the high angles-of-attack.
A Model of BGA Thermal Fatigue Life Prediction Considering Load Sequence Effects
Hu, Weiwei; Li, Yaqiu; Sun, Yufeng; Mosleh, Ali
2016-01-01
Accurate testing history data is necessary for all fatigue life prediction approaches, but such data is often deficient, especially for microelectronic devices. Additionally, the sequence of the individual load cycles plays an important role in physical fatigue damage. However, most of the existing models based on the linear damage accumulation rule ignore these sequence effects. This paper proposes a thermal fatigue life prediction model for ball grid array (BGA) packages that takes the load sequence effects into consideration. For the purpose of improving the availability and accessibility of testing data, a new failure criterion is discussed and verified by simulation and experimentation. The consequences of sequence load conditions for fatigue life are shown. PMID:28773980
Individualized Prediction of Reading Comprehension Ability Using Gray Matter Volume.
Cui, Zaixu; Su, Mengmeng; Li, Liangjie; Shu, Hua; Gong, Gaolang
2018-05-01
Reading comprehension is a crucial reading skill for learning and putatively contains 2 key components: reading decoding and linguistic comprehension. Current understanding of the neural mechanism underlying these reading comprehension components is lacking, and whether and how neuroanatomical features can be used to predict these 2 skills remain largely unexplored. In the present study, we analyzed a large sample from the Human Connectome Project (HCP) dataset and successfully built multivariate predictive models for these 2 skills using whole-brain gray matter volume features. The results showed that these models effectively captured individual differences in these 2 skills and were able to significantly predict these components of reading comprehension for unseen individuals. The strict cross-validation using the HCP cohort and another independent cohort of children demonstrated the model generalizability. The identified gray matter regions contributing to the skill prediction consisted of a wide range of regions covering the putative reading, cerebellum, and subcortical systems. Interestingly, there were gender differences in the predictive models, with the female-specific model overestimating the males' abilities. Moreover, the identified contributing gray matter regions for the female-specific and male-specific models exhibited considerable differences, supporting a gender-dependent neuroanatomical substrate for reading comprehension.
Basic numerical competences in large-scale assessment data: Structure and long-term relevance.
Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian
2018-03-01
Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.
An analytical approach for predicting pilot induced oscillations
NASA Technical Reports Server (NTRS)
Hess, R. A.
1981-01-01
The optimal control model (OCM) of the human pilot is applied to the study of aircraft handling qualities. Attention is focused primarily on longitudinal tasks. The modeling technique differs from previous applications of the OCM in that considerable effort is expended in simplifying the pilot/vehicle analysis. After briefly reviewing the OCM, a technique for modeling the pilot controlling higher order systems is introduced. Following this, a simple criterion for determining the susceptibility of an aircraft to pilot induced oscillations (PIO) is formulated. Finally, a model-based metric for pilot rating prediction is discussed. The resulting modeling procedure provides a relatively simple, yet unified approach to the study of a variety of handling qualities problems.
Critical Zone Architecture and the Last Glacial Legacy in Unglaciated North America
NASA Astrophysics Data System (ADS)
Marshall, J. A.; Roering, J. J.; Rempel, A. W.; Bartlein, P. J.; Merritts, D. J.; Walter, R. C.
2015-12-01
As fresh bedrock is exhumed into the Critical Zone and intersects with water and life, rock attributes controlling geochemical reactions, hydrologic routing, accommodation space for roots, surface area, and the mobile fraction of regolith are set not just by present-day processes, but are predicated on the 'ghosts' of past processes embedded in the subsurface architecture. Easily observable modern ecosystem processes such as tree throw can erase the past and bias our interpretation of landscape evolution. Abundant paleoenvironmental records demonstrate that unglaciated regions experienced profound climate changes through the late Pleistocene-Holocene transition, but studies quantifying how environmental variables affect erosion and weathering rates in these settings often marginalize or even forego consideration of the role of past climate regimes. Here we combine seven downscaled Last Glacial Maximum (LGM) paleoclimate reconstructions with a state-of-the-art frost cracking model to explore frost weathering potential across the North American continent 21 ka. We analyze existing evidence of LGM periglacial processes and features to better constrain frost weathering model predictions. All seven models predict frost cracking across a large swath to the west of the Continental Divide, with the southernmost extent at ~ latitude 35° N, increasing in latitude towards the buffering influence of the Pacific Ocean. All models predict significant frost cracking in the unglaciated Rocky Mountains. To the east of the Continental Divide, model results diverge more, but all predict regions with LGM temperatures too cold for significant frost cracking (mean annual temperatures < -15 °C), corroborated by observations of permafrost relics such as ice wedges in some areas. Our results provide a framework for coupling paleoclimate reconstructions with a predictive frost weathering model and, importantly, suggest that modeling modern Critical Zone process evolution may require a consideration of vastly different processes active when rock was first exhumed into the Critical Zone reactor.
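The sketch below illustrates, under strong simplifying assumptions, the kind of frost-weathering calculation involved: a sinusoidal annual surface temperature, the standard damped conduction solution with depth, and a count of time spent in a frost-cracking window (commonly taken as roughly -8 to -3 °C in the literature). Published frost cracking models additionally weight by the thermal gradient and track water availability, so this is only a caricature; all parameter values are illustrative.

```python
import numpy as np

def frost_cracking_proxy(mat_c, amplitude_c, kappa=1e-6, window=(-8.0, -3.0)):
    """Days per year each depth spends inside the frost-cracking window,
    using the damped annual temperature wave
        T(z,t) = MAT + A * exp(-z/z*) * sin(w t - z/z*),  z* = sqrt(2*kappa/w).
    (Real frost cracking intensity models also weight by |dT/dz|.)"""
    w = 2.0 * np.pi / (365.0 * 86400.0)          # annual frequency, rad/s
    z_star = np.sqrt(2.0 * kappa / w)            # damping depth, m
    t = np.linspace(0.0, 365.0 * 86400.0, 2000)  # one year of time samples
    depths = np.linspace(0.0, 5.0, 51)           # m
    fci = []
    for z in depths:
        temp = mat_c + amplitude_c * np.exp(-z / z_star) * np.sin(w * t - z / z_star)
        in_window = (temp > window[0]) & (temp < window[1])
        fci.append(in_window.mean() * 365.0)     # days/yr in window
    return depths, np.array(fci)

depths, fci = frost_cracking_proxy(mat_c=-2.0, amplitude_c=12.0)
print(f"peak: {fci.max():.0f} d/yr at z = {depths[fci.argmax()]:.1f} m")
```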
Link prediction measures considering different neighbors’ effects and application in social networks
NASA Astrophysics Data System (ADS)
Luo, Peng; Wu, Chong; Li, Yongli
Link prediction measures have attracted particular attention in the field of mathematical physics. In this paper, we consider the different effects of neighbors in link prediction and focus on four different situations: considering only the individual's own effects; considering the effects of the individual, neighbors, and neighbors' neighbors; considering the effects of the individual and of neighbors out to four degrees of separation; and considering the effects of all network participants. Then, according to the four situations, we present our link prediction models, which also take the effects of social characteristics into consideration. An artificial network is adopted to illustrate the parameter estimation based on logistic regression. Furthermore, we compare our methods with some other link prediction methods (LPMs) to examine the validity of our proposed models in online social networks. The results show the superiority of our proposed link prediction methods over the alternatives. In the application part, our models are applied to study social network evolution and to recommend friends and cooperators in social networks.
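The classical neighborhood-based scores that such models build on are easy to compute with networkx; the sketch below evaluates three of them on a stand-in social network. The paper's own models additionally weight deeper neighbor shells and social characteristics via logistic regression, which is not reproduced here.

```python
import networkx as nx

G = nx.karate_club_graph()      # stand-in social network
candidates = [(0, 9), (1, 33)]  # hypothetical non-adjacent node pairs

for u, v in candidates:
    cn = len(list(nx.common_neighbors(G, u, v)))        # shared neighbors
    aa = next(nx.adamic_adar_index(G, [(u, v)]))[2]     # down-weights hubs
    jc = next(nx.jaccard_coefficient(G, [(u, v)]))[2]   # normalized overlap
    print(f"({u},{v}): common={cn}, adamic_adar={aa:.3f}, jaccard={jc:.3f}")
```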
Du, Lihong; White, Robert L
2009-02-01
A previously proposed partition equilibrium model for quantitative prediction of analyte response in electrospray ionization mass spectrometry is modified to yield an improved linear relationship. Analyte mass spectrometer response is modeled by a competition mechanism between analyte and background electrolytes that is based on partition equilibrium considerations. The correlation between analyte response and solution composition is described by the linear model over a wide concentration range and the improved model is shown to be valid for a wide range of experimental conditions. The behavior of an analyte in a salt solution, which could not be explained by the original model, is correctly predicted. The ion suppression effects of 16:0 lysophosphatidylcholine (LPC) on analyte signals are attributed to a combination of competition for excess charge and reduction of total charge due to surface tension effects. In contrast to the complicated mathematical forms that comprise the original model, the simplified model described here can more easily be employed to predict analyte mass spectrometer responses for solutions containing multiple components. Copyright (c) 2008 John Wiley & Sons, Ltd.
A comparison of different functions for predicted protein model quality assessment.
Li, Juan; Fang, Huisheng
2016-07-01
In protein structure prediction, a considerable number of models are usually produced by either the Template-Based Method (TBM) or the ab initio prediction. The purpose of this study is to find the critical parameter in assessing the quality of the predicted models. A non-redundant template library was developed and 138 target sequences were modeled. The target sequences were all distant from the proteins in the template library and were aligned with template library proteins on the basis of the transformation matrix. The quality of each model was first assessed with QMEAN and its six parameters, which are C_β interaction energy (C_beta), all-atom pairwise energy (PE), solvation energy (SE), torsion angle energy (TAE), secondary structure agreement (SSA), and solvent accessibility agreement (SAE). Finally, the alignment score (score) was also used to assess the quality of model. Hence, a total of eight parameters (i.e., QMEAN, C_beta, PE, SE, TAE, SSA, SAE, score) were independently used to assess the quality of each model. The results indicate that SSA is the best parameter to estimate the quality of the model.
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
NASA Astrophysics Data System (ADS)
Faizan-Ur-Rab, M.; Zahiri, S. H.; Masood, S. H.; Jahedi, M.; Nagarajah, R.
2017-06-01
This study presents the validation of a three-dimensional multicomponent model for the cold spray process using two particle image velocimetry (PIV) experiments. The k-ε type 3D model developed for spherical titanium particles was validated against the measured titanium particle velocity within nitrogen and helium supersonic jets. The 3D model predicted lower values of particle velocity than the PIV experimental study that used irregularly shaped titanium particles. The results of the 3D model were consistent with the PIV experiment that used spherical titanium powder. The 3D model simulation of particle velocity within the helium and nitrogen jets was coupled with an estimation of titanium particle temperature, in consideration of the fact that cold spray particle temperature is difficult and expensive to measure owing to the considerably lower temperature of the particles compared with thermal spray. The model predicted an interesting pattern of particle size distribution with respect to the location of impact, with a concentration of finer particles close to the jet center. It is believed that the 3D model outcomes for particle velocity, temperature and location could be a useful tool to optimize the system design, deposition process and mechanical properties of additively manufactured cold spray structures.
Development of finite element models to predict dynamic bridge response.
DOT National Transportation Integrated Search
1997-10-01
Dynamic response has long been recognized as one of the significant factors affecting the service life and safety of bridge structures. Even though considerable research, both analytical and experimental, has been devoted to dynamic bridge behavior, ...
Ruiz-Navarro, Ana; Gillingham, Phillipa K; Britton, J Robert
2016-09-01
Predictions of species responses to climate change often focus on distribution shifts, although responses can also include shifts in body sizes and population demographics. Here, shifts in the distributional ranges ('climate space'), body sizes (as maximum theoretical body sizes, L∞) and growth rates (as rate at which L∞ is reached, K) were predicted for five fishes of the Cyprinidae family in a temperate region over eight climate change projections. Great Britain was the model area, and the model species were Rutilus rutilus, Leuciscus leuciscus, Squalius cephalus, Gobio gobio and Abramis brama. Ensemble models predicted that the species' climate spaces would shift in all modelled projections, with the most drastic changes occurring under high emissions; all range centroids shifted in a north-westerly direction. Predicted climate space expanded for R. rutilus and A. brama, contracted for S. cephalus, and for L. leuciscus and G. gobio, expanded under low-emission scenarios but contracted under high emissions, suggesting the presence of some climate-distribution thresholds. For R. rutilus, A. brama, S. cephalus and G. gobio, shifts in their climate space were coupled with predicted shifts to significantly smaller maximum body sizes and/or faster growth rates, aligning strongly to aspects of temperature-body size theory. These predicted shifts in L∞ and K had considerable consequences for size-at-age per species, suggesting substantial alterations in population age structures and abundances. Thus, when predicting climate change outcomes for species, outputs that couple shifts in climate space with altered body sizes and growth rates provide considerable insights into the population and community consequences, especially for species that cannot easily track their thermal niches. © 2016 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
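The body-size and growth-rate outputs referred to here (L∞ and K) are the parameters of the von Bertalanffy growth function, which can be evaluated directly; the sketch below compares size-at-age under a baseline and a hypothetical warmer-scenario parameter pair. The numbers are illustrative, not the study's fitted values.

```python
import numpy as np

def von_bertalanffy_length(age, l_inf, k, t0=0.0):
    """von Bertalanffy growth: length-at-age from the asymptotic maximum
    size L_inf and the rate K at which L_inf is approached."""
    return l_inf * (1.0 - np.exp(-k * (age - t0)))

# Illustrative comparison: warmer scenario with smaller L_inf, larger K,
# consistent with temperature-body size theory
ages = np.arange(0, 11)
baseline = von_bertalanffy_length(ages, l_inf=40.0, k=0.25)
warmer = von_bertalanffy_length(ages, l_inf=34.0, k=0.32)
print(np.round(baseline - warmer, 1))  # size-at-age difference, cm
```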
NASA Astrophysics Data System (ADS)
Khan, Irfan; Costeux, Stephane; Bunker, Shana; Moore, Jonathan; Kar, Kishore
2012-11-01
Nanocellular porous materials present unusual optical, dielectric, thermal and mechanical properties and are thus envisioned to find use in a variety of applications. Thermoplastic polymeric foams show considerable promise in achieving these properties. However, there are still considerable challenges in achieving nanocellular foams with densities as low as those of conventional foams. A lack of in-depth understanding of the effect of process parameters and physical properties on the foaming process is a major obstacle. A numerical model has been developed to simulate the simultaneous nucleation and bubble growth during depressurization of thermoplastic polymers saturated with supercritical blowing agents. The model is based on the popular "Influence Volume Approach," which assumes that a growing boundary layer with depleted blowing agent surrounds each bubble. Classical nucleation theory is used to predict the rate of nucleation of bubbles. By solving the mass balance, momentum balance and species conservation equations for each bubble, the model is capable of predicting average bubble size, bubble size distribution and bulk porosity. The model is modified to include mechanisms for Joule-Thomson cooling during depressurization and secondary foaming. Simulation results for polymers with and without nucleating agents will be discussed and compared with experimental data.
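A minimal sketch of the classical nucleation theory rate mentioned above is given below, in a common homogeneous-bubble-nucleation form; the exact prefactor used in the paper's model may differ, and all property values are illustrative stand-ins for a CO2-saturated polymer.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def cnt_nucleation_rate(n0, sigma, m, temp, dp):
    """Classical nucleation theory rate for homogeneous bubble nucleation:
        J = N0 * sqrt(2*sigma/(pi*m)) * exp(-16*pi*sigma^3 / (3*kB*T*dP^2))
    n0: dissolved gas number density (1/m^3), sigma: surface tension (N/m),
    m: gas molecular mass (kg), dp: pressure drop on depressurization (Pa)."""
    barrier = 16.0 * math.pi * sigma**3 / (3.0 * KB * temp * dp**2)
    return n0 * math.sqrt(2.0 * sigma / (math.pi * m)) * math.exp(-barrier)

# Note the extreme sensitivity of J to sigma and dP via the exponential
print(cnt_nucleation_rate(n0=1e27, sigma=0.012, m=7.3e-26, temp=300.0, dp=2e7))
```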
Life prediction technologies for aeronautical propulsion systems
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.
1987-01-01
Fatigue and fracture problems continue to occur in aeronautical gas turbine engines. Components whose useful life is limited by these failure modes include turbine hot-section blades, vanes and disks. Safety considerations dictate that catastrophic failures be avoided, while economic considerations dictate that noncatastrophic failures occur as infrequently as possible. The design decision is therefore in making the tradeoff between engine performance and durability. The NASA Lewis Research Center has contributed to the aeropropulsion industry in the areas of life prediction technology for 30 years, developing creep and fatigue life prediction methodologies for hot-section materials. Emphasis is placed on the development of methods capable of handling both thermal and mechanical fatigue under severe environments. Recent accomplishments include the development of more accurate creep-fatigue life prediction methods such as the total strain version of Lewis' Strainrange Partitioning (SRP) and the HOST-developed Cyclic Damage Accumulation (CDA) model. Other examples include the Double Damage Curve Approach (DDCA), which provides greatly improved accuracy for cumulative fatigue design rules.
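To illustrate the sequence sensitivity that these damage-curve methods capture, the sketch below implements the two-level Manson-Halford Damage Curve Approach, the nonlinear rule blended into the DDCA; the 0.4 exponent is the commonly quoted literature value, and the cycle lives are hypothetical.

```python
def remaining_life_fraction_dca(n1, N1, N2, exponent=0.4):
    """Manson-Halford Damage Curve Approach for two-level loading:
    remaining life fraction at level 2 after n1 of N1 cycles at level 1,
        n2/N2 = 1 - (n1/N1) ** ((N1/N2) ** exponent)
    Unlike Miner's linear rule, the result depends on load ordering."""
    return 1.0 - (n1 / N1) ** ((N1 / N2) ** exponent)

# High-to-low loading: half the life consumed at a severe level (N1 = 1e3),
# then switching to a mild level (N2 = 1e5). Miner's rule would leave 0.5.
print(remaining_life_fraction_dca(n1=500, N1=1e3, N2=1e5))  # ~0.10, far below 0.5
print(1 - 500 / 1e3)                                        # Miner's rule: 0.50
```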
Valerio, Laura; North, Ace; Collins, C. Matilda; Mumford, John D.; Facchinelli, Luca; Spaccapelo, Roberta; Benedict, Mark Q.
2016-01-01
The persistence of transgenes in the environment is a consideration in risk assessments of transgenic organisms. Combining mathematical models that predict the frequency of transgenes and experimental demonstrations can validate the model predictions, or can detect significant biological deviations that were neither apparent nor included as model parameters. In order to assess the correlation between predictions and observations, models were constructed to estimate the frequency of a transgene causing male sexual sterility in simulated populations of a malaria mosquito Anopheles gambiae that were seeded with transgenic females at various proportions. Concurrently, overlapping-generation laboratory populations similar to those being modeled were initialized with various starting transgene proportions, and the subsequent proportions of transgenic individuals in populations were determined weekly until the transgene disappeared. The specific transgene being tested contained a homing endonuclease gene expressed in testes, I-PpoI, that cleaves the ribosomal DNA and results in complete male sexual sterility with no effect on female fertility. The transgene was observed to disappear more rapidly than the model predicted in all cases. The period before ovipositions that contained no transgenic progeny ranged from as little as three weeks after cage initiation to as long as 11 weeks. PMID:27669312
Airloads, wakes, and aeroelasticity
NASA Technical Reports Server (NTRS)
Johnson, Wayne
1990-01-01
Fundamental considerations regarding the theory of modeling of rotary wing airloads, wakes, and aeroelasticity are presented. The topics covered are: airloads and wakes, including lifting-line theory, wake models and nonuniform inflow, free wake geometry, and blade-vortex interaction; aerodynamic and wake models for aeroelasticity, including two-dimensional unsteady aerodynamics and dynamic inflow; and airloads and structural dynamics, including comprehensive airload prediction programs. Results of calculations and correlations are presented.
Dry Chemical Development - A Model for the Extinction of Hydrocarbon Flames.
1984-02-08
...and predicts the suppression effectiveness of a wide variety of gaseous, liquid, and solid agents. The flame extinguishment model is based on the... generalized by consideration of all endothermic reaction sinks, e.g., vaporization, dissociation, and decomposition. The general equation correlates... Various fire-extinguishing agents are carried on board Navy ships to control
Kondjoyan, Alain; Oillic, Samuel; Portanguen, Stéphane; Gros, Jean-Bernard
2013-10-01
A heat transfer model was used to simulate the temperature in three dimensions inside the meat. This model was combined with first-order kinetic models to predict cooking losses. Identification of the parameters of the kinetic models and first validations were performed in a water bath. Afterwards, the performance of the combined model was determined in a fan-assisted oven under different air/steam conditions. Accurate knowledge of the heat transfer coefficient values and consideration of the retraction of the meat pieces are needed for the prediction of meat temperature. This is important since the temperature at the center of the product is often used to determine the cooking time. The combined model was also able to predict cooking losses from meat pieces of different sizes subjected to different air/steam conditions. It was found that, under the studied conditions, most of the water loss comes from the juice expelled by protein denaturation and contraction and not from evaporation. Copyright © 2013 Elsevier Ltd. All rights reserved.
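A minimal sketch of coupling a temperature history to first-order loss kinetics is shown below, assuming an Arrhenius-type rate constant and explicit Euler integration; the rate parameters, equilibrium water content, and temperature history are all stand-ins, not the values identified in this work.

```python
import numpy as np

def cooking_loss(times_s, temps_c, k_ref=1e-3, t_ref_c=80.0,
                 ea_over_r=1.1e4, w_eq=0.65):
    """First-order kinetics for cumulative cooking loss driven by the local
    temperature history, dW/dt = -k(T) * (W - W_eq), with an Arrhenius-type
    rate constant referenced at t_ref_c. Explicit Euler integration."""
    w = 1.0  # normalized initial water content
    for dt, temp in zip(np.diff(times_s), temps_c[1:]):
        k = k_ref * np.exp(-ea_over_r * (1.0/(temp + 273.15) - 1.0/(t_ref_c + 273.15)))
        w -= k * (w - w_eq) * dt
    return 1.0 - w  # fraction of initial water lost

t = np.linspace(0.0, 3600.0, 361)                  # 1 h cook, 10 s steps
temp_c = 20.0 + 55.0 * (1.0 - np.exp(-t / 900.0))  # stand-in center temperature, degC
print(f"predicted cooking loss: {cooking_loss(t, temp_c):.1%}")
```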
Elissen, Arianne M J; Struijs, Jeroen N; Baan, Caroline A; Ruwaard, Dirk
2015-05-01
To support providers and commissioners in accurately assessing their local populations' health needs, this study produces an overview of Dutch predictive risk models for health care, focusing specifically on the type, combination and relevance of included determinants for achieving the Triple Aim (improved health, better care experience, and lower costs). We conducted a mixed-methods study combining document analyses, interviews and a Delphi study. Predictive risk models were identified based on a web search and expert input. Participating in the study were Dutch experts in predictive risk modelling (interviews; n=11) and experts in healthcare delivery, insurance and/or funding methodology (Delphi panel; n=15). Ten predictive risk models were analysed, comprising 17 unique determinants. Twelve were considered relevant by experts for estimating community health needs. Although some compositional similarities were identified between models, the combination and operationalisation of determinants varied considerably. Existing predictive risk models provide a good starting point, but optimally balancing resources and targeting interventions on the community level will likely require a more holistic approach to health needs assessment. Development of additional determinants, such as measures of people's lifestyle and social network, may require policies pushing the integration of routine data from different (healthcare) sources. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Tang, Jessica Janice; Leka, Stavroula; Hunt, Nigel; MacLennan, Sara
2014-07-01
It is widely acknowledged that teachers are at greater risk of work-related health problems. At the same time, employee perceptions of different dimensions of organizational climate can influence their attitudes, performance, and well-being at work. This study applied and extended a safety climate model in the context of the education sector in Hong Kong. Apart from safety considerations alone, the study included occupational health considerations and social capital and tested their relationships with occupational safety and health (OSH) outcomes. Seven hundred and four Hong Kong teachers completed a range of questionnaires exploring social capital, OSH climate, OSH knowledge, OSH performance (compliance and participation), general health, and self-rated health complaints and injuries. Structural equation modeling (SEM) was used to analyze the relationships between predictive and outcome variables. SEM analysis revealed a high level of goodness of fit, and the hypothesized model including social capital yielded a better fit than the original model. Social capital, OSH climate, and OSH performance were determinants of both positive and negative outcome variables. In addition, social capital not only significantly predicted general health directly, but also had a predictive effect on the OSH climate-behavior-outcome relationship. This study makes a contribution to the workplace social capital and OSH climate literature by empirically assessing their relationship in the Chinese education sector.
Man, V; Polzer, S; Gasser, T C; Novotny, T; Bursa, J
2018-03-01
Biomechanics-based assessment of Abdominal Aortic Aneurysm (AAA) rupture risk has gained considerable scientific and clinical momentum. However, computation of peak wall stress (PWS) using state-of-the-art finite element models is time demanding. This study investigates which features of the constitutive description of the AAA wall are decisive for achieving acceptable stress predictions in it. The influence of five different isotropic constitutive descriptions of the AAA wall is tested; the models reflect realistic non-linear, artificially stiff non-linear, or artificially stiff pseudo-linear constitutive descriptions of the AAA wall. The influence of the AAA wall model is tested on idealized (n=4) and patient-specific (n=16) AAA geometries. Wall stress computations consider a (hypothetical) load-free configuration and include residual stresses homogenizing the stresses across the wall. Wall stress differences amongst the different descriptions were statistically analyzed. When the qualitatively similar non-linear response of the AAA wall, with low initial stiffness and subsequent strain stiffening, was taken into consideration, wall stress (and PWS) predictions did not change significantly. Keeping this non-linear feature when using an artificially stiff wall can save up to 30% of the computational time without significant change in PWS. In contrast, a stiff pseudo-linear elastic model may underestimate the PWS and is not reliable for AAA wall stress computations. Copyright © 2018 IPEM. Published by Elsevier Ltd. All rights reserved.
MetaDP: a comprehensive web server for disease prediction of 16S rRNA metagenomic datasets.
Xu, Xilin; Wu, Aiping; Zhang, Xinlei; Su, Mingming; Jiang, Taijiao; Yuan, Zhe-Ming
2016-01-01
High-throughput sequencing-based metagenomics has garnered considerable interest in recent years. Numerous methods and tools have been developed for the analysis of metagenomic data. However, it is still a daunting task to install a large number of tools and complete a complicated analysis, especially for researchers with minimal bioinformatics backgrounds. To address this problem, we constructed an automated software named MetaDP for 16S rRNA sequencing data analysis, including data quality control, operational taxonomic unit clustering, diversity analysis, and disease risk prediction modeling. Furthermore, a support vector machine-based prediction model for intestinal bowel syndrome (IBS) was built by applying MetaDP to microbial 16S sequencing data from 108 children. The success of the IBS prediction model suggests that the platform may also be applied to other diseases related to gut microbes, such as obesity, metabolic syndrome, or intestinal cancer, among others (http://metadp.cn:7001/).
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
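As a concrete illustration of the weighting step described above, the sketch below computes information-criterion weights; with AIC values it yields the predictive-ability weighting discussed, and the same formula applies to BIC values for the asymptotically consistent variant. The AIC numbers are hypothetical.

```python
import numpy as np

def information_criterion_weights(ic_values):
    """Model-averaging weights from an information criterion (AIC or BIC):
        w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
        delta_i = IC_i - min_j IC_j
    Subtracting the minimum first keeps the exponentials well scaled."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Three hypothetical structures (e.g. covariate choices) for the decision model
weights = information_criterion_weights([1012.4, 1013.1, 1020.8])
print(np.round(weights, 3))
# The averaged output is then sum_i weights[i] * prediction_i
```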
Groundwater model of the Blue River basin, Nebraska-Twenty years later
Alley, W.M.; Emery, P.A.
1986-01-01
Groundwater flow models have become almost a routine tool of the practicing hydrologist. Yet, surprisingly little attention has been given to true verification analysis of studies using these models. This paper examines predictions for 1982 of water-level declines and streamflow depletions that were made in 1965 using an electric analog groundwater model of the Blue River basin in southeastern Nebraska. Analysis of the model's predictions suggests that the analog model used too low an estimate of net groundwater withdrawals, yet overestimated water-level declines. The model predicted that almost all of the net groundwater pumpage would come from storage in the Pleistocene aquifer within the Blue River basin. It appears likely that the model underestimated the contributions of other sources of water to the pumpage, and that the aquifer storage coefficients used in the model were too low. There is some evidence that groundwater pumpage has had a greater than predicted effect on streamflow. Considerable uncertainty about the basic conceptualization of the hydrology of the Blue River basin greatly limits the reliability of groundwater models developed for the basin. The paper concludes with general perspectives on groundwater modeling gained from this post-audit analysis. ?? 1986.
Seasonal forecasting of high wind speeds over Western Europe
NASA Astrophysics Data System (ADS)
Palutikof, J. P.; Holt, T.
2003-04-01
As financial losses associated with extreme weather events escalate, there is interest from end users in the forestry and insurance industries, for example, in the development of seasonal forecasting models with a long lead time. This study uses exceedences of the 90th, 95th, and 99th percentiles of daily maximum wind speed over the period 1958 to present to derive predictands of winter wind extremes. The source data is the 6-hourly NCEP Reanalysis gridded surface wind field. Predictor variables include principal components of Atlantic sea surface temperature and several indices of climate variability, including the NAO and SOI. Lead times of up to a year are considered, in monthly increments. Three regression techniques are evaluated: multiple linear regression (MLR), principal component regression (PCR), and partial least squares regression (PLS). PCR and PLS proved considerably superior to MLR, with much lower standard errors. PLS was chosen to formulate the predictive model since it offers more flexibility in experimental design and gave slightly better results than PCR. The results indicate that winter windiness can be predicted with considerable skill one year ahead for much of coastal Europe, but that this deteriorates rapidly in the hinterland. The experiment succeeded in highlighting PLS as a very useful method for developing more precise forecasting models, and in identifying areas of high predictability.
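A skeletal version of the PLS workflow is sketched below with scikit-learn, using synthetic predictors standing in for SST principal components and circulation indices; the predictand mimics a winter exceedence statistic. Dimensions, the number of latent components, and all data are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_winters = 45
X = rng.normal(size=(n_winters, 20))  # stand-in: SST PCs plus NAO/SOI-type indices
y = X[:, :3] @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.5, size=n_winters)
# y: stand-in predictand, e.g. a winter count of 95th-percentile wind exceedences

pls = PLSRegression(n_components=3)   # few latent components guard against overfitting
scores = cross_val_score(pls, X, y, cv=5, scoring="r2")
print(f"cross-validated R2: {scores.mean():.2f}")
```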
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Jonathan; Thompson, Sandra E.; Brothers, Alan J.
The ability to estimate the likelihood of future events based on current and historical data is essential to the decision making process of many government agencies. Successful predictions related to terror events and characterizing the risks will support development of options for countering these events. The predictive tasks involve both technical and social component models. The social components have presented a particularly difficult challenge. This paper outlines some technical considerations of this modeling activity. Both data and predictions associated with the technical and social models will likely be known with differing certainties or accuracies; a critical challenge is linking across these model domains while respecting this fundamental difference in certainty level. This paper will describe the technical approach being taken to develop the social model and the identification of the significant interfaces between the technical and social modeling in the context of analysis of diversion of nuclear material.
Reactor pressure vessel embrittlement: Insights from neural network modelling
NASA Astrophysics Data System (ADS)
Mathew, J.; Parfitt, D.; Wilford, K.; Riddle, N.; Alamaniotis, M.; Chroneos, A.; Fitzpatrick, M. E.
2018-04-01
Irradiation embrittlement of steel pressure vessels is an important consideration for the operation of current and future light water nuclear reactors. In this study we employ an ensemble of artificial neural networks in order to provide predictions of the embrittlement using two literature datasets, one based on US surveillance data and the second from the IVAR experiment. We use these networks to examine trends with input variables and to assess various literature models including compositional effects and the role of flux and temperature. Overall, the networks agree with the existing literature models and we comment on their more general use in predicting irradiation embrittlement.
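The ensemble-of-networks idea can be sketched as follows: several independently initialized networks are fit to the same data, and the committee mean and spread provide a prediction with an uncertainty estimate. The features, target, and network sizes below are synthetic stand-ins for the surveillance-data inputs, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 5))  # stand-ins: e.g. Cu, Ni, P content, fluence, temperature
y = 50*X[:, 0]*X[:, 3] + 20*X[:, 1] + rng.normal(scale=2.0, size=500)  # stand-in shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
nets = [MLPRegressor(hidden_layer_sizes=(10,), max_iter=3000,
                     random_state=s).fit(X_tr, y_tr)
        for s in range(10)]  # independently initialized committee members
preds = np.stack([net.predict(X_te) for net in nets])
mean, spread = preds.mean(axis=0), preds.std(axis=0)  # prediction + uncertainty proxy
print(f"MAE = {np.abs(mean - y_te).mean():.2f}, mean spread = {spread.mean():.2f}")
```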
NASA Astrophysics Data System (ADS)
Zaghloul, Mofreh R.
2018-03-01
We present estimates of the critical properties, thermodynamic functions, and principal shock Hugoniot of hot dense aluminum fluid as predicted from a chemical model for the equation-of-state of hot dense, partially ionized and partially degenerate plasma. The essential features of strongly coupled plasma of metal vapors, such as multiple ionization, Coulomb interactions among charged particles, partial degeneracy, and intensive short range hard core repulsion are taken into consideration. Internal partition functions of neutral, excited, and multiply ionized species are carefully evaluated in a statistical-mechanically consistent way. Results predicted from the present model are presented, analyzed and compared with available experimental measurements and other theoretical predictions in the literature.
Mei, Suyu; Zhu, Hao
2015-01-26
Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification wherein negative data sampling is still an open problem to be addressed. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile rational constraints are seldom exerted on model selection to reduce the risk of false positive predictions for most of the existing computational methods. In this work, we propose a novel negative data sampling method based on one-class SVM (support vector machine, SVM) to predict proteome-wide protein interactions between HTLV retrovirus and Homo sapiens, wherein one-class SVM is used to choose reliable and representative negative data, and two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that one-class SVM is more suited to be used as negative data sampling method than two-class PPI predictor, and the predictive feedback constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of HTLV retrovirus.
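A compact sketch of the two-stage sampling scheme on synthetic feature vectors is given below: a one-class SVM trained on positives scores the unlabeled pairs, the least positive-like pairs are kept as reliable negatives, and a two-class SVM is then trained on the combined set. Feature dimensions and hyperparameters are illustrative, not those of the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM, SVC

rng = np.random.default_rng(3)
pos = rng.normal(loc=1.0, size=(200, 30))         # stand-in positive PPI feature vectors
unlabeled = rng.normal(loc=0.0, size=(2000, 30))  # candidate pairs of unknown label

# One-class SVM learns the positive region; low decision scores mark
# candidates least like known interactions, kept as reliable negatives.
occ = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(pos)
scores = occ.decision_function(unlabeled)
reliable_neg = unlabeled[np.argsort(scores)[:200]]  # lowest = least positive-like

X = np.vstack([pos, reliable_neg])
y = np.concatenate([np.ones(200), np.zeros(200)])
clf = SVC(kernel="rbf", probability=True).fit(X, y)  # final two-class predictor
print(clf.predict_proba(unlabeled[:3])[:, 1])        # interaction probabilities
```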
von Ruesten, Anne; Steffen, Annika; Floegel, Anna; van der A, Daphne L.; Masala, Giovanna; Tjønneland, Anne; Halkjaer, Jytte; Palli, Domenico; Wareham, Nicholas J.; Loos, Ruth J. F.; Sørensen, Thorkild I. A.; Boeing, Heiner
2011-01-01
Objective To investigate trends in obesity prevalence in recent years and to predict the obesity prevalence in 2015 in European populations. Methods Data of 97 942 participants from seven cohorts involved in the European Prospective Investigation into Cancer and Nutrition (EPIC) study participating in the Diogenes project (referred to as the "Diogenes cohort" in the following), with weight measurements at baseline and follow-up, were used to predict future obesity prevalence with logistic linear and non-linear (leveling off) regression models. In addition, linear and leveling off models were fitted to the EPIC-Potsdam dataset, with five weight measures during the observation period, to find out which of these two models might provide the more realistic prediction. Results During a mean follow-up period of 6 years, the obesity prevalence in the Diogenes cohort increased from 13% to 17%. The linear prediction model predicted an overall obesity prevalence of about 30% in 2015, whereas the leveling off model predicted a prevalence of about 20%. In the EPIC-Potsdam cohort, the shape of the obesity trend favors a leveling off model among men (R2 = 0.98), and a linear model among women (R2 = 0.99). Conclusion Our data show an increase in obesity prevalence since the 1990s, and predictions for 2015 suggest a sizeable further increase in European populations. However, the estimates from the leveling off model were considerably lower. PMID:22102897
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij, partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
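For reference, the constant-kij baseline that the new correlation is compared against looks as follows: Peng-Robinson pure-component parameters combined through the classical van der Waals one-fluid mixing rules. The CH4/CO2-like critical constants are textbook values, and the kij of 0.09 is a hypothetical placeholder, not a value from this work.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def pr_pure_a_b(tc, pc, omega, temp):
    """Peng-Robinson pure-component parameters a(T) and b."""
    kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
    alpha = (1.0 + kappa * (1.0 - math.sqrt(temp / tc)))**2
    a = 0.45724 * R**2 * tc**2 / pc * alpha
    b = 0.07780 * R * tc / pc
    return a, b

def vdw_mixing(x, a, b, kij):
    """Classical van der Waals one-fluid mixing rules with constant k_ij:
        a_mix = sum_i sum_j x_i x_j sqrt(a_i a_j) (1 - k_ij)
        b_mix = sum_i x_i b_i
    """
    n = len(x)
    a_mix = sum(x[i] * x[j] * math.sqrt(a[i] * a[j]) * (1.0 - kij[i][j])
                for i in range(n) for j in range(n))
    b_mix = sum(x[i] * b[i] for i in range(n))
    return a_mix, b_mix

# Illustrative CH4/CO2-like binary at 250 K with a hypothetical kij = 0.09
a1, b1 = pr_pure_a_b(tc=190.6, pc=4.599e6, omega=0.011, temp=250.0)
a2, b2 = pr_pure_a_b(tc=304.1, pc=7.383e6, omega=0.225, temp=250.0)
print(vdw_mixing([0.4, 0.6], [a1, a2], [b1, b2], [[0.0, 0.09], [0.09, 0.0]]))
```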
Quantizing and sampling considerations in digital phased-locked loops
NASA Technical Reports Server (NTRS)
Hurst, G. T.; Gupta, S. C.
1974-01-01
The quantizer problem is first considered. The conditions under which the uniform white sequence model for the quantizer error is valid are established independent of the sampling rate. An equivalent spectral density is defined for the quantizer error resulting in an effective SNR value. This effective SNR may be used to determine quantized performance from infinitely fine quantized results. Attention is given to sampling rate considerations. Sampling rate characteristics of the digital phase-locked loop (DPLL) structure are investigated for the infinitely fine quantized system. The predicted phase error variance equation is examined as a function of the sampling rate. Simulation results are presented and a method is described which enables the minimum required sampling rate to be determined from the predicted phase error variance equations.
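The white-sequence quantizer model referred to above assigns the quantization error a variance of delta^2/12, from which an effective SNR follows directly; the sketch below recovers the familiar 6.02N + 1.76 dB figure for a full-scale sinusoid.

```python
import math

def quantizer_noise_variance(full_scale, n_bits):
    """Uniform white-sequence model of quantizer error: step size
    delta = FS / 2**N gives an error variance of delta**2 / 12."""
    delta = full_scale / 2 ** n_bits
    return delta ** 2 / 12.0

def effective_snr_db(signal_power, full_scale, n_bits):
    """Effective SNR seen by the loop when quantization noise dominates."""
    return 10.0 * math.log10(signal_power / quantizer_noise_variance(full_scale, n_bits))

# Full-scale sinusoid of amplitude A: power A**2/2, full scale 2A.
# For N = 8 bits this recovers 6.02*8 + 1.76 = 49.9 dB.
print(effective_snr_db(signal_power=0.5, full_scale=2.0, n_bits=8))
```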
Yang, Jie; Weng, Wenguo; Wang, Faming; Song, Guowen
2017-05-01
This paper aims to integrate a human thermoregulatory model with a clothing model to predict core and skin temperatures. The human thermoregulatory model, consisting of an active system and a passive system, was used to determine the thermoregulation and heat exchanges within the body. The clothing model simulated heat and moisture transfer from the human skin to the environment through the microenvironment and fabric. In this clothing model, the air gap between skin and clothing, as well as clothing properties such as thickness, thermal conductivity, density, porosity, and tortuosity were taken into consideration. The simulated core and mean skin temperatures were compared to the published experimental results of subject tests at three levels of ambient temperatures of 20 °C, 30 °C, and 40 °C. Although lower signal-to-noise-ratio was observed, the developed model demonstrated positive performance at predicting core temperatures with a maximum difference between the simulations and measurements of no more than 0.43 °C. Generally, the current model predicted the mean skin temperatures with reasonable accuracy. It could be applied to predict human physiological responses and assess thermal comfort and heat stress. Copyright © 2017 Elsevier Ltd. All rights reserved.
In silico models for the prediction of dose-dependent human hepatotoxicity
NASA Astrophysics Data System (ADS)
Cheng, Ailan; Dixon, Steven L.
2003-12-01
The liver is extremely vulnerable to the effects of xenobiotics due to its critical role in metabolism. Drug-induced hepatotoxicity may involve any number of different liver injuries, some of which lead to organ failure and, ultimately, patient death. Understandably, liver toxicity is one of the most important dose-limiting considerations in the drug development cycle, yet there remains a serious shortage of methods to predict hepatotoxicity from chemical structure. We discuss our latest findings in this area and present a new, fully general in silico model which is able to predict the occurrence of dose-dependent human hepatotoxicity with greater than 80% accuracy. Utilizing an ensemble recursive partitioning approach, the model classifies compounds as toxic or non-toxic and provides a confidence level to indicate which predictions are most likely to be correct. Only 2D structural information is required and predictions can be made quite rapidly, so this approach is entirely appropriate for data mining applications and for profiling large synthetic and/or virtual libraries.
In silico prediction of drug-induced myelotoxicity by using Naïve Bayes method.
Zhang, Hui; Yu, Peng; Zhang, Teng-Guo; Kang, Yan-Li; Zhao, Xiao; Li, Yuan-Yuan; He, Jia-Hui; Zhang, Ji
2015-11-01
Drug-induced myelotoxicity usually leads to decreased production of platelets, red cells, and white cells. Thus, early identification and characterization of myelotoxicity hazards in drug development is very necessary. The purpose of this investigation was to develop a prediction model of drug-induced myelotoxicity by using a Naïve Bayes classifier. For comparison, other prediction models based on support vector machine and single-hidden-layer feed-forward neural network methods were also established. Among all the prediction models, the Naïve Bayes classification model showed the best prediction performance, which offered an average overall prediction accuracy of [Formula: see text] for the training set and [Formula: see text] for the external test set. A significant contribution of this study is that we developed a Naïve Bayes classification model of drug-induced myelotoxicity using a larger-scale dataset, which could be employed for the prediction of drug-induced myelotoxicity. In addition, several important molecular descriptors and substructures of myelotoxic compounds have been identified, which should be taken into consideration in the design of new candidate compounds to produce safer and more effective drugs, ultimately reducing the attrition rate in later stages of drug development.
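A skeletal Naïve Bayes workflow of the kind described is sketched below on synthetic data; the paper's molecular descriptors are replaced by random features, and a Gaussian Naïve Bayes variant is used for these continuous stand-ins (fingerprint inputs would instead call for a Bernoulli variant), so nothing below reproduces the reported accuracies.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 12))  # stand-in continuous molecular descriptors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=300) > 0).astype(int)
# y = 1: myelotoxic, y = 0: non-myelotoxic (synthetic labels for illustration)

clf = GaussianNB()
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```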
Jet Noise Modeling for Supersonic Business Jet Application
NASA Technical Reports Server (NTRS)
Stone, James R.; Krejsa, Eugene A.; Clark, Bruce J.
2004-01-01
This document describes the development of an improved predictive model for coannular jet noise, including noise suppression modifications applicable to small supersonic-cruise aircraft such as the Supersonic Business Jet (SBJ), for NASA Langley Research Center (LaRC). For such aircraft a wide range of propulsion and integration options are under consideration. Thus there is a need for very versatile design tools, including a noise prediction model. The approach used is similar to that used with great success by the Modern Technologies Corporation (MTC) in developing a noise prediction model for two-dimensional mixer ejector (2DME) nozzles under the High Speed Research Program and in developing a more recent model for coannular nozzles over a wide range of conditions. If highly suppressed configurations are ultimately required, the 2DME model is expected to provide reasonable prediction for these smaller scales, although this has not been demonstrated. It is considered likely that more modest suppression approaches, such as dual stream nozzles featuring chevron or chute suppressors, perhaps in conjunction with inverted velocity profiles (IVP), will be sufficient for the SBJ.
Park, Hahnbeom; Bradley, Philip; Greisen, Per; Liu, Yuan; Mulligan, Vikram Khipple; Kim, David E.; Baker, David; DiMaio, Frank
2017-01-01
Most biomolecular modeling energy functions for structure prediction, sequence design, and molecular docking have been parameterized using existing macromolecular structural data; this contrasts with molecular mechanics force fields, which are largely optimized using small-molecule data. In this study, we describe an integrated method that enables optimization of a biomolecular modeling energy function simultaneously against small-molecule thermodynamic data and high-resolution macromolecular structural data. We use this approach to develop a next-generation Rosetta energy function that utilizes a new anisotropic implicit solvation model and an improved electrostatics and Lennard-Jones model, illustrating how energy functions can be considerably improved in their ability to describe large-scale energy landscapes by incorporating both small-molecule and macromolecule data. The energy function improves performance in a wide range of protein structure prediction challenges, including monomeric structure prediction, protein-protein and protein-ligand docking, protein sequence design, and prediction of free energy changes upon mutation, while reasonably recapitulating small-molecule thermodynamic properties. PMID:27766851
Thermal modelling of various thermal barrier coatings in a high heat flux rocket engine
NASA Technical Reports Server (NTRS)
Nesbitt, James A.
1989-01-01
Traditional Air Plasma Sprayed (APS) ZrO2-Y2O3 Thermal Barrier Coatings (TBCs) and Low Pressure Plasma Sprayed (LPPS) ZrO2-Y2O3/Ni-Cr-Al-Y cermet coatings were tested in a H2/O2 rocket engine. The traditional ZrO2-Y2O3 TBCs showed considerable metal temperature reductions during testing in the hydrogen-rich environment. A thermal model was developed to predict the thermal response of the tubes with the various coatings. Good agreement was observed between predicted temperatures and measured temperatures at the inner wall of the tube and in the metal near the coating/metal interface. The thermal model was also used to examine the effect of differences in the reported values of the thermal conductivity of plasma sprayed ZrO2-Y2O3 ceramic coatings, the effect of a 100 micron (0.004 in.) thick metallic bond coat, the effect of tangential heat transfer around the tube, and the effect of radiation from the surface of the ceramic coating. It was shown that for the short duration testing in the rocket engine, the most important of these considerations was the uncertainty in the thermal conductivity of the ceramic coating, which produced differences greater than 100 C in the temperatures predicted in the tube. The thermal model was also used to predict the thermal response of the coated rod in order to quantify the difference in the metal temperatures between the two substrate geometries and to explain the previously observed increased life of coatings on rods over that on tubes. A thermal model was also developed to predict heat transfer to the leading edge of High Pressure Fuel Turbopump (HPFTP) blades during start-up of the space shuttle main engines. The ability of various TBCs to reduce metal temperatures during the two thermal excursions occurring on start-up was predicted. Temperature reductions of 150 to 470 C were predicted for 165 micron (0.0065 in.) coatings for the greater of the two thermal excursions.
The effects of geometric uncertainties on computational modelling of knee biomechanics
NASA Astrophysics Data System (ADS)
Meng, Qingen; Fisher, John; Wilcox, Ruth
2017-08-01
The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation had considerable effects on the predicted knee mechanics. Moreover, even when the mathematical geometric descriptors were very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models.
Generalized plasma skimming model for cells and drug carriers in the microvasculature.
Lee, Tae-Rin; Yoo, Sung Sic; Yang, Jiho
2017-04-01
In microvascular transport, where both blood and drug carriers are involved, plasma skimming plays a key role in changing hematocrit level and drug carrier concentration in capillary beds after continuous vessel bifurcation in the microvasculature. While there have been numerous studies on modeling the plasma skimming of blood, previous works have not considered its interaction with drug carriers. In this paper, a generalized plasma skimming model is suggested to predict the redistributions of both cells and drug carriers at each bifurcation. In order to examine its applicability, this new model was applied to a single bifurcation system to predict the redistribution of red blood cells and drug carriers. Furthermore, this model was tested at the microvascular network level under different plasma skimming conditions for predicting the concentration of drug carriers. Based on these results, the applicability of this generalized plasma skimming model is fully discussed and future works along with the model's limitations are summarized.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, evaluated based on multiple comparative measures calculated for both calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model called HYMOD.
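To make the GLUE idea above concrete, the sketch below selects "behavioural" parameter sets by an informal likelihood and forms quantile prediction limits. It is an illustration under stated assumptions, not the study's implementation: `simulate(params)` is a user-supplied model returning a discharge series aligned with observations, and the 0.6 threshold is arbitrary.

```python
# GLUE-style uncertainty bounds: keep behavioural runs, take quantiles.
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a common informal likelihood in GLUE."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_bounds(simulate, obs, param_samples, threshold=0.6, q=(0.05, 0.95)):
    """Quantile prediction limits across behavioural parameter sets."""
    sims = [np.asarray(simulate(p)) for p in param_samples]
    behavioural = np.array([s for s in sims if nse(s, obs) > threshold])
    # likelihood-weighted quantiles are common; plain quantiles shown for brevity
    return (np.quantile(behavioural, q[0], axis=0),
            np.quantile(behavioural, q[1], axis=0))
```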
NASA Astrophysics Data System (ADS)
Möller, Peter; Pfeiffer, Bernd; Kratz, Karl-Ludwig
2003-05-01
Recent compilations of experimental gross β-decay properties, i.e., half-lives (T1/2) and neutron-emission probabilities (Pn), are compared to improved global macroscopic-microscopic model predictions. The model combines calculations within the quasiparticle (QP) random-phase approximation for the Gamow-Teller (GT) part with an empirical spreading of the QP strength and the gross theory for the first-forbidden part of β- decay. Nuclear masses are either taken from the 1995 data compilation of Audi et al., when available, otherwise from the finite-range droplet model. Especially for spherical and neutron-(sub-)magic isotopes a considerable improvement compared to our earlier predictions for pure GT decay (ADNDT, 1997) is observed. T1/2 and Pn values up to the neutron drip line have been used in r-process calculations within the classical “waiting-point” approximation. With the new nuclear-physics input, a considerable speeding-up of the r-matter flow is observed, in particular at those r-abundance peaks which are related to magic neutron-shell closures.
GUT-inspired supersymmetric model for h → γγ and the muon g − 2
Ajaib, M. Adeel; Gogoladze, Ilia; Shafi, Qaisar
2015-05-06
We study a grand unified theory inspired supersymmetric model with nonuniversal gaugino masses that can explain the observed muon g−2 anomaly while simultaneously accommodating an enhancement or suppression in the h → γγ decay channel. In order to accommodate these observations and m_h ≅ 125-126 GeV, the model requires a spectrum consisting of relatively light sleptons, whereas the colored sparticles are heavy. The predicted stau mass range corresponding to R_γγ ≥ 1.1 is 100 GeV ≲ m_τ̃ ≲ 200 GeV. The constraint on the slepton masses, particularly on the smuons, arising from considerations of muon g−2 is somewhat milder; the slepton masses in this case are predicted to lie in the few hundred GeV range. The colored sparticles turn out to be considerably heavier, with m_g̃ ≳ 4.5 TeV and m_t̃₁ ≳ 3.5 TeV, which makes it challenging for these to be observed at the 14 TeV LHC.
A Stochastic Model of Plausibility in Live Virtual Constructive Environments
2017-09-14
… objective in virtual environment research and design is the maintenance of adequate consistency levels in the face of limited system resources such as … provides some commentary with regard to system design considerations and future research directions. II. SYSTEM MODEL: DVEs are often designed as a … exceed the system's requirements. Research into predictive models of virtual environment consistency is needed to provide designers the tools to …
A simplified approach to quasi-linear viscoelastic modeling
Nekouzadeh, Ali; Pryse, Kenneth M.; Elson, Elliot L.; Genin, Guy M.
2007-01-01
The fitting of quasi-linear viscoelastic (QLV) constitutive models to material data often involves somewhat cumbersome numerical convolution. A new approach to treating quasi-linearity in one dimension is described and applied to characterize the behavior of reconstituted collagen. This approach is based on a new principle for including nonlinearity and requires considerably less computation than other comparable models for both model calibration and response prediction, especially for smoothly applied stretching. Additionally, the approach allows relaxation to adapt with the strain history. The modeling approach is demonstrated through tests on pure reconstituted collagen. Sequences of “ramp-and-hold” stretching tests were applied to rectangular collagen specimens. The relaxation force data from the “hold” was used to calibrate a new “adaptive QLV model” and several models from literature, and the force data from the “ramp” was used to check the accuracy of model predictions. Additionally, the ability of the models to predict the force response on a reloading of the specimen was assessed. The “adaptive QLV model” based on this new approach predicts collagen behavior comparably to or better than existing models, with much less computation. PMID:17499254
DOE Office of Scientific and Technical Information (OSTI.GOV)
Redding, Laurel E.; Sohn, Michael D.; McKone, Thomas E.
2008-03-01
We developed a physiologically based pharmacokinetic model of PCB 153 in women and predicted its transfer via lactation to infants. The model is the first human, population-scale lactational model for PCB 153. Data in the literature provided estimates for model development and for performance assessment. Physiological parameters were taken from a cohort in Taiwan and from reference values in the literature. We estimated partition coefficients based on chemical structure and the lipid content in various body tissues. Using exposure data in Japan, we predicted acquired body burden of PCB 153 at an average childbearing age of 25 years and compare predictions to measurements from studies in multiple countries. Forward-model predictions agree well with human biomonitoring measurements, as represented by summary statistics and uncertainty estimates. The model successfully describes the range of possible PCB 153 dispositions in maternal milk, suggesting a promising option for back estimating doses for various populations. One example of reverse dosimetry modeling was attempted using our PBPK model for possible exposure scenarios in Canadian Inuits who had the highest level of PCB 153 in their milk in the world.
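As a deliberately simplified illustration of the bookkeeping behind such a model: the cited work is a full multi-compartment PBPK model, whereas the one-compartment toy below only shows how daily intake, first-order elimination, and milk loss interact. Every parameter value is a made-up assumption.

```python
# Toy one-compartment accumulation of a persistent compound (not the
# cited PBPK model); all parameter values are illustrative assumptions.
import math

def body_burden(daily_intake_ug, years, half_life_years=10.0,
                milk_loss_frac_per_day=0.0):
    """Accumulate daily intake against first-order elimination and milk loss."""
    k_elim = math.log(2) / (half_life_years * 365.0)   # per-day rate
    burden = 0.0
    for _ in range(int(years * 365)):
        burden += daily_intake_ug
        burden -= burden * (k_elim + milk_loss_frac_per_day)
    return burden   # approximate body burden (ug) at, e.g., childbearing age
```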
NASA Astrophysics Data System (ADS)
Totani, T.; Takeuchi, T. T.
2001-12-01
A new model of infrared galaxy counts and the cosmic background radiation (CBR) is developed by extending a model for optical/near-infrared galaxies. Important new characteristics of this model are that mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies, and that the big-grain dust temperature T_dust is calculated based on a physical consideration of energy balance, rather than using the empirical relation between T_dust and total infrared luminosity L_IR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, the L_IR-T_dust correlation, and the infrared luminosity function, are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR with this model. We found considerably different results from most previous works based on the empirical L_IR-T_dust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K). This indicates that intense starbursts of forming elliptical galaxies should have occurred at z ~ 2-3, in contrast to the previous results that significant starbursts beyond z ~ 1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The authors thank the Japan Society for the Promotion of Science for financial support.
A predictive model for assistive technology adoption for people with dementia.
Zhang, Shuai; McClean, Sally I; Nugent, Chris D; Donnelly, Mark P; Galway, Leo; Scotney, Bryan W; Cleland, Ian
2014-01-01
Assistive technology has the potential to enhance the level of independence of people with dementia, thereby increasing the possibility of supporting home-based care. In general, people with dementia are reluctant to change; therefore, it is important that suitable assistive technologies are selected for them. Consequently, the development of predictive models that are able to determine a person's potential to adopt a particular technology is desirable. In this paper, a predictive adoption model for a mobile phone-based video streaming system, developed for people with dementia, is presented. Taking into consideration characteristics related to a person's ability, living arrangements, and preferences, this paper discusses the development of predictive models, which were based on a number of carefully selected data mining algorithms for classification. For each, the learning on different relevant features for technology adoption has been tested, in conjunction with handling the imbalance of available data for output classes. Given our focus on providing predictive tools that could be used and interpreted by healthcare professionals, models with ease-of-use, intuitive understanding, and clear decision making processes are preferred. Predictive models have, therefore, been evaluated on a multi-criterion basis: in terms of their prediction performance, robustness, bias with regard to two types of errors and usability. Overall, the model derived from incorporating a k-Nearest-Neighbour algorithm using seven features was found to be the optimal classifier of assistive technology adoption for people with dementia (prediction accuracy 0.84 ± 0.0242).
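A hedged sketch of the kind of k-NN adoption classifier described above, assuming seven pre-selected features in a numeric matrix X and a binary adoption outcome y; the value of k and the cross-validation scheme are placeholders, not the study's settings.

```python
# k-NN adoption classifier with feature scaling and cross-validated accuracy.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def knn_adoption_model(X, y, k=5):
    """Scale features (k-NN is distance-based), then cross-validate."""
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    scores = cross_val_score(model, X, y, cv=10)
    return model.fit(X, y), scores.mean(), scores.std()
```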
A neuronal model of predictive coding accounting for the mismatch negativity.
Wacongne, Catherine; Changeux, Jean-Pierre; Dehaene, Stanislas
2012-03-14
The mismatch negativity (MMN) is thought to index the activation of specialized neural networks for active prediction and deviance detection. However, a detailed neuronal model of the neurobiological mechanisms underlying the MMN is still lacking, and its computational foundations remain debated. We propose here a detailed neuronal model of auditory cortex, based on predictive coding, that accounts for the critical features of MMN. The model is entirely composed of spiking excitatory and inhibitory neurons interconnected in a layered cortical architecture with distinct input, predictive, and prediction error units. A spike-timing dependent learning rule, relying upon NMDA receptor synaptic transmission, allows the network to adjust its internal predictions and use a memory of recent past inputs to anticipate future stimuli based on transition statistics. We demonstrate that this simple architecture can account for the major empirical properties of the MMN. These include a frequency-dependent response to rare deviants, a response to unexpected repeats in alternating sequences (ABABAA…), a lack of consideration of the global sequence context, a response to sound omission, and a sensitivity of the MMN to NMDA receptor antagonists. Novel predictions are presented, and a new magnetoencephalography experiment in healthy human subjects is presented that validates our key hypothesis: the MMN results from active cortical prediction rather than passive synaptic habituation.
ERIC Educational Resources Information Center
Graham, Carroll M.; Scott, Aaron J.; Nafukho, Fredrick M.
2008-01-01
While theoretical models aimed at explaining or predicting employee turnover outcomes have been developed, minimal consideration has been given to the same task regarding safety, often measured as the probability of a crash in a given time frame. The present literature review identifies four constructs from turnover literature, which are believed…
Hill, Mary C.; Foglia, L.; Mehl, S. W.; Burlando, P.
2013-01-01
Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria, and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.
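For readers unfamiliar with the criteria named above, the following back-of-the-envelope helper computes AIC, small-sample AICc, and BIC for a least-squares calibration; it assumes Gaussian residuals (constants dropped) and is a generic textbook formulation, not the UCODE/MMA implementation.

```python
# Information criteria from a residual sum of squares (rss), number of
# observations n, and number of fitted parameters k. Requires n > k + 1.
import numpy as np

def aic_bic(rss, n, k):
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic
```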
Measurement and simulation of deformation and stresses in steel casting
NASA Astrophysics Data System (ADS)
Galles, D.; Monroe, C. A.; Beckermann, C.
2012-07-01
Experiments are conducted to measure displacements and forces during casting of a steel bar in a sand mold. In some experiments the bar is allowed to contract freely, while in others the bar is manually strained using embedded rods connected to a frame. Solidification and cooling of the experimental castings are simulated using a commercial code, and good agreement between measured and predicted temperatures is obtained. The deformations and stresses in the experiments are simulated using an elasto-viscoplastic finite-element model. The high temperature mechanical properties are estimated from data available in the literature. The mush is modeled using porous metal plasticity theory, where the coherency and coalescence solid fraction are taken into account. Good agreement is obtained between measured and predicted displacements and forces. The results shed considerable light on the modeling of stresses in steel casting and help in developing more accurate models for predicting hot tears and casting distortions.
Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction
NASA Astrophysics Data System (ADS)
Mach, J. C.; Budrow, C. J.; Pagan, D. C.; Ruff, J. P. C.; Park, J.-S.; Okasinski, J.; Beaudoin, A. J.; Miller, M. P.
2017-05-01
Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present work, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. The experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.
Modeling the spatiotemporal dynamics of light and heat propagation for in vivo optogenetics
Stujenske, Joseph M.; Spellman, Timothy; Gordon, Joshua A.
2015-01-01
Despite the increasing use of optogenetics in vivo, the effects of direct light exposure to brain tissue are understudied. Of particular concern is the potential for heating induced by prolonged optical stimulation. We demonstrate that high intensity light, delivered through an optical fiber, is capable of elevating firing rate locally, even in the absence of opsin expression. Predicting the severity and spatial extent of any temperature increase during optogenetic stimulation is therefore of considerable importance. Here we describe a realistic model that simulates light and heat propagation during optogenetic experiments. We validated the model by comparing predicted and measured temperature changes in vivo. We further demonstrate the utility of this model by comparing predictions for various wavelengths of light and fiber sizes, as well as testing methods for reducing heat effects on neural targets in vivo. PMID:26166563
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in videotapes and recorded in simulation files. Results indicate no statistically significant difference between empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Baker, Stuart G
2018-02-01
When using risk prediction models, an important consideration is weighing performance against the cost (monetary and harms) of ascertaining predictors. The minimum test tradeoff (MTT) for ruling out a model is the minimum number of all-predictor ascertainments per correct prediction to yield a positive overall expected utility. The MTT for ruling out an added predictor is the minimum number of added-predictor ascertainments per correct prediction to yield a positive overall expected utility. An approximation to the MTT for ruling out a model is 1/[P · H(AUC_model)], where H(AUC) = AUC − {½(1 − AUC)}^½, AUC is the area under the receiver operating characteristic (ROC) curve, and P is the probability of the predicted event in the target population. An approximation to the MTT for ruling out an added predictor is 1/[P · {H(AUC_Model2) − H(AUC_Model1)}], where Model 2 includes an added predictor relative to Model 1. The latter approximation requires the Tangent Condition that the true positive rate at the point on the ROC curve with a slope of 1 is larger for Model 2 than Model 1. These approximations are suitable for back-of-the-envelope calculations. For example, in a study predicting the risk of invasive breast cancer, Model 2 adds to the predictors in Model 1 a set of 7 single nucleotide polymorphisms (SNPs). Based on the AUCs and the Tangent Condition, an MTT of 7200 was computed, which indicates that 7200 sets of SNPs are needed for every correct prediction of breast cancer to yield a positive overall expected utility. If ascertaining the SNPs costs $500, this MTT suggests that SNP ascertainment is not likely worthwhile for this risk prediction.
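The approximations above transcribe directly into a few lines of code, suitable for the back-of-the-envelope use the abstract describes; any AUC values and event probability fed in are the user's assumptions, and the Tangent Condition must hold for the added-predictor formula to apply.

```python
# Minimum test tradeoff (MTT) approximations, transcribed from the text.
def H(auc):
    return auc - (0.5 * (1.0 - auc)) ** 0.5

def mtt_model(p, auc):
    """MTT for ruling out a model: ascertainments per correct prediction."""
    return 1.0 / (p * H(auc))

def mtt_added_predictor(p, auc1, auc2):
    """MTT for ruling out an added predictor (Tangent Condition assumed)."""
    return 1.0 / (p * (H(auc2) - H(auc1)))
```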
Thin-slice vision: inference of confidence measure from perceptual video quality
NASA Astrophysics Data System (ADS)
Hameed, Abdul; Balas, Benjamin; Dai, Rui
2016-11-01
There has been considerable research on thin-slice judgments, but no study has demonstrated the predictive validity of confidence measures when assessors watch videos acquired from communication systems, in which the perceptual quality of videos could be degraded by limited bandwidth and unreliable network conditions. This paper studies the relationship between high-level thin-slice judgments of human behavior and factors that contribute to perceptual video quality. Based on a large number of subjective test results, it has been found that the confidence of a single individual present in all the videos, called speaker's confidence (SC), could be predicted by a list of features that contribute to perceptual video quality. Two prediction models, one based on an artificial neural network and the other on a decision tree, were built to predict SC. Experimental results have shown that both prediction models can result in high correlation measures.
A Next Generation Atmospheric Prediction System for the Navy
2015-09-30
… by DOE and NSF, while the HiRAM system has primarily been supported by NOAA, although both models have leveraged considerably from indirect and … [figure caption: Neptune scalability (blue line) with increasing number of cores compared to a perfect simulation rate (black line); horizontal distance (km)] … draw on the community expertise with both MPAS and HiRAM. NRL is a no-cost collaborator with a number of proposals for the ONR Seasonal Prediction …
Offset-Free Model Predictive Control of Open Water Channel Based on Moving Horizon Estimation
NASA Astrophysics Data System (ADS)
Ekin Aydin, Boran; Rutten, Martine
2016-04-01
Model predictive control (MPC) is a powerful control option which is increasingly used by operational water managers for managing water systems. The explicit consideration of constraints and multi-objective management are important features of MPC. However, water loss in open water systems through seepage, leakage and evaporation creates a mismatch between the model and the real system. This mismatch affects the performance of MPC and creates an offset from the reference set point of the water level. We present model predictive control based on moving horizon estimation (MHE-MPC) to achieve offset-free control of water level for open water canals. MHE-MPC uses the past predictions of the model and the past measurements of the system to estimate unknown disturbances, and the offset in the controlled water level is systematically removed. We numerically tested MHE-MPC on an accurate hydrodynamic model of the laboratory canal UPC-PAC located in Barcelona. In addition, we applied a well-known disturbance-modeling offset-free control scheme to the same test case. Simulation experiments on a single canal reach show that MHE-MPC outperforms the disturbance-modeling offset-free control scheme.
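A toy illustration of the offset-removal idea (not the cited MHE-MPC implementation): an unmodelled loss term is estimated from recent model-measurement mismatch over a moving horizon and fed forward into the control action on a simple storage model. The storage dynamics, gain, and horizon length are all illustrative assumptions.

```python
# Offset-free control sketch on a storage model level[k+1] = level[k] + u - loss.
import numpy as np

def estimate_loss(level_meas, level_pred, horizon=10):
    """Moving-horizon estimate of the unmodelled loss (seepage, leakage,
    evaporation) as the average recent model over-prediction."""
    resid = np.asarray(level_pred[-horizon:]) - np.asarray(level_meas[-horizon:])
    return resid.mean()

def control_step(level, setpoint, loss_hat, gain=0.5):
    """Feedback on the tracking error plus feedforward on the estimated loss,
    so the controlled level settles on the setpoint without offset."""
    return gain * (setpoint - level) + loss_hat
```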
Accounting for uncertainty in health economic decision models by using model averaging
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-01-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
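A sketch of the information-criterion weighting described above, under the usual assumption that models are weighted by exp(−½·ΔIC) normalised to sum to one; the choice of AIC versus BIC values as input reflects the predictive-versus-consistent distinction the abstract draws.

```python
# Model averaging with Akaike/Schwarz-style weights.
import numpy as np

def ic_weights(ic_values):
    """Convert AIC or BIC values into normalised model weights."""
    ic = np.asarray(ic_values, dtype=float)
    rel = np.exp(-0.5 * (ic - ic.min()))   # relative likelihoods
    return rel / rel.sum()

def averaged_prediction(predictions, ic_values):
    """Weighted average of per-model predictions of the decision quantity."""
    w = ic_weights(ic_values)
    return np.dot(w, np.asarray(predictions, dtype=float))
```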
This abstract will be submitted for the consideration of either a poster or platform presentation at the 2018 Annual Carolinas Society of Environmental Toxicology and Chemistry held in Research Triangle Park, NC April 25-27th.
How to make predictions about future infectious disease risks
Woolhouse, Mark
2011-01-01
Formal, quantitative approaches are now widely used to make predictions about the likelihood of an infectious disease outbreak, how the disease will spread, and how to control it. Several well-established methodologies are available, including risk factor analysis, risk modelling and dynamic modelling. Even so, predictive modelling is very much the ‘art of the possible’, which tends to drive research effort towards some areas and away from others which may be at least as important. Building on the undoubted success of quantitative modelling of the epidemiology and control of human and animal diseases such as AIDS, influenza, foot-and-mouth disease and BSE, attention needs to be paid to developing a more holistic framework that captures the role of the underlying drivers of disease risks, from demography and behaviour to land use and climate change. At the same time, there is still considerable room for improvement in how quantitative analyses and their outputs are communicated to policy makers and other stakeholders. A starting point would be generally accepted guidelines for ‘good practice’ for the development and the use of predictive models. PMID:21624924
Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.
2017-01-01
Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800
NASA Astrophysics Data System (ADS)
Ogden, F. L.
2017-12-01
High-performance computing and the widespread availability of geospatial physiographic and forcing datasets have enabled consideration of flood impact predictions with longer lead times and more detailed spatial descriptions. We are now considering multi-hour flash flood forecast lead times at the subdivision level in so-called hydroblind regions away from the National Hydrography network. However, the computational demands of such models are high, necessitating a nested simulation approach. Research on hyper-resolution hydrologic modeling over the past three decades has illustrated some fundamental limits on predictability that are simultaneously related to runoff generation mechanism(s), antecedent conditions, rates and total amounts of precipitation, discretization of the model domain, and complexity or completeness of the model formulation. This latter point is an acknowledgement that hydrologic understanding in key areas related to land use, land cover, tillage practices, seasonality, and biological effects has some glaring deficiencies. This presentation reviews what is known about the interacting effects of precipitation amount, model spatial discretization, antecedent conditions, physiographic characteristics, and model formulation completeness for runoff predictions. These interactions define a region in multidimensional forcing, parameter, and process space where there are in some cases clear limits on predictability, and in other cases diminished uncertainty.
Plasticity models of material variability based on uncertainty quantification techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Reese E.; Rizzi, Francesco; Boyce, Brad
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.
2016-07-01
Fiori et al. (2015) examine the predictive capabilities of (among others) two "proxy" non-Fickian transport models, MRMT (Multi-Rate Mass Transfer) and CTRW (Continuous-Time Random Walk). In particular, they compare proxy model predictions of mean breakthrough curves (BTCs) at a sequence of control planes with near-ergodic BTCs generated through two- and three-dimensional simulations of nonreactive, mean-uniform advective transport in single realizations of stationary, randomly heterogeneous porous media. The authors find fitted proxy model parameters to be nonunique and devoid of clear physical meaning. This notwithstanding, they conclude optimistically that "i. Fitting the proxy models to match the BTC at [one control plane] automatically ensures prediction at downstream control planes [and thus] ii. … the measured BTC can be used directly for prediction, with no need to use models underlain by fitting." I show that (a) the authors' findings follow directly from (and thus confirm) theoretical considerations discussed earlier by Neuman and Tartakovsky (2009), which (b) additionally demonstrate that proxy models will lack similar predictive capabilities under more realistic, non-Markovian flow and transport conditions that prevail under flow through nonstationary (e.g., multiscale) media in the presence of boundaries and/or nonuniformly distributed sources, and/or when flow/transport are conditioned on measurements.
Pan, Feng; Reifsnider, Odette; Zheng, Ying; Proskorovsky, Irina; Li, Tracy; He, Jianming; Sorensen, Sonja V
2018-04-01
The treatment landscape in prostate cancer has changed dramatically with the emergence of new medicines in the past few years. The traditional survival partition model (SPM) cannot accurately predict long-term clinical outcomes because of its limited ability to capture the key consequences associated with this changing treatment paradigm. The objective of this study was to introduce and validate a discrete-event simulation (DES) model for prostate cancer. A DES model was developed to simulate overall survival (OS) and other clinical outcomes based on patient characteristics, treatment received, and disease progression history. We tested and validated this model with clinical trial data from the abiraterone acetate phase III trial (COU-AA-302). The model was constructed with interim data (55% death) and validated with the final data (96% death). Predicted OS values were also compared with those from the SPM. The DES model's predicted time to chemotherapy and OS are highly consistent with the final observed data. The model accurately predicts the OS hazard ratio from the final data cut (predicted: 0.74; 95% confidence interval [CI] 0.64-0.85; final actual: 0.74; 95% CI 0.6-0.88). A log-rank test comparing the observed and predicted OS curves indicated no statistically significant difference. However, the predictions from the SPM based on interim data deviated significantly from the final data. Our study showed that a DES model with properly developed risk equations offers considerable improvements over the more traditional SPM in flexibility and predictive accuracy of long-term outcomes. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
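A toy discrete-event simulation in the spirit described above: each patient's pathway is sampled event by event (here, progression then death), and cohort-level outcomes such as OS emerge from the samples. The exponential distributions and their means are made-up placeholders standing in for the study's fitted risk equations.

```python
# Minimal DES sketch: sample per-patient event times, aggregate the cohort.
import random

def simulate_patient(rng):
    t_prog = rng.expovariate(1 / 18.0)           # months to progression (assumed)
    t_death = t_prog + rng.expovariate(1 / 14.0) # post-progression survival (assumed)
    return {"time_to_progression": t_prog, "overall_survival": t_death}

def simulate_cohort(n=10000, seed=1):
    rng = random.Random(seed)
    return [simulate_patient(rng) for _ in range(n)]

# e.g. median OS across the simulated cohort:
# sorted(p["overall_survival"] for p in simulate_cohort())[5000]
```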
Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.
2015-12-07
The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower length scale atomistic simulations instead of engineering scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.
A collider observable QCD axion
Dimopoulos, Savas; Hook, Anson; Huang, Junwu; ...
2016-11-09
Here, we present a model where the QCD axion is at the TeV scale and visible at a collider via its decays. Conformal dynamics and strong CP considerations account both for the axion coupling to the standard model strongly enough to be produced at a collider and for the coincidence between the weak scale and the axion mass. The model predicts additional pseudoscalar color octets whose properties are completely determined by the axion properties, rendering the theory testable.
In-silico wear prediction for knee replacements--methodology and corroboration.
Strickland, M A; Taylor, M
2009-07-22
The capability to predict in-vivo wear of knee replacements is a valuable pre-clinical analysis tool for implant designers. Traditionally, time-consuming experimental tests provided the principal means of investigating wear. Today, computational models offer an alternative. However, the validity of these models has not been demonstrated across a range of designs and test conditions, and several different formulas are in contention for estimating wear rates, limiting confidence in the predictive power of these in-silico models. This study collates and retrospectively simulates a wide range of experimental wear tests using fast rigid-body computational models with extant wear prediction algorithms, to assess the performance of current in-silico wear prediction tools. The number of tests corroborated gives a broader, more general assessment of the performance of these wear-prediction tools, and provides better estimates of the wear 'constants' used in computational models. High-speed rigid-body modelling allows a range of alternative algorithms to be evaluated. Whilst most cross-shear (CS)-based models perform comparably, the 'A/A+B' wear model appears to offer the best predictive power amongst existing wear algorithms. However, the range and variability of experimental data leaves considerable uncertainty in the results. More experimental data with reduced variability and more detailed reporting of studies will be necessary to corroborate these models with greater confidence. With simulation times reduced to only a few minutes, these models are ideally suited to large-volume 'design of experiment' or probabilistic studies (which are essential if pre-clinical assessment tools are to begin addressing the degree of variation observed clinically and in explanted components).
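A schematic per-element wear update in the spirit of the cross-shear (CS) algorithms discussed above. The constants and the exact dependence on cross-shear vary between the published formulations the study compares (including the 'A/A+B' model), so this is a structural sketch only, with a linear CS dependence assumed for illustration.

```python
# Archard-style wear increment modulated by a cross-shear ratio.
def wear_increment(pressure, slide_dist, cross_shear, k_wear):
    """Wear depth increment for one contact element over one load step.

    pressure     contact pressure on the element (e.g. MPa)
    slide_dist   incremental sliding distance (e.g. mm)
    cross_shear  fraction of sliding perpendicular to the principal
                 molecular orientation of the polyethylene (0..1)
    k_wear       wear 'constant' calibrated against simulator tests
    """
    # linear dependence on cross-shear is an assumption, not the 'A/A+B' law
    return k_wear * cross_shear * pressure * slide_dist
```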
NASA Astrophysics Data System (ADS)
Totani, Tomonori; Takeuchi, Tsutomu T.
2002-05-01
We give an explanation for the origin of various properties observed in local infrared galaxies and make predictions for galaxy counts and cosmic background radiation (CBR) using a new model extended from that for optical/near-infrared galaxies. Important new characteristics of this study are that (1) mass scale dependence of dust extinction is introduced based on the size-luminosity relation of optical galaxies and that (2) the large-grain dust temperature Tdust is calculated based on a physical consideration for energy balance rather than by using the empirical relation between Tdust and total infrared luminosity LIR found in local galaxies, which has been employed in most previous works. Consequently, the local properties of infrared galaxies, i.e., optical/infrared luminosity ratios, LIR-Tdust correlation, and infrared luminosity function are outputs predicted by the model, while these have been inputs in a number of previous models. Our model indeed reproduces these local properties reasonably well. Then we make predictions for faint infrared counts (in 15, 60, 90, 170, 450, and 850 μm) and CBR using this model. We found results considerably different from those of most previous works based on the empirical LIR-Tdust relation; especially, it is shown that the dust temperature of starbursting primordial elliptical galaxies is expected to be very high (40-80 K), as often seen in starburst galaxies or ultraluminous infrared galaxies in the local and high-z universe. This indicates that intense starbursts of forming elliptical galaxies should have occurred at z~2-3, in contrast to the previous results that significant starbursts beyond z~1 tend to overproduce the far-infrared (FIR) CBR detected by COBE/FIRAS. On the other hand, our model predicts that the mid-infrared (MIR) flux from warm/nonequilibrium dust is relatively weak in such galaxies making FIR CBR, and this effect reconciles the prima facie conflict between the upper limit on MIR CBR from TeV gamma-ray observations and the COBE detections of FIR CBR. The intergalactic optical depth of TeV gamma rays based on our model is also presented.
Pilkington, Sarah M; Crowhurst, Ross; Hilario, Elena; Nardozza, Simona; Fraser, Lena; Peng, Yongyan; Gunaseelan, Kularajathevan; Simpson, Robert; Tahir, Jibran; Deroles, Simon C; Templeton, Kerry; Luo, Zhiwei; Davy, Marcus; Cheng, Canhong; McNeilage, Mark; Scaglione, Davide; Liu, Yifei; Zhang, Qiong; Datson, Paul; De Silva, Nihal; Gardiner, Susan E; Bassett, Heather; Chagné, David; McCallum, John; Dzierzon, Helge; Deng, Cecilia; Wang, Yen-Yi; Barron, Lorna; Manako, Kelvina; Bowen, Judith; Foster, Toshi M; Erridge, Zoe A; Tiffin, Heather; Waite, Chethi N; Davies, Kevin M; Grierson, Ella P; Laing, William A; Kirk, Rebecca; Chen, Xiuyin; Wood, Marion; Montefiori, Mirco; Brummell, David A; Schwinn, Kathy E; Catanach, Andrew; Fullerton, Christina; Li, Dawei; Meiyalaghan, Sathiyamoorthy; Nieuwenhuizen, Niels; Read, Nicola; Prakash, Roneel; Hunter, Don; Zhang, Huaibi; McKenzie, Marian; Knäbel, Mareike; Harris, Alastair; Allan, Andrew C; Gleave, Andrew; Chen, Angela; Janssen, Bart J; Plunkett, Blue; Ampomah-Dwamena, Charles; Voogd, Charlotte; Leif, Davin; Lafferty, Declan; Souleyre, Edwige J F; Varkonyi-Gasic, Erika; Gambi, Francesco; Hanley, Jenny; Yao, Jia-Long; Cheung, Joey; David, Karine M; Warren, Ben; Marsh, Ken; Snowden, Kimberley C; Lin-Wang, Kui; Brian, Lara; Martinez-Sanchez, Marcela; Wang, Mindy; Ileperuma, Nadeesha; Macnee, Nikolai; Campin, Robert; McAtee, Peter; Drummond, Revel S M; Espley, Richard V; Ireland, Hilary S; Wu, Rongmei; Atkinson, Ross G; Karunairetnam, Sakuntala; Bulley, Sean; Chunkath, Shayhan; Hanley, Zac; Storey, Roy; Thrimawithana, Amali H; Thomson, Susan; David, Charles; Testolin, Raffaele; Huang, Hongwen; Hellens, Roger P; Schaffer, Robert J
2018-04-16
Most published genome sequences are drafts, and most are dominated by computational gene prediction. Draft genomes typically incorporate considerable sequence data that are not assigned to chromosomes, and predicted genes without quality confidence measures. The current Actinidia chinensis (kiwifruit) 'Hongyang' draft genome has 164 Mb of sequences unassigned to pseudo-chromosomes, and omissions have been identified in the gene models. A second A. chinensis genome (genotype Red5) was fully sequenced. This new sequence resulted in a 554.0 Mb assembly with all but 6 Mb assigned to pseudo-chromosomes. Pseudo-chromosomal comparisons showed that a considerable number of translocation events have occurred following a whole genome duplication (WGD) event, some consistent with centromeric Robertsonian-like translocations. RNA sequencing data from 12 tissues and ab initio analysis informed a genome-wide manual annotation, using the WebApollo tool. In total, 33,044 gene loci represented by 33,123 isoforms were identified, named, and tagged for quality of evidential support. Of these, 3114 (9.4%) were identical to a protein within the 'Hongyang' Kiwifruit Information Resource (KIR v2). Some proportion of the differences will be varietal polymorphisms. However, as most computationally predicted Red5 models required manual re-annotation, this proportion is expected to be small. The quality of the new gene models was tested by fully sequencing 550 cloned 'Hort16A' cDNAs and comparing them with the predicted protein models for Red5 and both the original 'Hongyang' assembly and the revised annotation from KIR v2. Only 48.9% and 63.5% of the cDNAs had a match with 90% identity or better to the original and revised 'Hongyang' annotation, respectively, compared with 90.9% to the Red5 models. Our study highlights the need to take a cautious approach to draft genomes and computationally predicted genes. Our use of the manual annotation tool WebApollo facilitated manual checking and correction of gene models, enabling improvement of computational prediction. This utility was especially relevant for certain types of gene families such as the EXPANSIN-like genes. This high-quality gene set will supply the kiwifruit and general plant community with a new tool for genomics and other comparative analyses.
Ercanli, İlker; Kahriman, Aydın
2015-03-01
We assessed the effect of stand structural diversity, including the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB) and developed statistical prediction models for the stand AGB values, including stand structural diversity indices and some stand attributes. The AGB prediction model including only stand attributes accounted for 85% of the total variance in AGB (R²) with an Akaike's information criterion (AIC) of 807.2407, Bayesian information criterion (BIC) of 809.5397, Schwarz Bayesian criterion (SBC) of 818.0426, and root mean square error (RMSE) of 38.529 Mg. After inclusion of the stand structural diversity in the model structure, considerable improvement was observed in statistical accuracy: the model accounted for 97.5% of the total variance in AGB, with an AIC of 614.1819, BIC of 617.1242, SBC of 633.0853, and RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing the stand structural diversity can be employed as significant independent variables to predict the AGB production of the Scotch pine stand. Further, including the stand diversity indices in the AGB prediction model with the stand attributes provided important predictive contributions in estimating the total variance in AGB.
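As an illustration of how a structural diversity index enters such a model, the sketch below computes a Shannon index from per-class shares and plugs it into a linear predictor. The regression form and all coefficients are hypothetical placeholders, not the fitted model from the study.

```python
# Shannon diversity feeding a hypothetical AGB regression.
import numpy as np

def shannon_index(class_shares):
    """Shannon diversity over, e.g., per-species or per-size-class shares."""
    p = np.asarray(class_shares, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log(0) contributes nothing
    return -np.sum(p * np.log(p))

def agb_predict(basal_area, height, h_index, b=(12.0, 1.8, 0.9, 25.0)):
    """Illustrative linear predictor; coefficients b are made up."""
    return b[0] + b[1] * basal_area + b[2] * height + b[3] * h_index
```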
Managing Salary Equity. AIR Forum 1981 Paper.
ERIC Educational Resources Information Center
Prather, James E.; Posey, Ellen I.
Technical considerations in the development of a salary equity model based upon regression analysis are reviewed, and a simplified salary prediction equation is examined. Application and communication of the results of the analysis within the existing operational context of a postsecondary institution are also addressed. The literature is…
Innovating Conservation Agriculture: The Case of No-Till Cropping
ERIC Educational Resources Information Center
Coughenour, C. Milton
2003-01-01
The extensive sociological studies of conservation agriculture have provided considerable understanding of farmers' use of conservation practices, but attempts to develop predictive models have failed. Reviews of research findings question the utility of the conceptual and methodological perspectives of prior research. The argument advanced here…
DOT National Transportation Integrated Search
2009-07-01
"Considerable data exists for soils that were tested and documented, both for native properties and : properties with pozzolan stabilization. While the data exists there was no database for the Nebraska : Department of Roads to retrieve this data for...
Nicholas C. Coops; Richard H. Waring; Todd A. Schroeder
2009-01-01
Although long-lived tree species experience considerable environmental variation over their life spans, their geographical distributions reflect sensitivity mainly to mean monthly climatic conditions. We introduce an approach that incorporates a physiologically based growth model to illustrate how a half-dozen tree species differ in their responses to monthly variation...
A Radial Basis Function Approach to Financial Time Series Analysis
1993-12-01
including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data … collection of practical techniques to address these issues for a modeling methodology. Radial Basis Function networks. These techniques include efficient … methodology often then amounts to a careful consideration of the interplay between model complexity and reliability. These will be recurrent themes
Mortality Probability Model III and Simplified Acute Physiology Score II
Vasilevskis, Eduard E.; Kuzniewicz, Michael W.; Cason, Brian A.; Lane, Rondall K.; Dean, Mitzi L.; Clay, Ted; Rennie, Deborah J.; Vittinghoff, Eric; Dudley, R. Adams
2009-01-01
Background: To develop and compare ICU length-of-stay (LOS) risk-adjustment models using three commonly used mortality or LOS prediction models. Methods: Between 2001 and 2004, we performed a retrospective, observational study of 11,295 ICU patients from 35 hospitals in the California Intensive Care Outcomes Project. We compared the accuracy of the following three LOS models: a recalibrated acute physiology and chronic health evaluation (APACHE) IV-LOS model; and models developed using risk factors in the mortality probability model III at zero hours (MPM0) and the simplified acute physiology score (SAPS) II mortality prediction model. We evaluated models by calculating the following: (1) grouped coefficients of determination; (2) differences between observed and predicted LOS across subgroups; and (3) intraclass correlations of observed/expected LOS ratios between models. Results: The grouped coefficients of determination were APACHE IV with coefficients recalibrated to the LOS values of the study cohort (APACHE IVrecal) [R² = 0.422], mortality probability model III at zero hours (MPM0 III) [R² = 0.279], and simplified acute physiology score (SAPS II) [R² = 0.008]. For each decile of predicted ICU LOS, the mean predicted LOS vs the observed LOS was significantly different (p ≤ 0.05) for three, two, and six deciles using APACHE IVrecal, MPM0 III, and SAPS II, respectively. Plots of the predicted vs the observed LOS ratios of the hospitals revealed a threefold variation in LOS among hospitals with high model correlations. Conclusions: APACHE IV and MPM0 III were more accurate than SAPS II for the prediction of ICU LOS. APACHE IV is the most accurate and best calibrated model. Although it is less accurate, MPM0 III may be a reasonable option if the data collection burden or the treatment effect bias is a consideration. PMID:19363210
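The observed/expected ratio and grouped R² calculations described above are straightforward to reproduce in outline. The sketch below assumes numpy arrays of patient-level observed LOS, model-predicted LOS, and hospital identifiers, and interprets the "grouped" coefficient of determination as R² on hospital-level means, which is one plausible reading rather than the paper's exact definition.

```python
# Per-hospital O/E LOS ratios and a hospital-level R².
import numpy as np

def oe_ratios(observed, predicted, hospital_ids):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    hospital_ids = np.asarray(hospital_ids)
    return {h: observed[hospital_ids == h].sum() /
               predicted[hospital_ids == h].sum()
            for h in np.unique(hospital_ids)}

def grouped_r2(observed, predicted, hospital_ids):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    hospital_ids = np.asarray(hospital_ids)
    hospitals = np.unique(hospital_ids)
    obs_m = np.array([observed[hospital_ids == h].mean() for h in hospitals])
    pred_m = np.array([predicted[hospital_ids == h].mean() for h in hospitals])
    ss_res = np.sum((obs_m - pred_m) ** 2)
    ss_tot = np.sum((obs_m - obs_m.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```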
Macleod, John; Metcalfe, Chris; Smith, George Davey; Hart, Carole
2007-09-01
To assess the value of psychosocial risk factors in discriminating between individuals at higher and lower risk of coronary heart disease, using risk prediction equations. Prospective observational study. Scotland. 5191 employed men aged 35 to 64 years and free of coronary heart disease at study enrollment. Area under receiver operating characteristic (ROC) curves for risk prediction equations including different risk factors for coronary heart disease. During the first 10 years of follow-up, 203 men died of coronary heart disease and a further 200 were admitted to hospital with this diagnosis. Area under the ROC curve for the standard Framingham coronary risk factors was 74.5%. Addition of "vital exhaustion" and psychological stress led to areas under the ROC curve of 74.5% and 74.6%, respectively. Addition of current social class and lifetime social class to the standard Framingham equation gave areas under the ROC curve of 74.6% and 74.9%, respectively. In no case was there strong evidence for improved discrimination of the model containing the novel risk factor over the standard model. Consideration of psychosocial risk factors, including those that are strong independent predictors of heart disease, does not substantially influence the ability of risk prediction tools to discriminate between individuals at higher and lower risk of coronary heart disease.
Kasthurirathne, Suranga N; Vest, Joshua R; Menachemi, Nir; Halverson, Paul K; Grannis, Shaun J
2018-01-01
A growing variety of diverse data sources is emerging to better inform health care delivery and health outcomes. We sought to evaluate the capacity for clinical, socioeconomic, and public health data sources to predict the need for various social service referrals among patients at a safety-net hospital. We integrated patient clinical data and community-level data representing patients' social determinants of health (SDH) obtained from multiple sources to build random forest decision models to predict the need for any, mental health, dietitian, social work, or other SDH service referrals. To assess the impact of SDH on improving performance, we built separate decision models using clinical and SDH determinants and clinical data only. Decision models predicting the need for any, mental health, and dietitian referrals yielded sensitivity, specificity, and accuracy measures ranging between 60% and 75%. Specificity and accuracy scores for social work and other SDH services ranged between 67% and 77%, while sensitivity scores were between 50% and 63%. Area under the receiver operating characteristic curve values for the decision models ranged between 70% and 78%. Models for predicting the need for any services reported positive predictive values between 65% and 73%. Positive predictive values for predicting individual outcomes were below 40%. The need for various social service referrals can be predicted with considerable accuracy using a wide range of readily available clinical and community data that measure socioeconomic and public health conditions. While the use of SDH did not result in significant performance improvements, our approach represents a novel and important application of risk predictive modeling. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
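A minimal sketch of the modeling pattern described in this abstract, using scikit-learn: a random forest trained on clinical features alone versus clinical plus SDH features, scored with sensitivity, specificity, accuracy, and AUC. All features and labels are synthetic stand-ins.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=40, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)
clinical = X[:, :25]                      # stand-in for clinical features
clinical_sdh = X                          # stand-in for clinical + SDH features

for name, feats in [("clinical only", clinical), ("clinical + SDH", clinical_sdh)]:
    Xtr, Xte, ytr, yte = train_test_split(feats, y, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
    pred = rf.predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
    auc = roc_auc_score(yte, rf.predict_proba(Xte)[:, 1])
    print(f"{name}: sens={tp/(tp+fn):.2f} spec={tn/(tn+fp):.2f} "
          f"acc={accuracy_score(yte, pred):.2f} AUC={auc:.2f}")
```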
Electrolytic hydrogen production: An analysis and review
NASA Technical Reports Server (NTRS)
Evangelista, J.; Phillips, B.; Gordon, L.
1975-01-01
The thermodynamics of water electrolysis cells is presented, followed by a review of current and future technology of commercial cells. The irreversibilities involved are analyzed and the resulting equations assembled into a computer simulation model of electrolysis cell efficiency. The model is tested by comparing predictions based on the model to actual commercial cell performance, and a parametric investigation of operating conditions is performed. Finally, the simulation model is applied to a study of electrolysis cell dynamics through consideration of an ideal pulsed electrolyzer.
NASA Technical Reports Server (NTRS)
Pinho, Silvestre T.; Davila, C. G.; Camanho, P. P.; Iannucci, L.; Robinson, P.
2005-01-01
A set of three-dimensional failure criteria for laminated fiber-reinforced composites, denoted LaRC04, is proposed. The criteria are based on physical models for each failure mode and take into consideration non-linear matrix shear behaviour. The model for matrix compressive failure is based on the Mohr-Coulomb criterion and it predicts the fracture angle. Fiber kinking is triggered by an initial fiber misalignment angle and by the rotation of the fibers during compressive loading. The plane of fiber kinking is predicted by the model. LaRC04 consists of 6 expressions that can be used directly for design purposes. Several applications involving a broad range of load combinations are presented and compared to experimental data and other existing criteria. Predictions using LaRC04 correlate well with the experimental data, arguably better than most existing criteria. The good correlation seems to be attributable to the physical soundness of the underlying failure models.
Improvement and Application of the Softened Strut-and-Tie Model
NASA Astrophysics Data System (ADS)
Fan, Guoxi; Wang, Debin; Diao, Yuhong; Shang, Huaishuai; Tang, Xiaocheng; Sun, Hai
2017-11-01
Previous experimental research indicates that reinforced concrete beam-column joints play an important role in the mechanical properties of moment-resisting frame structures and therefore require proper design. The aims of this paper are to predict the joint carrying capacity and crack development theoretically. Thus, a rational model needs to be developed. Based on these considerations, the softened strut-and-tie model is selected, introduced, and analyzed. Four adjustments are made: modifications of the depth of the diagonal strut, the inclination angle of the diagonal compression strut, the smeared stress of mild steel bars embedded in concrete, and the softening coefficient. After that, the carrying capacity of the beam-column joint and crack development are predicted using the improved softened strut-and-tie model. Based on the test results, it is not difficult to find that the improved softened strut-and-tie model can predict the joint carrying capacity and crack development with sufficient accuracy.
The importance of radiation for semiempirical water-use efficiency models
NASA Astrophysics Data System (ADS)
Boese, Sven; Jung, Martin; Carvalhais, Nuno; Reichstein, Markus
2017-06-01
Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39-47 %) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.
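A sketch of the model comparison the abstract describes, assuming the baseline semiempirical form ET = GPP·sqrt(VPD)/uWUE (one common formulation; the paper's exact equation may differ), first with a constant intercept and then with an intercept that is linear in global radiation. All inputs are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def et_base(X, uwue):
    gpp, vpd, rg = X
    return gpp * np.sqrt(vpd) / uwue

def et_intercept(X, uwue, c):
    return et_base(X, uwue) + c

def et_radiation(X, uwue, a, b):
    gpp, vpd, rg = X
    return et_base(X, uwue) + a + b * rg

rng = np.random.default_rng(1)
gpp = rng.uniform(1, 12, 500)
vpd = rng.uniform(0.2, 3.0, 500)
rg = rng.uniform(50, 350, 500)
et = gpp * np.sqrt(vpd) / 3.0 + 0.1 + 0.002 * rg + rng.normal(0, 0.2, 500)

for fn, p0 in [(et_base, [3]), (et_intercept, [3, 0]), (et_radiation, [3, 0, 0])]:
    popt, _ = curve_fit(fn, (gpp, vpd, rg), et, p0=p0)
    rmse = np.sqrt(np.mean((fn((gpp, vpd, rg), *popt) - et) ** 2))
    print(f"{fn.__name__}: RMSE = {rmse:.3f}")
```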
Uncertainty analysis of a groundwater flow model in East-central Florida.
Sepúlveda, Nicasio; Doherty, John
2015-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan Aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The "Null Space Monte Carlo" method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model's capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial or temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context. © 2014, National Ground Water Association.
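A conceptual illustration of the null-space idea underlying the Null Space Monte Carlo method (not the PEST implementation): parameter perturbations confined to the null space of the model Jacobian leave the simulated observations unchanged to first order, so every realization remains calibrated.

```python
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_par = 8, 20                      # far more parameters than observations
J = rng.normal(size=(n_obs, n_par))       # hypothetical Jacobian d(obs)/d(par)
p_cal = rng.normal(size=n_par)            # calibrated parameter values

# Basis for the null space of J from the SVD: rows of Vt beyond rank(J).
_, _, Vt = np.linalg.svd(J)
null_basis = Vt[n_obs:]                   # (n_par - n_obs) basis vectors

realizations = []
for _ in range(300):
    coeffs = rng.normal(size=null_basis.shape[0])
    realizations.append(p_cal + null_basis.T @ coeffs)

# Each realization changes the simulated observations by ~0 (to first order):
worst = max(np.linalg.norm(J @ (p - p_cal)) for p in realizations)
print(f"max |J (p - p_cal)| over realizations: {worst:.2e}")
```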
Predictive accuracy of a ground-water model--Lessons from a postaudit
Konikow, Leonard F.
1986-01-01
Hydrogeologic studies commonly include the development, calibration, and application of a deterministic simulation model. To help assess the value of using such models to make predictions, a postaudit was conducted on a previously studied area in the Salt River and lower Santa Cruz River basins in central Arizona. A deterministic, distributed-parameter model of the ground-water system in these alluvial basins was calibrated by Anderson (1968) using about 40 years of data (1923–64). The calibrated model was then used to predict future water-level changes during the next 10 years (1965–74). Examination of actual water-level changes in 77 wells from 1965–74 indicates a poor correlation between observed and predicted water-level changes. The differences have a mean of 73 ft (that is, predicted declines consistently exceeded those observed) and a standard deviation of 47 ft. The bias in the predicted water-level change can be accounted for by the large error in the assumed total pumpage during the prediction period. However, the spatial distribution of errors in predicted water-level change does not correlate with the spatial distribution of errors in pumpage. Consequently, the lack of precision probably is not related only to errors in assumed pumpage, but may indicate the presence of other sources of error in the model, such as the two-dimensional representation of a three-dimensional problem or the lack of consideration of land-subsidence processes. This type of postaudit is a valuable method of verifying a model, and an evaluation of predictive errors can provide an increased understanding of the system and aid in assessing the value of undertaking development of a revised model.
The Threshold Bias Model: A Mathematical Model for the Nomothetic Approach of Suicide
Folly, Walter Sydney Dutra
2011-01-01
Background Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. Methodology/Principal Findings A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. Conclusions/Significance The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health. PMID:21909431
The threshold bias model: a mathematical model for the nomothetic approach of suicide.
Folly, Walter Sydney Dutra
2011-01-01
Comparative and predictive analyses of suicide data from different countries are difficult to perform due to varying approaches and the lack of comparative parameters. A simple model (the Threshold Bias Model) was tested for comparative and predictive analyses of suicide rates by age. The model comprises a six-parameter distribution that was applied to the USA suicide rates by age for the years 2001 and 2002. Subsequently, linear extrapolations of the parameter values obtained for these years were performed in order to estimate the values corresponding to the year 2003. The calculated distributions agreed reasonably well with the aggregate data. The model was also used to determine the age above which suicide rates become statistically observable in the USA, Brazil and Sri Lanka. The Threshold Bias Model has considerable potential applications in demographic studies of suicide. Moreover, since the model can be used to predict the evolution of suicide rates based on information extracted from past data, it will be of great interest to suicidologists and other researchers in the field of mental health.
NASA Technical Reports Server (NTRS)
Cess, R. D.; Zhang, M. H.; Zhou, Y.; Jing, X.; Dvortsov, V.
1996-01-01
To investigate the absorption of shortwave radiation by clouds, we have collocated satellite and surface measurements of shortwave radiation at several locations. Considerable effort has been directed toward understanding and minimizing sampling errors caused by the satellite measurements being instantaneous and over a grid that is much larger than the field of view of an upward facing surface pyranometer. The collocated data indicate that clouds absorb considerably more shortwave radiation than is predicted by theoretical models. This is consistent with the finding from both satellite and aircraft measurements that observed clouds are darker than model clouds. In the limit of thick clouds, observed top-of-the-atmosphere albedos do not exceed a value of 0.7, whereas in models the maximum albedo can be 0.8.
Spatially explicit modeling of particulate nutrient flux in Large global rivers
NASA Astrophysics Data System (ADS)
Cohen, S.; Kettner, A.; Mayorga, E.; Harrison, J. A.
2017-12-01
Water, sediment, nutrient and carbon fluxes along river networks have undergone considerable alterations in response to anthropogenic and climatic changes, with significant consequences to infrastructure, agriculture, water security, ecology and geomorphology worldwide. However, in a global setting, these changes in fluvial fluxes and their spatial and temporal characteristics are poorly constrained, due to the limited availability of continuous and long-term observations. We present results from a new global-scale particulate modeling framework (WBMsedNEWS) that combines the Global NEWS watershed nutrient export model with the spatially distributed WBMsed water and sediment model. We compare the model predictions against multiple observational datasets. The results indicate that the model is able to accurately predict particulate nutrient (Nitrogen, Phosphorus and Organic Carbon) fluxes on an annual time scale. Analysis of intra-basin nutrient dynamics and fluxes to global oceans is presented.
The Use of Particle/Substrate Material Models in Simulation of Cold-Gas Dynamic-Spray Process
NASA Astrophysics Data System (ADS)
Rahmati, Saeed; Ghaei, Abbas
2014-02-01
Cold spray is a coating deposition method in which solid particles are accelerated toward the substrate using a low-temperature supersonic gas flow. Many numerical studies have been carried out in the literature in order to study this process in more depth. Despite the inability of the Johnson-Cook plasticity model to predict material behavior at high strain rates, it is the model most frequently used in simulations of cold spray. Therefore, this research was devoted to comparing the performance of different material models in the simulation of the cold spray process. Six different material models, appropriate for high strain-rate plasticity, were employed in finite element simulation of the cold spray process for copper. The results showed that the material model had a considerable effect on the predicted deformed shapes.
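For reference, the Johnson-Cook flow-stress model discussed above has the closed form sigma = (A + B·eps^n)(1 + C·ln(eps_dot/eps_dot0))(1 - T*^m), with T* the homologous temperature. The sketch below evaluates it with a commonly quoted parameter set for annealed OFHC copper; the values are illustrative, not those used in the paper.

```python
import numpy as np

def johnson_cook(eps, eps_dot, T, A=90e6, B=292e6, n=0.31, C=0.025, m=1.09,
                 eps_dot0=1.0, T_ref=298.0, T_melt=1356.0):
    """Flow stress in Pa for plastic strain eps, strain rate eps_dot (1/s), T (K)."""
    t_star = (T - T_ref) / (T_melt - T_ref)
    return (A + B * eps**n) * (1 + C * np.log(eps_dot / eps_dot0)) * (1 - t_star**m)

# Cold-spray impacts involve very high strain rates; compare two regimes:
for rate in (1e3, 1e7):
    sigma = johnson_cook(eps=0.5, eps_dot=rate, T=400.0)
    print(f"strain rate {rate:.0e} 1/s -> flow stress {sigma/1e6:.0f} MPa")
```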
Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza
2016-09-01
The application of the Quality by Design principles is one of the key issues of recent pharmaceutical developments. In the past decade a lot of knowledge was collected about the practical realization of the concept, but there are still a lot of unanswered questions. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. The completion of RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce the uncertainties. Nevertheless, since different studies focus on the use of a given DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2 level full factorial, Central Composite, Box-Behnken, 3 level fractional and 3 level full factorial design) on the model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and the use of design layouts in which the extreme values of the factors are better represented is recommended. Copyright © 2016 Elsevier B.V. All rights reserved.
A Gaussian Processes Technique for Short-term Load Forecasting with Considerations of Uncertainty
NASA Astrophysics Data System (ADS)
Ohmi, Masataro; Mori, Hiroyuki
In this paper, an efficient method is proposed to deal with short-term load forecasting with Gaussian Processes. Short-term load forecasting plays a key role in smooth power system operation such as economic load dispatching, unit commitment, etc. Recently, the deregulated and competitive power market has increased the degree of uncertainty. As a result, it is more important to obtain better prediction results to save costs. One of the most important aspects is that power system operators need the upper and lower bounds of the predicted load to deal with the uncertainty, while also requiring more accurate predicted values. The proposed method is based on the Bayes model, in which the output is expressed as a distribution rather than a point. To realize the model efficiently, this paper proposes Gaussian Processes, which consist of the Bayes linear model and a kernel machine, to obtain the distribution of the predicted value. The proposed method is successfully applied to real data of daily maximum load forecasting.
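The core mechanism the abstract relies on can be sketched with any Gaussian process library: the predictive distribution yields upper and lower bounds on the forecast directly. A minimal scikit-learn example on a synthetic weekly-cycle load series:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
days = np.arange(60.0).reshape(-1, 1)
load = 100 + 10 * np.sin(2 * np.pi * days.ravel() / 7) + rng.normal(0, 2, 60)

kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(days, load)

future = np.arange(60.0, 67.0).reshape(-1, 1)
mean, std = gp.predict(future, return_std=True)   # predictive distribution
for d, mu, s in zip(future.ravel(), mean, std):
    print(f"day {d:.0f}: {mu:6.1f} MW (95% band {mu-1.96*s:6.1f} .. {mu+1.96*s:6.1f})")
```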
Prediction of Human Cytochrome P450 Inhibition Using a Multitask Deep Autoencoder Neural Network.
Li, Xiang; Xu, Youjun; Lai, Luhua; Pei, Jianfeng
2018-05-30
Adverse side effects of drug-drug interactions induced by human cytochrome P450 (CYP450) inhibition are an important consideration in drug discovery. It is highly desirable to develop computational models that can predict the inhibitive effect of a compound against a specific CYP450 isoform. In this study, we developed a multitask model for concurrent inhibition prediction of five major CYP450 isoforms, namely, 1A2, 2C9, 2C19, 2D6, and 3A4. The model was built by training a multitask autoencoder deep neural network (DNN) on a large dataset containing more than 13 000 compounds, extracted from the PubChem BioAssay Database. We demonstrate that the multitask model gave better prediction results than single-task models, previously reported classifiers, and traditional machine learning methods on an average of five prediction tasks. Our multitask DNN model gave average prediction accuracies of 86.4% for the 10-fold cross-validation and 88.7% for the external test datasets. In addition, we built linear regression models to quantify how the other tasks contributed to the prediction difference of a given task between single-task and multitask models, and we explained under what conditions the multitask model will outperform the single-task model, which suggested how to use multitask DNN models more effectively. We applied sensitivity analysis to extract useful knowledge about CYP450 inhibition, which may shed light on the structural features of these isoforms and give hints about how to avoid side effects during drug development. Our models are freely available at http://repharma.pku.edu.cn/deepcyp/home.php or http://www.pkumdl.cn/deepcyp/home.php .
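A minimal sketch of the multitask-versus-single-task comparison on synthetic fingerprints; this shows the multitask pattern only, not the paper's autoencoder architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(2000, 128)).astype(float)    # fake binary fingerprints
W = rng.normal(size=(128, 5))
Y = (X @ W + rng.normal(0, 4, size=(2000, 5)) > 0).astype(int)  # 5 isoform labels

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, random_state=0)

# 2-D binary Y makes MLPClassifier train one network with five joint outputs.
multitask = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                          random_state=0).fit(Xtr, Ytr)
acc_multi = accuracy_score(Yte.ravel(), multitask.predict(Xte).ravel())

accs_single = []
for task in range(5):                                     # five separate models
    m = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0).fit(Xtr, Ytr[:, task])
    accs_single.append(accuracy_score(Yte[:, task], m.predict(Xte)))

print(f"multitask mean accuracy  : {acc_multi:.3f}")
print(f"single-task mean accuracy: {np.mean(accs_single):.3f}")
```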
NASA Astrophysics Data System (ADS)
Paz, Shlomit; Goldstein, Pavel; Kordova-Biezuner, Levana; Adler, Lea
2017-04-01
Exposure to benzene has been associated with multiple severe impacts on health. This notwithstanding, at most monitoring stations, benzene is not monitored on a regular basis. The aims of the study were to compare benzene rates in different urban environments (a region with heavy traffic and an industrial region), to analyse the relationship between benzene and meteorological parameters in a Mediterranean climate type, to estimate the linkages between benzene and NOx and to suggest a prediction model for benzene rates based on NOx levels in order to contribute to a better estimation of benzene. Data were used from two different monitoring stations, located on the eastern Mediterranean coast: 1) a traffic monitoring station in Tel Aviv, Israel (TLV), located in an urban region with heavy traffic; 2) a general air quality monitoring station in Haifa Bay (HIB), located in Israel's main industrial region. At each station, hourly, daily, monthly, seasonal, and annual data of benzene, NOx, mean temperature, relative humidity, inversion level, and temperature gradient were analysed over three years: 2008, 2009, and 2010. A prediction model for benzene rates based on NOx levels (which are monitored regularly) was developed to contribute to a better estimation of benzene. The severity of benzene pollution was found to be considerably higher at the traffic monitoring station (TLV) than at the general air quality station (HIB), despite the location of the latter in an industrial area. Hourly, daily, monthly, seasonal, and annual patterns have been shown to coincide with anthropogenic activities (traffic), the day of the week, and atmospheric conditions. A strong correlation between NOx and benzene allowed the development of a prediction model for benzene rates, based on NOx, the day of the week, and the month. The model succeeded in predicting the benzene values throughout the year (except for September) and might be useful for identifying potential risk of benzene in other urban environments.
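A sketch of the kind of regression the abstract describes, with benzene modeled from NOx plus day-of-week and month indicators; the functional form and all data below are assumptions for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
dates = pd.date_range("2008-01-01", "2010-12-31", freq="D")
nox = rng.gamma(3.0, 15.0, len(dates))                        # synthetic NOx (ppb)
benzene = 0.03 * nox + 0.4 * (dates.month < 4) + rng.normal(0, 0.3, len(dates))

X = pd.DataFrame({"nox": nox,
                  "dow": dates.dayofweek.astype(str),
                  "month": dates.month.astype(str)})
X = pd.get_dummies(X, columns=["dow", "month"], drop_first=True)

model = LinearRegression().fit(X, benzene)
print(f"R^2 on training data: {model.score(X, benzene):.2f}")
```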
Does more mean less? The value of information for conservation planning under sea level rise.
Runting, Rebecca K; Wilson, Kerrie A; Rhodes, Jonathan R
2013-02-01
Many studies have explored the benefits of adopting more sophisticated modelling techniques or spatial data in terms of our ability to accurately predict ecosystem responses to global change. However, we currently know little about whether the improved predictions will actually lead to better conservation outcomes once the costs of gaining improved models or data are accounted for. This severely limits our ability to make strategic decisions for adaptation to global pressures, particularly in landscapes subject to dynamic change such as the coastal zone. In such landscapes, the global phenomenon of sea level rise is a critical consideration for preserving biodiversity. Here, we address this issue in the context of making decisions about where to locate a reserve system to preserve coastal biodiversity with a limited budget. Specifically, we determined the cost-effectiveness of investing in high-resolution elevation data and process-based models for predicting wetland shifts in a coastal region of South East Queensland, Australia. We evaluated the resulting priority areas for reserve selection to quantify the cost-effectiveness of investment in better quantifying biological and physical processes. We show that, in this case, it is considerably more cost effective to use a process-based model and high-resolution elevation data, even if this requires a substantial proportion of the project budget to be expended (up to 99% in one instance). The less accurate model and data set failed to identify areas of high conservation value, reducing the cost-effectiveness of the resultant conservation plan. This suggests that when developing conservation plans in areas where sea level rise threatens biodiversity, investing in high-resolution elevation data and process-based models to predict shifts in coastal ecosystems may be highly cost effective. A future research priority is to determine how this cost-effectiveness varies among different regions across the globe. © 2012 Blackwell Publishing Ltd.
Computational modeling of GTA (gas tungsten arc) welding with emphasis on surface tension effects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zacharia, T.; David, S.A.
1990-01-01
A computational study of the convective heat transfer in the weld pool during gas tungsten arc (GTA) welding of Type 304 stainless steel is presented. The solution of the transport equations is based on a control-volume approach which directly utilizes the integral form of the governing equations. The computational model considers buoyancy, electromagnetic, and surface tension forces in the solution of convective heat transfer in the weld pool. In addition, the model treats the weld pool surface as a deformable free surface. The computational model includes weld metal vaporization and temperature-dependent thermophysical properties. The results indicate that consideration of weld pool vaporization effects and temperature-dependent thermophysical properties significantly influences the weld model predictions. Theoretical predictions of the weld pool surface temperature distributions and the cross-sectional weld pool size and shape were compared with corresponding experimental measurements. Comparison of the theoretically predicted and the experimentally obtained surface temperature profiles indicated agreement within ±8%. The predicted weld cross-section profiles were found to agree very well with actual weld cross-sections for the best theoretical models. 26 refs., 8 figs.
Burden, Natalie; Maynard, Samuel K; Weltje, Lennart; Wheeler, James R
2016-10-01
The European Plant Protection Products Regulation 1107/2009 requires that registrants establish whether pesticide metabolites pose a risk to the environment. Fish acute toxicity assessments may be carried out to this end. Considering the total number of pesticide (re-)registrations, the number of metabolites can be considerable, and this testing could therefore use many vertebrates. EFSA's recent "Guidance on tiered risk assessment for plant protection products for aquatic organisms in edge-of-field surface waters" outlines opportunities to apply non-testing methods, such as Quantitative Structure Activity Relationship (QSAR) models. However, a scientific evidence base is necessary to support the use of QSARs in predicting acute fish toxicity of pesticide metabolites. Widespread application and subsequent regulatory acceptance of such an approach would reduce the number of animals used. The work presented here intends to provide this evidence base by means of retrospective data analysis. Experimental fish LC50 values for 150 metabolites were extracted from the Pesticide Properties Database (http://sitem.herts.ac.uk/aeru/ppdb/en/atoz.htm). QSAR calculations were performed to predict fish acute toxicity values for these metabolites using the US EPA's ECOSAR software. The most conservative predicted LC50 values generated by ECOSAR were compared with experimental LC50 values. There was a significant correlation between predicted and experimental fish LC50 values (Spearman rs = 0.6304, p < 0.0001). For 62% of the metabolites assessed, the QSAR-predicted values are equal to or lower than their respective experimental values. Refined analysis, taking into account data quality and experimental variation, increases the proportion of sufficiently predictive estimates to 91%. For eight of the nine outliers, there are plausible explanations for the disparity between measured and predicted LC50 values. Following detailed consideration of the robustness of this non-testing approach, it can be concluded that there is a strong data-driven rationale for the applicability of QSAR models in the metabolite assessment scheme recommended by EFSA. As such, there is value in further refining this approach to improve the method and enable its future incorporation into regulatory guidance and practice. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
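The headline statistics in this analysis reduce to a rank correlation and a conservative-prediction fraction, sketched below on synthetic log-LC50 values:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(6)
log_lc50_exp = rng.normal(1.0, 1.2, 150)                      # log10 mg/L, synthetic
log_lc50_pred = log_lc50_exp + rng.normal(-0.3, 0.6, 150)     # biased low = conservative

rho, p = spearmanr(log_lc50_pred, log_lc50_exp)
conservative = np.mean(log_lc50_pred <= log_lc50_exp)
print(f"Spearman rho = {rho:.3f} (p = {p:.2g})")
print(f"conservative predictions: {conservative:.0%}")
```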
A Model of Effective Teaching in Arts, Humanities, and Social Sciences
ERIC Educational Resources Information Center
Tahir, Khazima; Ikram, Hamid; Economos, Jennifer; Morote, Elsa-Sophia; Inserra, Albert
2017-01-01
The purpose of this study was to examine how graduate students with undergraduate majors in arts, humanities, and social sciences perceived individualized consideration, Student-Professor Engagement in Learning (SPEL), intellectual stimulation, and student deep learning, and how these variables predict effective teaching. A sample of 251 graduate…
Methodological Considerations in the Study of Earthworms in Forest Ecosystems
Dylan Rhea-Fournier; Grizelle Gonzalez
2017-01-01
Decades of studies have shown that soil macrofauna, especially earthworms, play dominant engineering roles in soils, affecting physical, chemical, and biological components of ecosystems. Quantifying these effects would allow crucial improvement in biogeochemical budgets and modeling, predicting response of land use and disturbance, and could be applied to...
Intrinsic to the myriad of nano-enabled products are atomic-size multifunctional engineered nanomaterials, which upon release contaminate the environments, raising considerable health and safety concerns. Despite global research efforts, mechanism underlying nanotoxicity has rema...
A-Priori Tuning of Modified Magnussen Combustion Model
NASA Technical Reports Server (NTRS)
Norris, A. T.
2016-01-01
In the application of CFD to turbulent reacting flows, one of the main limitations to predictive accuracy is the chemistry model. Using a full or skeletal kinetics model may provide good predictive ability, however, at considerable computational cost. Adding the ability to account for the interaction between turbulence and chemistry improves the overall fidelity of a simulation but adds to this cost. An alternative is the use of simple models, such as the Magnussen model, which has negligible computational overhead, but lacks general predictive ability except for cases that can be tuned to the flow being solved. In this paper, a technique will be described that allows the tuning of the Magnussen model for an arbitrary fuel and flow geometry without the need to have experimental data for that particular case. The tuning is based on comparing the results of the Magnussen model and full finite-rate chemistry when applied to perfectly and partially stirred reactor simulations. In addition, a modification to the Magnussen model is proposed that allows the upper kinetic limit for the reaction rate to be set, giving better physical agreement with full kinetic mechanisms. This procedure allows a simple reacting model to be used in a predictive manner, and affords significant savings in computational costs for simulations.
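For reference, the standard Magnussen eddy-dissipation rate, with the paper's proposed kinetic cap added as an optional argument; tuning the model amounts to choosing the constants A and B (the values below are the textbook defaults, used illustratively):

```python
# R_fu = A * rho * (eps/k) * min(Y_fu, Y_ox/s, B*Y_pr/(1+s))
def magnussen_rate(rho, k, eps, y_fu, y_ox, y_pr, s, A=4.0, B=0.5,
                   kinetic_limit=None):
    """Fuel consumption rate (kg/m^3/s); s is the stoichiometric oxidizer/fuel ratio."""
    rate = A * rho * (eps / k) * min(y_fu, y_ox / s, B * y_pr / (1.0 + s))
    if kinetic_limit is not None:          # upper kinetic limit, as proposed above
        rate = min(rate, kinetic_limit)
    return rate

r = magnussen_rate(rho=1.0, k=10.0, eps=500.0, y_fu=0.05, y_ox=0.2,
                   y_pr=0.1, s=3.5)
print(f"eddy-dissipation fuel consumption rate: {r:.2f} kg/m^3/s")
```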
Stratigraphy and structure of coalbed methane reservoirs in the United States: an overview
Pashin, J.C.
1998-01-01
Stratigraphy and geologic structure determine the shape, continuity and permeability of coal and are therefore critical considerations for designing exploration and production strategies for coalbed methane. Coal in the United States is dominantly of Pennsylvanian, Cretaceous and Tertiary age, and to date, more than 90% of the coalbed methane produced is from Pennsylvanian and Cretaceous strata of the Black Warrior and San Juan Basins. Investigations of these basins establish that sequence stratigraphy is a promising approach for regional characterization of coalbed methane reservoirs. Local stratigraphic variation within these strata is the product of sedimentologic and tectonic processes and is a consideration for selecting completion zones. Coalbed methane production in the United States is mainly from foreland and intermontane basins containing diverse compressional and extensional structures. Balanced structural models can be used to construct and validate cross sections as well as to quantify layer-parallel strain and predict the distribution of fractures. Folds and faults influence gas and water production in diverse ways. However, interwell heterogeneity related to fractures and shear structures makes the performance of individual wells difficult to predict.
Development and evaluation of height diameter at breast models for native Chinese Metasequoia.
Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that allometric equations that use dbh as the independent variable better reflect the variation of tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, taking tree age into consideration when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh, and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
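A sketch of fitting one widely used nonlinear height-dbh form (Chapman-Richards) with scipy; the study's 53 candidate models are not reproduced here, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def chapman_richards(dbh, a, b, c):
    """Height (m) as a function of dbh (cm); 1.3 m is breast height."""
    return 1.3 + a * (1.0 - np.exp(-b * dbh)) ** c

rng = np.random.default_rng(7)
dbh = rng.uniform(5, 120, 400)
height = chapman_richards(dbh, 35.0, 0.03, 1.2) + rng.normal(0, 1.5, 400)

popt, _ = curve_fit(chapman_richards, dbh, height, p0=[30, 0.05, 1])
resid = height - chapman_richards(dbh, *popt)
print(f"a={popt[0]:.1f}, b={popt[1]:.3f}, c={popt[2]:.2f}, "
      f"RMSE={np.sqrt(np.mean(resid**2)):.2f} m")
```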
Development and evaluation of height diameter at breast models for native Chinese Metasequoia
Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li
2017-01-01
Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that allometric equations that use dbh as the independent variable better reflect the variation of tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, taking tree age into consideration when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh, and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia. PMID:28817600
Small-Caliber Projectile Target Impact Angle Determined From Close Proximity Radiographs
2006-10-01
discrete motion data that can be numerically modeled using linear aerodynamic theory or 6-degrees-of-freedom equations of motion. The values of Fφ...Prediction Excel® Spreadsheet shown in figure 9. The Gamma at Impact Spreadsheet uses the linear aerodynamics model, equations 5 and 6, to calculate αT...trajectory angle error via consideration of the RMS fit errors of the actual firings. However, the linear aerodynamics model does not include this effect
Fluid-structure interaction in abdominal aortic aneurysms: Structural and geometrical considerations
NASA Astrophysics Data System (ADS)
Mesri, Yaser; Niazmand, Hamid; Deyranlou, Amin; Sadeghi, Mahmood Reza
2015-08-01
Rupture of the abdominal aortic aneurysm (AAA) is the result of the relatively complex interaction of blood hemodynamics and the material behavior of arterial walls. In the present study, the cumulative effects of physiological parameters such as directional growth, arterial wall properties (isotropy and anisotropy), iliac bifurcation and arterial wall thickness on the prediction of wall stress in fully coupled fluid-structure interaction (FSI) analysis of five idealized AAA models have been investigated. In particular, the numerical model considers the heterogeneity of the arterial wall and the iliac bifurcation, which allows the study of the geometric asymmetry due to the growth of the aneurysm in different directions. Results demonstrate that the pulsatile nature of blood gives rise to a time-dependent recirculation zone inside the aneurysm, which directly affects the stress distribution in the aneurysmal wall. Therefore, deviation of the aneurysm from the arterial axis, especially in the lateral direction, increases the wall stress in a relatively nonlinear fashion. Among the models analyzed in this investigation, the anisotropic material model that considers wall thickness variations greatly affects the wall stress values, while the stress distributions are less affected compared to the uniform wall thickness models. In this regard, it is confirmed that wall stress predictions are more influenced by the appropriate structural model than by geometrical considerations such as the level of asymmetry and its curvature, growth direction and its extent.
Kamthania, Mohit; Sharma, D K
2015-12-01
Identification of Nipah virus (NiV) T-cell-specific antigens is urgently needed for appropriate diagnostics and vaccination. In the present study, prediction and modeling of T-cell epitopes of the Nipah virus antigenic proteins nucleocapsid, phosphoprotein, matrix, fusion, glycoprotein, L protein, W protein, V protein and C protein were carried out, followed by binding simulation studies of the predicted highest-scoring binders with their corresponding MHC class I alleles. The immunoinformatic tool ProPred1 was used to predict promiscuous MHC class I epitopes of the viral antigenic proteins. Molecular modeling of the epitopes was done with the PEPstr server, and allele structures were predicted with MODELLER 9.10. Molecular dynamics (MD) simulation studies were performed through the NAMD graphical user interface embedded in Visual Molecular Dynamics. Epitopes VPATNSPEL, NPTAVPFTL and LLFVFGPNL of the nucleocapsid, V protein and fusion protein have considerable binding energy and score with the HLA-B7, HLA-B*2705 and HLA-A2 MHC class I alleles, respectively. These three predicted peptides have high potential to induce T-cell-mediated immune responses and are expected to be useful in designing epitope-based vaccines against Nipah virus after further testing by wet laboratory studies.
Reef-coral refugia in a rapidly changing ocean.
Cacciapaglia, Chris; van Woesik, Robert
2015-06-01
This study sought to identify climate-change thermal-stress refugia for reef corals in the Indian and Pacific Oceans. A species distribution modeling approach was used to identify refugia for 12 coral species that differed considerably in their local response to thermal stress. We hypothesized that the local response of coral species to thermal stress might be similarly reflected as a regional response to climate change. We assessed the contemporary geographic range of each species and determined their temperature and irradiance preferences using a k-fold algorithm to randomly select training and evaluation sites. That information was applied to downscaled outputs of global climate models to predict where each species is likely to exist by the year 2100. Our model was run with and without a 1°C capacity to adapt to the rising ocean temperature. The results show a positive exponential relationship between the current area of habitat that coral species occupy and the predicted area of habitat that they will occupy by 2100. There was considerable decoupling between scales of response, however, and with further ocean warming some 'winners' at local scales will likely become 'losers' at regional scales. We predicted that nine of the 12 species examined will lose 24-50% of their current habitat. Most reductions are predicted to occur between the latitudes 5-15°, in both hemispheres. Yet when we modeled a 1°C capacity to adapt, two ubiquitous species, Acropora hyacinthus and Acropora digitifera, were predicted to retain much of their current habitat. By contrast, the thermally tolerant Porites lobata is expected to increase its current distribution by 14%, particularly southward along the east and west coasts of Australia. Five areas were identified as Indian Ocean refugia, and seven areas were identified as Pacific Ocean refugia for reef corals under climate change. All 12 of these reef-coral refugia deserve high-conservation status. © 2015 John Wiley & Sons Ltd.
Uncertainty analysis of a groundwater flow model in east-central Florida
Sepúlveda, Nicasio; Doherty, John E.
2014-01-01
A groundwater flow model for east-central Florida has been developed to help water-resource managers assess the impact of increased groundwater withdrawals from the Floridan aquifer system on heads and spring flows originating from the Upper Floridan aquifer. The model provides a probabilistic description of predictions of interest to water-resource managers, given the uncertainty associated with system heterogeneity, the large number of input parameters, and a nonunique groundwater flow solution. The uncertainty associated with these predictions can then be considered in decisions with which the model has been designed to assist. The “Null Space Monte Carlo” method is a stochastic probabilistic approach used to generate a suite of several hundred parameter field realizations, each maintaining the model in a calibrated state, and each considered to be hydrogeologically plausible. The results presented herein indicate that the model’s capacity to predict changes in heads or spring flows that originate from increased groundwater withdrawals is considerably greater than its capacity to predict the absolute magnitudes of heads or spring flows. Furthermore, the capacity of the model to make predictions that are similar in location and in type to those in the calibration dataset exceeds its capacity to make predictions of different types at different locations. The quantification of these outcomes allows defensible use of the modeling process in support of future water-resources decisions. The model allows the decision-making process to recognize the uncertainties, and the spatial/temporal variability of uncertainties that are associated with predictions of future system behavior in a complex hydrogeological context.
The effects of geometric uncertainties on computational modelling of knee biomechanics
Fisher, John; Wilcox, Ruth
2017-01-01
The geometry of the articular components of the knee is an important factor in predicting joint mechanics in computational models. There are a number of uncertainties in the definition of the geometry of cartilage and meniscus, and evaluating the effects of these uncertainties is fundamental to understanding the level of reliability of the models. In this study, the sensitivity of knee mechanics to geometric uncertainties was investigated by comparing polynomial-based and image-based knee models and varying the size of the meniscus. The results suggested that the geometric uncertainties in cartilage and meniscus resulting from the resolution of MRI and the accuracy of segmentation caused considerable effects on the predicted knee mechanics. Moreover, even if the mathematical geometric descriptors can be very close to the image-based articular surfaces, the detailed contact pressure distribution produced by the mathematical geometric descriptors was not the same as that of the image-based model. However, the trends predicted by the models based on mathematical geometric descriptors were similar to those of the image-based models. PMID:28879008
Xu, Dong; Zhang, Jian; Roy, Ambrish; Zhang, Yang
2011-01-01
I-TASSER is an automated pipeline for protein tertiary structure prediction using multiple threading alignments and iterative structure assembly simulations. In CASP9 experiments, two new algorithms, QUARK and FG-MD, were added to the I-TASSER pipeline for improving the structural modeling accuracy. QUARK is a de novo structure prediction algorithm used for structure modeling of proteins that lack detectable template structures. For distantly homologous targets, QUARK models are found useful as a reference structure for selecting good threading alignments and guiding the I-TASSER structure assembly simulations. FG-MD is an atomic-level structural refinement program that uses structural fragments collected from the PDB structures to guide molecular dynamics simulation and improve the local structure of the predicted model, including hydrogen-bonding networks, torsion angles and steric clashes. Despite considerable progress in both template-based and template-free structure modeling, significant improvements on protein target classification, domain parsing, model selection, and ab initio folding of beta-proteins are still needed to further improve the I-TASSER pipeline. PMID:22069036
Description of a Generalized Analytical Model for the Micro-dosimeter Response
NASA Technical Reports Server (NTRS)
Badavi, Francis F.; Stewart-Sloan, Charlotte R.; Xapsos, Michael A.; Shinn, Judy L.; Wilson, John W.; Hunter, Abigail
2007-01-01
An analytical prediction capability for space radiation in Low Earth Orbit (LEO), correlated with the Space Transportation System (STS) Shuttle Tissue Equivalent Proportional Counter (TEPC) measurements, is presented. The model takes into consideration the energy loss straggling and chord length distribution of the TEPC detector, and is capable of predicting energy deposition fluctuations in a micro-volume by incoming ions through both direct and indirect ionic events. The charged particle transport calculations correlated with STS 56, 51, 110 and 114 flights are accomplished by utilizing the most recent version (2005) of the Langley Research Center (LaRC) deterministic ionized particle transport code High charge (Z) and Energy TRaNsport (HZETRN), which has been extensively validated with laboratory beam measurements and available space flight data. The agreement between the TEPC model prediction (response function) and the TEPC measured differential and integral spectra in the lineal energy (y) domain is promising.
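A small reference computation for the microdosimetric quantity underlying the TEPC response: lineal energy y is the energy imparted per event divided by the mean chord length of the sensitive volume, which for a convex body is 4V/S by Cauchy's formula (2d/3 for a sphere of diameter d). The values below are illustrative, not instrument parameters:

```python
def mean_chord_sphere(diameter_um):
    """Mean chord length (um) of a sphere: 4V/S = 2d/3."""
    return 2.0 * diameter_um / 3.0

def lineal_energy(energy_keV, diameter_um):
    """Lineal energy y in keV/um for one energy-deposition event."""
    return energy_keV / mean_chord_sphere(diameter_um)

# A 2-um tissue-equivalent sphere and a 1 keV deposition event:
y = lineal_energy(energy_keV=1.0, diameter_um=2.0)
print(f"mean chord = {mean_chord_sphere(2.0):.3f} um, y = {y:.3f} keV/um")
```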
Cheung, Y M; Leung, W M; Xu, L
1997-01-01
We propose a prediction model that combines Rival Penalized Competitive Learning (RPCL) with the Combined Linear Predictor (CLP) method, which involves a set of local linear predictors such that a prediction is made by the combination of some activated predictors through a gating network (Xu et al., 1994). Furthermore, we present an improved variant named Adaptive RPCL-CLP that includes an adaptive learning mechanism as well as a data pre- and post-processing scheme. We compare them with some existing models by demonstrating their performance on two real-world financial time series: a Chinese stock price and an exchange-rate series of US Dollar (USD) versus Deutschmark (DEM). Experiments have shown that Adaptive RPCL-CLP not only outperforms the other approaches with the smallest prediction error and training costs, but also brings in considerably higher profits in the trading simulation of the foreign exchange market.
The role of thermal and lubricant boundary layers in the transient thermal analysis of spur gears
NASA Technical Reports Server (NTRS)
El-Bayoumy, L. E.; Akin, L. S.; Townsend, D. P.; Choy, F. C.
1989-01-01
An improved convection heat-transfer model has been developed for the prediction of the transient tooth surface temperature of spur gears. The dissipative quality of the lubricating fluid is shown to be limited to the capacity extent of the thermal boundary layer. This phenomenon can be of significance in the determination of the thermal limit of gears accelerating to the point where gear scoring occurs. Steady-state temperature prediction is improved considerably through the use of a variable integration time step that substantially reduces computer time. Computer-generated plots of temperature contours enable the user to animate the propagation of the thermal wave as the gears come into and out of contact, thus contributing to better understanding of this complex problem. This model has a much better capability at predicting gear-tooth temperatures than previous models.
NASA Astrophysics Data System (ADS)
Cleves, Ann E.; Jain, Ajay N.
2008-03-01
Inductive bias is the set of assumptions that a person or procedure makes in making a prediction based on data. Different methods for ligand-based predictive modeling have different inductive biases, with a particularly sharp contrast between 2D and 3D similarity methods. A unique aspect of ligand design is that the data that exist to test methodology have been largely man-made, and that this process of design involves prediction. By analyzing the molecular similarities of known drugs, we show that the inductive bias of the historic drug discovery process has a very strong 2D bias. In studying the performance of ligand-based modeling methods, it is critical to account for this issue in dataset preparation, use of computational controls, and in the interpretation of results. We propose specific strategies to explicitly address the problems posed by inductive bias considerations.
Ability of crime, demographic and business data to forecast areas of increased violence.
Bowen, Daniel A; Mercer Kollar, Laura M; Wu, Daniel T; Fraser, David A; Flood, Charles E; Moore, Jasmine C; Mays, Elizabeth W; Sumner, Steven A
2018-05-24
Identifying geographic areas and time periods of increased violence is of considerable importance in prevention planning. This study compared the performance of multiple data sources to prospectively forecast areas of increased interpersonal violence. We used 2011-2014 data from a large metropolitan county on interpersonal violence (homicide, assault, rape and robbery) and forecasted violence at the level of census block-groups and over a one-month moving time window. Inputs to a Random Forest model included historical crime records from the police department, demographic data from the US Census Bureau, and administrative data on licensed businesses. Among 279 block groups, a model utilizing all data sources was found to prospectively improve the identification of the top 5% most violent block-group months (positive predictive value = 52.1%; negative predictive value = 97.5%; sensitivity = 43.4%; specificity = 98.2%). Predictive modelling with simple inputs can help communities more efficiently focus violence prevention resources geographically.
Predicting the natural flow regime: Models for assessing hydrological alteration in streams
Carlisle, D.M.; Falcone, J.; Wolock, D.M.; Meador, M.R.; Norris, R.H.
2009-01-01
Understanding the extent to which natural streamflow characteristics have been altered is an important consideration for ecological assessments of streams. Assessing hydrologic condition requires that we quantify the attributes of the flow regime that would be expected in the absence of anthropogenic modifications. The objective of this study was to evaluate whether selected streamflow characteristics could be predicted at regional and national scales using geospatial data. Long-term, gaged river basins distributed throughout the contiguous US that had streamflow characteristics representing least disturbed or near pristine conditions were identified. Thirteen metrics of the magnitude, frequency, duration, timing and rate of change of streamflow were calculated using a 20-50 year period of record for each site. We used random forests (RF), a robust statistical modelling approach, to develop models that predicted the value for each streamflow metric using natural watershed characteristics. We compared the performance (i.e. bias and precision) of national- and regional-scale predictive models to that of models based on landscape classifications, including major river basins, ecoregions and hydrologic landscape regions (HLR). For all hydrologic metrics, landscape stratification models produced estimates that were less biased and more precise than a null model that accounted for no natural variability. Predictive models at the national and regional scale performed equally well, and substantially improved predictions of all hydrologic metrics relative to landscape stratification models. Prediction error rates ranged from 15 to 40%, but were ≤25% for most metrics. We selected three gaged, non-reference sites to illustrate how predictive models could be used to assess hydrologic condition. These examples show how the models accurately estimate predisturbance conditions and are sensitive to changes in streamflow variability associated with long-term land-use change. We also demonstrate how the models can be applied to predict expected natural flow characteristics at ungaged sites. © 2009 John Wiley & Sons, Ltd.
Model distribution of Silver Chub (Macrhybopsis storeriana) in western Lake Erie
McKenna, James E.; Castiglione, Chris
2014-01-01
Silver Chub (Macrhybopsis storeriana) was once a common forage fish in Lake Erie but has declined greatly since the 1950s. Identification of optimal and marginal habitats would help conserve and manage this species. We developed neural networks to use broad-scale habitat variables to predict abundance classes of Silver Chub in western Lake Erie, where its largest remaining population exists. Model performance was good, particularly for predicting locations of habitat with the potential to support the highest and lowest abundances of this species. Highest abundances are expected in waters >5 m deep; water depth and distance to coastal habitats were important model features. These models provide initial tools to help conserve this species, but their resolution can be improved with additional data and consideration of other ecological factors.
Modelling personality, plasticity and predictability in shelter dogs
2017-01-01
Behavioural assessments of shelter dogs (Canis lupus familiaris) typically comprise standardized test batteries conducted at one time point, but test batteries have shown inconsistent predictive validity. Longitudinal behavioural assessments offer an alternative. We modelled longitudinal observational data on shelter dog behaviour using the framework of behavioural reaction norms, partitioning variance into personality (i.e. inter-individual differences in average behaviour), plasticity (i.e. inter-individual differences in behavioural change over time) and predictability (i.e. individual differences in residual intra-individual variation). We analysed data on interactions of 3263 dogs (n = 19 281) with unfamiliar people during their first month after arrival at the shelter. Accounting for personality, plasticity (linear and quadratic trends) and predictability improved the predictive accuracy of the analyses compared to models quantifying personality and/or plasticity only. While dogs were, on average, highly sociable with unfamiliar people and sociability increased over days since arrival, group averages were unrepresentative of all dogs and predictions made at the individual level entailed considerable uncertainty. Effects of demographic variables (e.g. age) on personality, plasticity and predictability were observed. Behavioural repeatability was higher one week after arrival compared to arrival day. Our results highlight the value of longitudinal assessments on shelter dogs and identify measures that could improve the predictive validity of behavioural assessments in shelters. PMID:28989764
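A reaction-norm model of this kind can be sketched with a mixed-effects regression: random intercepts stand in for personality and random slopes over days since arrival for plasticity. Predictability (individual residual variances) requires a double-hierarchical model (e.g. in Stan/brms), which this sketch omits; the simulated data stand in for real observations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_dogs, n_obs = 120, 10
dog = np.repeat(np.arange(n_dogs), n_obs)
days = np.tile(np.arange(n_obs, dtype=float), n_dogs)
intercepts = rng.normal(5.0, 1.0, n_dogs)   # personality: individual averages
slopes = rng.normal(0.15, 0.05, n_dogs)     # plasticity: individual trends
sociability = intercepts[dog] + slopes[dog] * days + rng.normal(0, 0.5, dog.size)
dogs = pd.DataFrame({"dog_id": dog, "days": days, "sociability": sociability})

# Population-level linear + quadratic trend; random intercept and slope per dog.
m = smf.mixedlm("sociability ~ days + I(days**2)", data=dogs,
                groups=dogs["dog_id"], re_formula="~days").fit()
print(m.summary())
```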
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirazi, M.A.; Davis, L.R.
To obtain improved prediction of heated plume characteristics from a surface jet, an integral analysis computer model was modified and a comprehensive set of field and laboratory data available from the literature was gathered, analyzed, and correlated for estimating the magnitude of certain coefficients that are normally introduced in these analyses to achieve closure. The parameters so estimated include the coefficients for entrainment, turbulent exchange, drag, and shear. Since considerable scatter appeared in the data, even after appropriate subgrouping to narrow the influence of various flow conditions, only statistical procedures could be applied to find the best fit. This and other analyses of its type have been widely used in industry and government for the prediction of thermal plumes from steam power plants. Although the present model has many shortcomings, a recent independent and exhaustive assessment of such predictions revealed that, in comparison with other analyses of its type, the present analysis predicts field situations more successfully.
Continental Asymmetry in Climate-Induced Tropical Drought: Driving Mechanisms and Ecosystem Response
NASA Astrophysics Data System (ADS)
Randerson, J. T.; Swann, A. L. S.; Koven, C. D.; Hoffman, F. M.; Chen, Y.
2015-12-01
Current theory does not adequately explain diverging patterns of future drought stress predicted by Earth system models (ESMs) across tropical South America, Africa, and equatorial Asia. By 2100, for the Representative Concentration Pathway 8.5 (RCP8.5), many models predict significant decreases in precipitation across northeastern South America and Central America. In contrast, most models predict increasing levels of precipitation across tropical Africa and equatorial Asia. Using the Community Earth System Model v1.0 with RCP8.5 simulations to 2300, we found that this longitudinal precipitation asymmetry intensified over time and as a consequence, terrestrial carbon losses from the neotropics were considerably higher than those in Africa and Asia. Carbon losses in some areas of the Amazon in a fully coupled simulation exceeded 15 kg C per m2 by 2300, relative to estimates from a biogeochemically-forced simulation in which atmospheric carbon dioxide and other greenhouse gases did not influence the atmospheric radiation budget. The amount of neotropical drying varied considerably among CMIP5 ESMs, and we used several types of analysis to identify driving mechanisms and to reduce uncertainties associated with these projections. CMIP5 models in general underestimated North Atlantic sea surface temperatures and the strength of the Atlantic meridional overturning circulation (AMOC). Models that more accurately simulated North Atlantic SSTs during the historical era had smaller mean precipitation biases and predicted greater neotropical forest drying than other models. This suggests that future drought stress in northern South America and Central America may be larger than estimates derived from the multi-model mean. Analysis of idealized radiatively coupled, biogeochemically coupled and fully coupled CMIP5 model simulations indicated that the direct effects of atmospheric carbon dioxide on plant physiology were also an important factor driving asymmetric precipitation change across the tropics, with a pattern similar to the changes induced solely by greenhouse gas effects on atmospheric radiation. We conclude by discussing the implications of the continental drought asymmetry for the vulnerability of tropical forests to fire, agriculture, and tree mortality.
Influence of the pressure dependent coefficient of friction on deep drawing springback predictions
NASA Astrophysics Data System (ADS)
Gil, Imanol; Galdos, Lander; Mendiguren, Joseba; Mugarra, Endika; Sáenz de Argandoña, Eneko
2016-10-01
This research studies the effect of considering an advanced variable friction coefficient on the springback prediction of stamping processes. Traditional constant-coefficient-of-friction assumptions are being replaced by more advanced friction coefficient definitions. The aim of this work is to show the influence of defining a pressure-dependent friction coefficient on numerical springback predictions for a DX54D mild steel, a HSLA380 and a DP780 high strength steel. The pressure-dependent friction model of each material was fitted to experimental data obtained from Strip Drawing tests. Then, these friction models were implemented in a numerical simulation of a drawing process for an industrial automotive part. The results showed important differences between defining a pressure-dependent friction coefficient and a constant friction coefficient.
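One way such a pressure-dependent friction law can be fitted is sketched below. The power-law form mu(p) = mu0*(p/p0)**(e-1) is a common choice in the sheet-forming literature, but both this functional form and the strip-drawing data points are assumptions for illustration, not the paper's actual model or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def mu(p, mu0, e):
    """Pressure-dependent Coulomb friction coefficient (assumed power law)."""
    p0 = 1.0  # reference contact pressure in MPa (assumed)
    return mu0 * (p / p0) ** (e - 1.0)

p_meas = np.array([2.0, 5.0, 10.0, 20.0, 40.0])      # contact pressure, MPa (illustrative)
mu_meas = np.array([0.16, 0.14, 0.13, 0.115, 0.10])  # measured friction coefficient (illustrative)

(mu0, e), _ = curve_fit(mu, p_meas, mu_meas, p0=(0.18, 0.9))
# The fitted law would then be passed to the FE drawing simulation.
print(f"mu0={mu0:.3f}, exponent e={e:.3f}, mu(30 MPa)={mu(30.0, mu0, e):.3f}")
```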
Assessing waveform predictions of recent three-dimensional velocity models of the Tibetan Plateau
NASA Astrophysics Data System (ADS)
Bao, Xueyang; Shen, Yang
2016-04-01
Accurate velocity models are essential for both the determination of earthquake locations and source moments and the interpretation of Earth structures. With the increasing number of three-dimensional velocity models, it has become necessary to assess the models for accuracy in predicting seismic observations. Six models of the crustal and uppermost mantle structures in Tibet and surrounding regions are investigated in this study. Regional Rayleigh and Pn (or Pnl) waveforms from two ground truth events, including one nuclear explosion and one natural earthquake located in the study area, are simulated by using a three-dimensional finite-difference method. Synthetics are compared to observed waveforms in multiple period bands of 20-75 s for Rayleigh waves and 1-20 s for Pn/Pnl waves. The models are evaluated based on the phase delays and cross-correlation coefficients between synthetic and observed waveforms. A model generated from full-wave ambient noise tomography best predicts Rayleigh waves throughout the data set, as well as Pn/Pnl waves traveling from the Tarim Basin to the stations located in central Tibet. In general, the models constructed from P wave tomography are not well suited to predict Rayleigh waves, and vice versa. Possible causes of the differences between observed and synthetic waveforms, and frequency-dependent variations of the "best matching" models with the smallest prediction errors are discussed. This study suggests that simultaneous prediction for body and surface waves requires an integrated velocity model constructed with multiple seismic waveforms and consideration of other important properties, such as anisotropy.
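The two evaluation metrics used above, the cross-correlation coefficient and the phase (time) delay between synthetic and observed waveforms, can be illustrated with a short numpy sketch; the synthetic traces below stand in for real data.

```python
import numpy as np

dt = 0.1                                  # sample interval, s (assumed)
t = np.arange(0, 60, dt)
obs = np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 30)   # stand-in observed trace
syn = np.roll(obs, 12) + 0.05 * np.random.default_rng(0).standard_normal(t.size)

# Normalize, then scan all lags for the correlation peak.
obs_n = (obs - obs.mean()) / obs.std()
syn_n = (syn - syn.mean()) / syn.std()
cc = np.correlate(obs_n, syn_n, mode="full") / obs_n.size
lag = (np.argmax(cc) - (obs_n.size - 1)) * dt   # lag (s) at which the correlation peaks

print(f"max cross-correlation = {cc.max():.3f}, phase delay = {lag:.2f} s")
```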
David, Allan E.; Cole, Adam J.; Chertok, Beata; Park, Yoon Shin; Yang, Victor C.
2011-01-01
Magnetic nanoparticles (MNP) continue to draw considerable attention as potential diagnostic and therapeutic tools in the fight against cancer. Although many interacting forces present themselves during magnetic targeting of MNP to tumors, most theoretical considerations of this process ignore all except for the magnetic and drag forces. Our validation of a simple in vitro model against in vivo data, and subsequent reproduction of the in vitro results with a theoretical model indicated that these two forces do indeed dominate the magnetic capture of MNP. However, because nanoparticles can be subject to aggregation, and large MNP experience an increased magnetic force, the effects of surface forces on MNP stability cannot be ignored. We accounted for the aggregating surface forces simply by measuring the size of MNP retained from flow by magnetic fields, and utilized this size in the mathematical model. This presumably accounted for all particle-particle interactions, including those between magnetic dipoles. Thus, our “corrected” mathematical model provided a reasonable estimate of not only fractional MNP retention, but also predicted the regions of accumulation in a simulated capillary. Furthermore, the model was also utilized to calculate the effects of MNP size and spatial location, relative to the magnet, on targeting of MNPs to tumors. This combination of an in vitro model with a theoretical model could potentially assist with parametric evaluations of magnetic targeting, and enable rapid enhancement and optimization of magnetic targeting methodologies. PMID:21295085
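The dominant force balance described above, magnetic force against Stokes drag, can be sketched numerically. All parameter values below are illustrative assumptions, not the study's measurements, and the linear-susceptibility force expression is itself a simplification.

```python
import numpy as np

mu0 = 4 * np.pi * 1e-7       # vacuum permeability, T*m/A
R = 250e-9                   # effective (aggregate) particle radius, m (assumed)
V = 4 / 3 * np.pi * R**3     # particle volume, m^3
chi = 1.0                    # effective volume susceptibility (assumed)
B, dBdx = 0.4, 40.0          # field (T) and gradient (T/m) near the magnet (assumed)
eta = 3e-3                   # blood viscosity, Pa*s (assumed)
u_flow = 1e-3                # capillary flow speed, m/s (assumed)

F_mag = V * chi * B * dBdx / mu0        # magnetophoretic force, N (linear regime)
u_mag = F_mag / (6 * np.pi * eta * R)   # magnetically induced drift speed, m/s

print(f"F_mag = {F_mag:.2e} N, u_mag/u_flow = {u_mag / u_flow:.3f}")
# Retention becomes plausible when u_mag approaches u_flow; note that u_mag
# scales with R**2, which is why aggregate size dominates capture.
```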
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
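A minimal sketch of the Lagrangian stress-accumulation post-processing discussed here, assuming a simple linear accumulation rule and synthetic stress histories; real implementations extract stress along CFD pathlines rather than generating them.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-3                                      # time step along pathlines, s (assumed)
stress = rng.gamma(2.0, 5.0, size=(200, 500))  # Pa; 200 pathlines x 500 steps (stand-in)

# Single-passage accumulation: integrate stress*dt along each pathline.
sa_single = (stress * dt).sum(axis=1)
# Naive repeated-passage option: loop the same trajectory n_pass times.
n_pass = 5
sa_repeat = n_pass * sa_single

print(f"mean SA (single) = {sa_single.mean():.3f} Pa*s, "
      f"95th pct (x{n_pass} passages) = {np.quantile(sa_repeat, 0.95):.3f} Pa*s")
```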
A link prediction method for heterogeneous networks based on BP neural network
NASA Astrophysics Data System (ADS)
Li, Ji-chao; Zhao, Dan-ling; Ge, Bing-Feng; Yang, Ke-Wei; Chen, Ying-Wu
2018-04-01
Most real-world systems, composed of different types of objects connected via many interconnections, can be abstracted as various complex heterogeneous networks. Link prediction for heterogeneous networks is of great significance for mining missing links and reconfiguring networks according to observed information, with considerable applications in, for example, friend and location recommendations and disease-gene candidate detection. In this paper, we put forward a novel integrated framework, called MPBP (Meta-Path feature-based BP neural network model), to predict multiple types of links for heterogeneous networks. More specifically, the concept of the meta-path is introduced, followed by the extraction of meta-path features for heterogeneous networks. Next, based on the extracted meta-path features, a supervised link prediction model is built with a three-layer BP neural network. Then, the solution algorithm of the proposed link prediction model is put forward to obtain predicted results by iteratively training the network. Last, numerical experiments on example datasets of a gene-disease network and a combat network are conducted to verify the effectiveness and feasibility of the proposed MPBP. The results show that MPBP performs very well and is superior to the baseline methods.
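The supervised step of an MPBP-style pipeline can be sketched as follows, with random stand-in data in place of real meta-path feature counts; one hidden layer gives the three-layer (input/hidden/output) BP network described above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((2000, 8))    # 8 meta-path counts per candidate node pair (stand-in)
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.standard_normal(2000) > 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), activation="logistic",
                    solver="adam", max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)          # iterative BP-style training on meta-path features
print(f"held-out link-prediction accuracy: {net.score(X_te, y_te):.3f}")
```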
Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.
2009-01-01
This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows) generally yield little improvement over the weighted mean ensemble. However a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.
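The two best-performing combination rules, the trimmed mean and the calibration-weighted mean, are simple to express; the sketch below uses stand-in flows and skill scores in place of the ten models' actual predictions.

```python
import numpy as np

rng = np.random.default_rng(2)
q = rng.lognormal(2.0, 0.4, size=(10, 365))   # daily flows from 10 models (stand-in)
skill = rng.uniform(0.3, 0.9, size=10)        # calibration skill per model (stand-in)

# Trimmed mean: keep only the central four of the ten predictions each day.
sorted_q = np.sort(q, axis=0)
trimmed = sorted_q[3:7].mean(axis=0)

# Weighted mean: weights proportional to calibration skill favour better models.
w = skill / skill.sum()
weighted = (w[:, None] * q).sum(axis=0)

print(f"day 1: trimmed mean = {trimmed[0]:.2f}, weighted mean = {weighted[0]:.2f}")
```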
Towards an Effective Health Interventions Design: An Extension of the Health Belief Model
Orji, Rita; Vassileva, Julita; Mandryk, Regan
2012-01-01
Introduction The recent years have witnessed a continuous increase in lifestyle related health challenges around the world. As a result, researchers and health practitioners have focused on promoting healthy behavior using various behavior change interventions. The designs of most of these interventions are informed by health behavior models and theories adapted from various disciplines. Several health behavior theories have been used to inform health intervention designs, such as the Theory of Planned Behavior, the Transtheoretical Model, and the Health Belief Model (HBM). Among these, the Health Belief Model (HBM), developed in the 1950s to investigate why people fail to undertake preventive health measures, remains one of the most widely employed theories of health behavior. However, the effectiveness of this model is limited. The first limitation is the low predictive capacity (R2 < 0.21 on average) of the existing HBM variables coupled with the small effect size of individual variables. The second is the lack of clear rules for combining the individual variables and specifying the relationships between them. In this paper, we propose a solution that aims at addressing these limitations as follows: (1) we extended the Health Belief Model by introducing four new variables: Self-identity, Perceived Importance, Consideration of Future Consequences, and Concern for Appearance as possible determinants of healthy behavior. (2) We exhaustively explored the relationships/interactions between the HBM variables and their effect size. (3) We tested the validity of both our proposed extended model and the original HBM on healthy eating behavior. Finally, we compared the predictive capacity of the original HBM model and our extended model. Methods: To achieve the objective of this paper, we conducted a quantitative study of 576 participants' eating behavior. Data for this study were collected over a period of one year (from August 2011 to August 2012). The questionnaire consisted of validated scales assessing the HBM determinants – perceived benefit, barrier, susceptibility, severity, cue to action, and self-efficacy – using a 7-point Likert scale. We also assessed other health determinants such as consideration of future consequences, self-identity, concern for appearance and perceived importance. To analyse our data, we employed factor analysis and Partial Least Squares Structural Equation Modelling (PLS-SEM) to exhaustively explore the interaction/relationship between the determinants and healthy eating behavior. We tested for the validity of both our proposed extended model and the original HBM on healthy eating behavior. Finally, we compared the predictive capacity of the original HBM model and our extended model and investigated possible mediating effects. Results: The results show that the three newly added determinants are better predictors of healthy behavior. Our extended HBM led to an approximately 78% increase (from 40% to 71%) in predictive capacity compared to the old model. This shows the suitability of our extended HBM for use in predicting healthy behavior and in informing health intervention design. The results from examining possible relationships between the determinants in our model led to an interesting discovery of some mediating relationships between the HBM determinants, thereby shedding light on some possible combinations of determinants that could be employed by intervention designers to increase the effectiveness of their design.
Conclusion: Consideration of future consequences, self-identity, concern for appearance, perceived importance, self-efficacy, and perceived susceptibility are significant determinants of healthy eating behavior that can be manipulated by healthy eating intervention design. Most importantly, the results from our model established the existence of some mediating relationships among the determinants. The knowledge of both the direct and indirect relationships sheds some light on the possible combination rules. PMID:23569653
Abdel-Dayem, M S; Annajar, B B; Hanafi, H A; Obenauer, P J
2012-05-01
The increased cases of cutaneous leishmaniasis vectored by Phlebotomus papatasi (Scopoli) in Libya have driven considerable effort to develop a predictive model for the potential geographical distribution of this disease. We collected adult P. papatasi from 17 sites in the Musrata and Yefern regions of Libya using four different attraction traps. Our trap results and literature records describing the distribution of P. papatasi were incorporated into a MaxEnt algorithm prediction model that used 22 environmental variables. The model showed a high performance (AUC = 0.992 and 0.990 for training and test data, respectively). High suitability for P. papatasi was predicted to be largely confined to the coast at altitudes <600 m. Regions south of 30° N latitude were calculated as unsuitable for this species. Jackknife analysis identified precipitation as having the most significant predictive power, while temperature and elevation variables were less influential. The National Leishmaniasis Control Program in Libya may find this information useful in their efforts to control zoonotic cutaneous leishmaniasis. Existing records are strongly biased toward a few geographical regions, and therefore, further sand fly collections are warranted that should include documentation of such factors as soil texture and humidity, land cover, and normalized difference vegetation index (NDVI) data to increase the model's predictive power.
Statistical and dynamical forecast of regional precipitation after mature phase of ENSO
NASA Astrophysics Data System (ADS)
Sohn, S.; Min, Y.; Lee, J.; Tam, C.; Ahn, J.
2010-12-01
While the seasonal predictability of general circulation models (GCMs) has improved, current model atmospheres in the mid-latitudes do not respond correctly to external forcing such as tropical sea surface temperature (SST), particularly over the East Asian and western North Pacific summer monsoon regions. In addition, the prediction time-scale is considerably limited, and model forecast skill is still very poor beyond two weeks. Although recent studies indicate that coupled-model-based multi-model ensemble (MME) forecasts perform better, long-lead forecasts exceeding 9 months still show a dramatic decrease in seasonal predictability. This study aims at diagnosing dynamical MME forecasts composed of state-of-the-art 1-tier models and comparing them with statistical model forecasts, focusing on East Asian summer precipitation predictions after the mature phase of ENSO. The lagged impact of El Nino, as a major climate contributor, on the summer monsoon in model environments is also evaluated in terms of conditional probabilities. To evaluate probability forecast skill, the reliability (attributes) diagram and the relative operating characteristics are used, following the recommendations of the World Meteorological Organization (WMO) Standardized Verification System for Long-Range Forecasts. The results should shed light on the prediction skill of both the dynamical and statistical models in forecasting East Asian summer monsoon rainfall at long lead times.
Methods to improve traffic flow and noise exposure estimation on minor roads.
Morley, David W; Gulliver, John
2016-09-01
Address-level estimates of exposure to road traffic noise for epidemiological studies depend on obtaining data on annual average daily traffic (AADT) flows that are both accurate and have good geographical coverage. National agencies often have reliable traffic count data for major roads, but for residential areas served by minor roads, especially at the national scale, such information is often unavailable or incomplete. Here we present a method to predict AADT at the national scale for minor roads, using a routing algorithm within a geographical information system (GIS) to rank roads by importance based on simulated journeys through the road network. From a training set of known minor road AADT, routing importance is used to predict AADT on all UK minor roads in a regression model, along with the road class, urban or rural location and AADT on the nearest major road. Validation with both independent traffic counts and noise measurements shows that this method gives a considerable improvement in noise prediction capability when compared to models that do not give adequate consideration to minor road variability (Spearman's rho increases from 0.46 to 0.72). This has significance for epidemiological cohort studies attempting to link noise exposure to adverse health outcomes. Copyright © 2016 Elsevier Ltd. All rights reserved.
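A minimal sketch of such a regression, with synthetic data standing in for the training set of counted minor roads; the column names and data-generating step are hypothetical, and the paper's exact model specification may differ.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
roads = pd.DataFrame({
    "routing_importance": rng.exponential(1.0, n),  # simulated-journey rank score
    "road_class": rng.choice(["B", "C", "unclassified"], n),
    "urban": rng.integers(0, 2, n),
    "nearest_major_aadt": rng.lognormal(9.0, 0.5, n),
})
# Stand-in training response: log AADT stabilises the variance.
log_aadt = (2.2 + 0.5 * roads["routing_importance"] + 0.3 * roads["urban"]
            + 0.1 * np.log10(roads["nearest_major_aadt"])
            + rng.normal(0, 0.2, n))

X = pd.get_dummies(roads, columns=["road_class"])   # one-hot encode road class
model = LinearRegression().fit(X, log_aadt)
pred_aadt = 10 ** model.predict(X)                  # back-transform to vehicles/day
print(f"example prediction: {pred_aadt[0]:.0f} vehicles/day")
```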
Coward, L. Andrew; Gedeon, Tamas D.
2016-01-01
Theoretical arguments demonstrate that practical considerations, including the needs to limit physiological resources and to learn without interference with prior learning, severely constrain the anatomical architecture of the brain. These arguments identify the hippocampal system as the change manager for the cortex, with the role of selecting the most appropriate locations for cortical receptive field changes at each point in time and driving those changes. This role results in the hippocampal system recording the identities of groups of cortical receptive fields that changed at the same time. These types of records can also be used to reactivate the receptive fields active during individual unique past events, providing mechanisms for episodic memory retrieval. Our theoretical arguments identify the perirhinal cortex as one important focal point both for driving changes and for recording and retrieving episodic memories. The retrieval of episodic memories must not drive unnecessary receptive field changes, and this consideration places strong constraints on neuron properties and connectivity within and between the perirhinal cortex and regular cortex. Hence the model predicts a number of such properties and connectivity. Experimental test of these falsifiable predictions would clarify how change is managed in the cortex and how episodic memories are retrieved. PMID:26819594
An improved procedure for El Nino forecasting: Implications for predictability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, D.; Zebiak, S.E.; Cane, M.A.
A coupled ocean-atmosphere data assimilation procedure yields improved forecasts of El Nino for the 1980s compared with previous forecasting procedures. As in earlier forecasts with the same model, no oceanic data were used, and only wind information was assimilated. The improvement is attributed to the explicit consideration of air-sea interaction in the initialization. These results suggest that El Nino is more predictable than previously estimated, but that predictability may vary on decadal or longer time scales. This procedure also eliminates the well-known spring barrier to El Nino prediction, which implies that it may not be intrinsic to the real climate system. 24 refs., 5 figs., 1 tab.
Ohashi, Hidenori; Tamaki, Takanori; Yamaguchi, Takeo
2011-12-29
Molecular collisions, which are the microscopic origin of molecular diffusive motion, are affected by both the molecular surface area and the distance between molecules. Their product can be regarded as the free space around a penetrant molecule, defined as the "shell-like free volume", and can be taken as a characteristic of molecular collisions. On the basis of this notion, a new diffusion theory has been developed. The model can predict molecular diffusivity in polymeric systems using only well-defined single-component parameters of molecular volume, molecular surface area, free volume, and pre-exponential factors. From the physical description of the model, the body that actually moves, and whose surface collides with neighboring molecules, corresponds to the volume and surface area of the penetrant molecular core. In the present study, a semiempirical quantum chemical calculation was used to calculate both of these parameters. The model and the newly developed parameters offer fairly good predictive ability. © 2011 American Chemical Society
Validating a Model for Welding Induced Residual Stress Using High-Energy X-ray Diffraction
Mach, J. C.; Budrow, C. J.; Pagan, D. C.; ...
2017-03-15
Integrated computational materials engineering (ICME) provides a pathway to advance performance in structures through the use of physically-based models to better understand how manufacturing processes influence product performance. As one particular challenge, consider that residual stresses induced in fabrication are pervasive and directly impact the life of structures. For ICME to be an effective strategy, it is essential that predictive capability be developed in conjunction with critical experiments. In the present paper, simulation results from a multi-physics model for gas metal arc welding are evaluated through x-ray diffraction using synchrotron radiation. A test component was designed with intent to develop significant gradients in residual stress, be representative of real-world engineering application, yet remain tractable for finely spaced strain measurements with positioning equipment available at synchrotron facilities. Finally, the experimental validation lends confidence to model predictions, facilitating the explicit consideration of residual stress distribution in prediction of fatigue life.
NASA Astrophysics Data System (ADS)
Chattopadhyay, Goutami; Chattopadhyay, Surajit; Chakraborthy, Parthasarathi
2012-07-01
The present study deals with daily total ozone concentration time series over four metro cities of India namely Kolkata, Mumbai, Chennai, and New Delhi in the multivariate environment. Using the Kaiser-Meyer-Olkin measure, it is established that the data set under consideration are suitable for principal component analysis. Subsequently, by introducing rotated component matrix for the principal components, the predictors suitable for generating artificial neural network (ANN) for daily total ozone prediction are identified. The multicollinearity is removed in this way. Models of ANN in the form of multilayer perceptron trained through backpropagation learning are generated for all of the study zones, and the model outcomes are assessed statistically. Measuring various statistics like Pearson correlation coefficients, Willmott's indices, percentage errors of prediction, and mean absolute errors, it is observed that for Mumbai and Kolkata the proposed ANN model generates very good predictions. The results are supported by the linearly distributed coordinates in the scatterplots.
An Integrated Finite Element-based Simulation Framework: From Hole Piercing to Hole Expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Xiaohua; Sun, Xin; Golovashchenko, Sergey F.
An integrated finite element-based modeling framework is developed to predict the hole expansion ratio (HER) of AA6111-T4 sheet by considering the piercing-induced damage around the hole edge. Using damage models and parameters calibrated from previously reported tensile stretchability studies, the predicted HER correlates well with experimentally measured HER values for different hole piercing clearances. The hole piercing model shows burrs are not generated on the sheared surface for clearances less than 20%, which corresponds well with the experimental data on pierced hole cross-sections. The finite-element-calculated HER also is not especially sensitive to piercing clearances below this value. However, as clearances increase to 30% and further to 40%, the HER values are predicted to be considerably smaller, also consistent with experimental measurements. Upon validation, the integrated modeling framework is used to examine the effects of different hole piercing and hole expansion conditions on the critical HERs for AA6111-T4.
Vazquez-Anderson, Jorge; Mihailovic, Mia K; Baldridge, Kevin C; Reyes, Kristofer G; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B; Contreras, Lydia M
2017-05-19
Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA-RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA-RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5΄ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA-mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
In situ Observations of Heliospheric Current Sheets Evolution
NASA Astrophysics Data System (ADS)
Liu, Yong; Peng, Jun; Huang, Jia; Klecker, Berndt
2017-04-01
We investigate the differences in heliospheric current sheet observation times between spacecraft using STEREO, ACE and WIND data. The observations are first compared to a simple theory in which the time difference is determined only by the radial and longitudinal separation between the spacecraft. The predictions fit the observations well except for a few events. Then the time delay caused by the latitudinal separation is taken into consideration. The latitude of each spacecraft is calculated based on the PFSS model, assuming that heliospheric current sheets propagate at the solar wind speed without changing their shapes from the origin to the spacecraft near 1 AU. However, including the latitudinal effects does not improve the prediction, possibly because the PFSS model does not locate the current sheets accurately enough. A new latitudinal delay is then estimated based on the time delays observed in the ACE data. The new method improves the prediction of the time lag between spacecraft; however, further study is needed to predict the location of the heliospheric current sheet more accurately.
Life prediction technologies for aeronautical propulsion systems
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.
1990-01-01
Fatigue and fracture problems continue to occur in aeronautical gas turbine engines. Components whose useful life is limited by these failure modes include turbine hot-section blades, vanes, and disks. Safety considerations dictate that catastrophic failures be avoided, while economic considerations dictate that noncatastrophic failures occur as infrequently as possible. Design decisions therefore involve a tradeoff between engine performance and durability. LeRC has contributed to the aeropropulsion industry in the area of life prediction technology for over 30 years, developing creep and fatigue life prediction methodologies for hot-section materials. At the present time, emphasis is being placed on the development of methods capable of handling both thermal and mechanical fatigue under severe environments. Recent accomplishments include the development of more accurate creep-fatigue life prediction methods such as the total strain version of LeRC's strain-range partitioning (SRP) and the HOST-developed cyclic damage accumulation (CDA) model. Other examples include the development of a more accurate cumulative fatigue damage rule - the double damage curve approach (DDCA), which provides greatly improved accuracy in comparison with usual cumulative fatigue design rules. Other accomplishments in the area of high-temperature fatigue crack growth may also be mentioned. Finally, we are looking to the future and are beginning research on the advanced methods that will be required for the development of advanced materials and propulsion systems over the next 10-20 years.
Soehle, Martin; Wolf, Christina F; Priston, Melanie J; Neuloh, Georg; Bien, Christian G; Hoeft, Andreas; Ellerkmann, Richard K
2015-08-01
Anaesthesia for awake craniotomy aims for an unconscious patient at the beginning and end of surgery but a rapidly awakening and responsive patient during the awake period. Therefore, an accurate pharmacokinetic/pharmacodynamic (PK/PD) model for propofol is required to tailor depth of anaesthesia. To compare the predictive performances of the Marsh and the Schnider PK/PD models during awake craniotomy. A prospective observational study. Single university hospital from February 2009 to May 2010. Twelve patients undergoing elective awake craniotomy for resection of brain tumour or epileptogenic areas. Arterial blood samples were drawn at intervals and the propofol plasma concentration was determined. The prediction error, bias [median prediction error (MDPE)] and inaccuracy [median absolute prediction error (MDAPE)] of the Marsh and the Schnider models were calculated. The secondary endpoint was the prediction probability PK, by which changes in the propofol effect-site concentration (as derived from simultaneous PK/PD modelling) predicted changes in anaesthetic depth (measured by the bispectral index). The Marsh model was associated with a significantly (P = 0.05) higher inaccuracy (MDAPE 28.9 ± 12.0%) than the Schnider model (MDAPE 21.5 ± 7.7%) and tended to reach a higher bias (MDPE Marsh -11.7 ± 14.3%, MDPE Schnider -5.4 ± 20.7%, P = 0.09). MDAPE was outside of accepted limits in six (Marsh model) and two (Schnider model) of 12 patients. The prediction probability was comparable between the Marsh (PK 0.798 ± 0.056) and the Schnider model (PK 0.787 ± 0.055), but after adjusting the models to each individual patient, the Schnider model achieved significantly higher prediction probabilities (PK 0.807 ± 0.056, P = 0.05). When using the 'asleep-awake-asleep' anaesthetic technique during awake craniotomy, we advocate using the PK/PD model proposed by Schnider. Due to considerable interindividual variation, additional monitoring of anaesthetic depth is recommended. ClinicalTrials.gov identifier: NCT 01128465.
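The bias and inaccuracy statistics used here are Varvel-style performance errors, which can be computed in a few lines; the concentration values below are stand-ins for one patient's paired samples.

```python
import numpy as np

c_measured = np.array([2.1, 2.8, 3.4, 3.0, 2.5])   # ug/ml, arterial samples (stand-in)
c_predicted = np.array([2.5, 3.1, 3.2, 2.6, 2.9])  # ug/ml, PK/PD model output (stand-in)

# Performance error per sample, in percent of the predicted concentration.
pe = 100.0 * (c_measured - c_predicted) / c_predicted
mdpe = np.median(pe)           # MDPE: median PE (bias)
mdape = np.median(np.abs(pe))  # MDAPE: median |PE| (inaccuracy)

print(f"MDPE (bias) = {mdpe:.1f}%, MDAPE (inaccuracy) = {mdape:.1f}%")
```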
Automated Diagnosis Coding with Combined Text Representations.
Berndorfer, Stefan; Henriksson, Aron
2017-01-01
Automated diagnosis coding can be provided efficiently by learning predictive models from historical data; however, discriminating between thousands of codes while allowing a variable number of codes to be assigned is extremely difficult. Here, we explore various text representations and classification models for assigning ICD-9 codes to discharge summaries in MIMIC-III. It is shown that the relative effectiveness of the investigated representations depends on the frequency of the diagnosis code under consideration and that the best performance is obtained by combining models built using different representations.
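A minimal sketch of one such representation/classifier pairing: TF-IDF features with one-vs-rest logistic regression as a stand-in for the paper's combined representations. The two notes and their code sets are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

notes = ["pt admitted with chest pain, troponin elevated, stent placed",
         "copd exacerbation, nebulizers started, discharged on steroids"]
codes = [["410.71", "414.01"], ["491.21"]]          # invented ICD-9 label sets

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(codes)                        # multi-label target matrix
X = TfidfVectorizer(ngram_range=(1, 2)).fit_transform(notes)

# One binary classifier per code allows a variable number of codes per note.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X)))        # sanity check on training notes
```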
NASA Astrophysics Data System (ADS)
Burger, Liesl; Forbes, Andrew
2007-09-01
A simple model of a Porro prism laser resonator has been found to correctly predict the formation of the "petal" mode patterns typical of these resonators. A geometrical analysis of the petals suggests that these petals are the lowest-order modes of this type of resonator. Further use of the model reveals the formation of more complex beam patterns, and the nature of these patterns is investigated. Also, the output of stable and unstable resonator modes is presented.
NASA Astrophysics Data System (ADS)
Box, V. G. S.; Evans-Lora, T.
2000-01-01
The molecular modeling program STR3DI.EXE, and its molecular mechanics module, QVBMM, were used to simulate, and evaluate, the stereo-electronic effects in the mono-alkoxides of the 4,6- O-ethylideneglycopyranosides of allose, mannose, galactose and glucose. This study has confirmed the ability of these molecular modeling tools to predict the regiochemistry and reactivity of these sugar derivatives, and holds considerable implications for unraveling the chemistry of the rare monosaccharides.
Mete, Cem
2005-02-01
This paper uses longitudinal survey data from Taiwan to investigate the predictors of elderly mortality. The empirical analysis confirms a relationship between socioeconomic characteristics and mortality, but this relationship weakens considerably when estimates are conditional on health status at the time of the first-wave survey. In terms of predictive power, models with an activities-of-daily-living index fare better than models with self-evaluated health or self-reported illnesses. That said, there is a payoff to considering self-evaluated health jointly with other 'objective' health indicators. Other findings include a strong association between life satisfaction and survival, which prevails even after controlling for other explanatory variables. Copyright © 2004 John Wiley & Sons, Ltd.
Time Factor in the Theory of Anthropogenic Risk Prediction in Complex Dynamic Systems
NASA Astrophysics Data System (ADS)
Ostreikovsky, V. A.; Shevchenko, Ye N.; Yurkov, N. K.; Kochegarov, I. I.; Grishko, A. K.
2018-01-01
The article overviews anthropogenic risk models that take into consideration the development over time of the different factors that influence a complex system. Three classes of mathematical models have been analyzed for use in assessing the anthropogenic risk of complex dynamic systems. These models take the time factor into consideration in determining the prospective safety change of critical systems. The originality of the study lies in the analysis of five time postulates in the theory of anthropogenic risk and the safety of highly important objects. It has to be stressed that the given postulates are still rarely used in practical assessment of the equipment service life of critically important systems. That is why the results of the study presented in the article can be used in safety engineering and the analysis of critically important complex technical systems.
Two-phase model for prediction of cell-free layer width in blood flow
Namgung, Bumseok; Ju, Meongkeun; Cabrales, Pedro; Kim, Sangho
2014-01-01
This study aimed to develop a numerical model capable of predicting changes in the cell-free layer (CFL) width in narrow tubes with consideration of red blood cell aggregation effects. The model development integrates two empirical relations, for relative viscosity (ratio of apparent viscosity to medium viscosity) and core viscosity, measured on independent blood samples, to create a continuum model comprising these two regions. The constitutive relations were derived from in vitro experiments performed with three different glass-capillary tubes (inner diameter = 30, 50 and 100 μm) over a wide range of pseudoshear rates (5-300 s−1). The aggregation tendency of the blood samples was also varied by adding Dextran 500 kDa. Our model predicted that the CFL width was strongly modulated by the relative viscosity function. Aggregation increased the width of the CFL, and this effect became more pronounced at low shear rates. The CFL widths predicted in the present study at high shear conditions were in agreement with those reported in previous studies. However, unlike previous multi-particle models, our model did not require a high computing cost, and it was capable of reproducing results for a thicker CFL width at low shear conditions, depending on the aggregating tendency of the blood. PMID:23116701
Singh, Kunwar P; Gupta, Shikha; Rai, Premanjali
2013-09-01
The research aims to develop global modeling tools capable of categorizing structurally diverse chemicals in various toxicity classes according to the EEC and European Community directives, and to predict their acute toxicity in fathead minnow using a set of selected molecular descriptors. Accordingly, artificial intelligence approach based classification and regression models, such as probabilistic neural networks (PNN), generalized regression neural networks (GRNN), multilayer perceptron neural network (MLPN), radial basis function neural network (RBFN), support vector machines (SVM), gene expression programming (GEP), and decision tree (DT) were constructed using the experimental toxicity data. Diversity and non-linearity in the chemicals' data were tested using the Tanimoto similarity index and Brock-Dechert-Scheinkman statistics. Predictive and generalization abilities of the various models constructed here were compared using several statistical parameters. PNN and GRNN models performed relatively better than MLPN, RBFN, SVM, GEP, and DT. In both the two- and four-category classifications, PNN yielded considerably high classification accuracy in the training (95.85 percent and 90.07 percent) and validation data (91.30 percent and 86.96 percent), respectively. GRNN rendered a high correlation between the measured and model predicted -log LC50 values both for the training (0.929) and validation (0.910) data and low prediction errors (RMSE) of 0.52 and 0.49 for the two sets. Efficiency of the selected PNN and GRNN models in predicting acute toxicity of new chemicals was adequately validated using external datasets of different fish species (fathead minnow, bluegill, trout, and guppy). The PNN and GRNN models showed good predictive and generalization abilities and can be used as tools for predicting toxicities of structurally diverse chemical compounds. Copyright © 2013 Elsevier Inc. All rights reserved.
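GRNN has no scikit-learn implementation, but it is mathematically equivalent to Nadaraya-Watson kernel regression with a Gaussian kernel, which the sketch below implements on random stand-in descriptor data; the smoothing parameter sigma would normally be tuned by cross-validation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network: kernel-weighted average of targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma**2))       # pattern-layer activations
    return (w @ y_train) / w.sum(axis=1)     # summation/output layers

rng = np.random.default_rng(3)
X = rng.random((150, 6))                     # 6 molecular descriptors (stand-in)
y = X @ rng.random(6) + 0.1 * rng.standard_normal(150)   # -log LC50 (stand-in)

pred = grnn_predict(X[:120], y[:120], X[120:])
rmse = np.sqrt(np.mean((pred - y[120:]) ** 2))
print(f"validation RMSE = {rmse:.3f}")
```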
The Politics of Scarcity: A Consideration of Futurist Models of Boom and Doom.
ERIC Educational Resources Information Center
Johnston, Barry V.
The works of 20 futurists and their predictions for the year 2000 and beyond are examined according to four perspectives: Malthusianism, Utopianism (based on theories of William Godwin), Marxism, and social structuralism. Futurists may be grouped into one of the categories according to their theories about the interdependent problems of…
Tools to aid post-wildfire assessment and erosion-mitigation treatment decisions
Peter R. Robichaud; Louise E. Ashmun
2013-01-01
A considerable investment in post-fire research over the past decade has improved our understanding of wildfire effects on soil, hydrology, erosion and erosion-mitigation treatment effectiveness. Using this new knowledge, we have developed several tools to assist land managers with post-wildfire assessment and treatment decisions, such as prediction models, research...
Longitudinal Examination of Optimism, Personal Self-Efficacy and Student Well-Being: A Path Analysis
ERIC Educational Resources Information Center
Phan, Huy P.
2016-01-01
The present longitudinal study, based on existing theoretical tenets, explored a conceptual model that depicted three major orientations: optimism, self-efficacy, and academic well-being. An important question for consideration, in this case, involved the testing of different untested trajectories that could explain and predict individuals'…
NASA Astrophysics Data System (ADS)
Perdigão, R. A. P.
2017-12-01
Predictability assessments are traditionally made on a case-by-case basis, often by running the particular model of interest with randomly perturbed initial/boundary conditions and parameters, producing computationally expensive ensembles. These approaches provide a lumped statistical view of uncertainty evolution, without eliciting the fundamental processes and interactions at play in the uncertainty dynamics. In order to address these limitations, we introduce a systematic dynamical framework for predictability assessment and forecast, by analytically deriving governing equations of predictability in terms of the fundamental architecture of dynamical systems, independent of any particular problem under consideration. The framework further relates multiple uncertainty sources along with their coevolutionary interplay, enabling a comprehensive and explicit treatment of uncertainty dynamics along time, without requiring the actual model to be run. In doing so, computational resources are freed and a quick and effective a-priori systematic dynamic evaluation is made of predictability evolution and its challenges, including aspects in the model architecture and intervening variables that may require optimization ahead of initiating any model runs. It further brings out universal dynamic features in the error dynamics elusive to any case specific treatment, ultimately shedding fundamental light on the challenging issue of predictability. The formulated approach, framed with broad mathematical physics generality in mind, is then implemented in dynamic models of nonlinear geophysical systems with various degrees of complexity, in order to evaluate their limitations and provide informed assistance on how to optimize their design and improve their predictability in fundamental dynamical terms.
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
A fuzzy mathematical model of West Java population with logistic growth model
NASA Astrophysics Data System (ADS)
Nurkholipah, N. S.; Amarti, Z.; Anggriani, N.; Supriatna, A. K.
2018-03-01
In this paper we develop a mathematical model of population growth in the West Java Province, Indonesia. The model takes the form of a logistic differential equation. We parameterize the model using several triples of data and choose the best triple, which has the smallest Mean Absolute Percentage Error (MAPE). The resulting model is able to reproduce the historical data with high accuracy and is also able to predict the future population number. Predicting the future population is an important factor in preparing good management policies for the population. Several experiments are done to look at the effect of impreciseness in the data. This is done by applying a fuzzy initial value to the crisp model, assuming that the model propagates the fuzziness of the independent variable to the dependent variable. We assume here a triangular fuzzy number representing the impreciseness in the data. We found that the fuzziness may disappear in the long term. Other scenarios are also investigated, such as the effect of fuzzy parameters on the crisp initial value of the population. The solution of the model is obtained numerically using the fourth-order Runge-Kutta scheme.
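A sketch of the underlying computation: the logistic equation is integrated with classical RK4 for the lower, modal and upper branches of a triangular fuzzy initial value. The growth rate, carrying capacity and initial values are illustrative assumptions, not the paper's fitted parameters.

```python
import numpy as np

r, K = 0.02, 60e6            # growth rate (1/yr) and carrying capacity (assumed)
f = lambda n: r * n * (1.0 - n / K)   # logistic right-hand side dN/dt

def rk4(n0, years, h=0.1):
    """Classical fourth-order Runge-Kutta integration of the logistic ODE."""
    n = n0
    for _ in range(int(years / h)):
        k1 = f(n); k2 = f(n + h * k1 / 2)
        k3 = f(n + h * k2 / 2); k4 = f(n + h * k3)
        n += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return n

# Triangular fuzzy initial population: propagate its three representatives.
for label, n0 in zip(("lower", "modal", "upper"), (42e6, 43e6, 44e6)):
    print(f"{label}: N(30 yr) = {rk4(n0, 30):.3e}")
# All three branches approach K, illustrating how the fuzziness can shrink
# in the long term, as noted above.
```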
Shreffler, Karina M; Johnson, David R
2013-09-01
Prior research indicates a negative relationship between women's labor force participation and fertility at the individual level in the United States, but little is known about the reasons for this relationship beyond work hours. We employed discrete event history models using panel data from the National Survey of Families and Households (N = 2,411) and found that the importance of career considerations mediates the work hours/fertility relationship. Further, fertility intentions and the importance of career considerations were more predictive of birth outcomes as women's work hours increase. Ultimately, our findings challenge the assumption that working more hours is the direct cause of employed women having fewer children and highlight the importance of career and fertility preferences in fertility outcomes.
NASA Astrophysics Data System (ADS)
Wang, Han; Yan, Jie; Liu, Yongqian; Han, Shuang; Li, Li; Zhao, Jing
2017-11-01
Increasing the accuracy of wind speed prediction lays a solid foundation for the reliability of wind power forecasting. Most traditional correction methods for wind speed prediction establish a mapping relationship between the wind speed of the numerical weather prediction (NWP) and the historical measurement data (HMD) at the corresponding time slot, which ignores the time-dependent structure of the wind speed time series. In this paper, a multi-step-ahead wind speed prediction correction method is proposed that takes into account the passing effects of wind speed from the previous time slot. To this end, the proposed method employs both NWP and HMD as model inputs and training labels. First, a probabilistic analysis of the NWP deviation for different wind speed bins is carried out to illustrate the inadequacy of the traditional time-independent mapping strategy. Then, a support vector machine (SVM) is utilized as an example to implement the proposed mapping strategy and to establish the correction model for all wind speed bins. A wind farm in northern China is taken as an example to validate the proposed method. Three benchmark wind speed prediction methods are used to compare performance. The results show that the proposed model has the best performance at different time horizons.
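The proposed input mapping can be sketched with support vector regression, one-step-ahead for brevity, with synthetic series standing in for the NWP and measured data; the bin-wise model structure of the paper is omitted here.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
truth = 8 + 2 * np.sin(np.arange(2000) / 50) + rng.standard_normal(2000)
nwp = truth + 1.5 * rng.standard_normal(2000)   # biased, noisy NWP stand-in

# Inputs combine the NWP at time t with the measured speed at t-1
# (the "passing effect" from the previous time slot).
X = np.column_stack([nwp[1:], truth[:-1]])
y = truth[1:]

model = SVR(kernel="rbf", C=10.0).fit(X[:1500], y[:1500])
rmse_raw = np.sqrt(np.mean((nwp[1501:] - y[1500:]) ** 2))
rmse_cor = np.sqrt(np.mean((model.predict(X[1500:]) - y[1500:]) ** 2))
print(f"RMSE raw NWP = {rmse_raw:.2f} m/s, corrected = {rmse_cor:.2f} m/s")
```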
Univariate Time Series Prediction of Solar Power Using a Hybrid Wavelet-ARMA-NARX Prediction Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nazaripouya, Hamidreza; Wang, Yubo; Chu, Chi-Cheng
This paper proposes a new hybrid method for super short-term solar power prediction. Solar output power usually has complex, nonstationary, and nonlinear characteristics due to the intermittent and time-varying behavior of solar radiance. In addition, solar power dynamics are fast and essentially inertia-free. An accurate super short-term prediction is required to compensate for the fluctuations and reduce the impact of solar power penetration on the power system. The objective is to predict one-step-ahead solar power generation based only on historical solar power time series data. The proposed method incorporates discrete wavelet transform (DWT), Auto-Regressive Moving Average (ARMA) models, and Recurrent Neural Networks (RNN), where the RNN architecture is based on Nonlinear Auto-Regressive models with eXogenous inputs (NARX). The wavelet transform is utilized to decompose the solar power time series into a set of better-behaved component series for prediction. The ARMA model is employed as a linear predictor, while NARX is used as a nonlinear pattern recognition tool to estimate and compensate for the error of the wavelet-ARMA prediction. The proposed method is applied to data captured from UCLA solar PV panels, and the results are compared with some of the most common and recent solar power prediction methods. The results validate the effectiveness of the proposed approach and show a considerable improvement in prediction precision.
Control theory for scanning probe microscopy revisited.
Stirling, Julian
2014-01-01
We derive a theoretical model for studying SPM feedback in the context of control theory. Previous models presented in the literature, which apply standard models for proportional-integral-derivative controllers, predict a highly unstable feedback environment. Our model uses features specific to the SPM implementation of the proportional-integral controller to give realistic feedback behaviour. As such, the stability of SPM feedback over a wide range of feedback gains can be understood. Further consideration of the mechanical responses of the SPM system gives insight into the causes of excitation of the scanner's mechanical resonances during feedback operation.
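For readers unfamiliar with the loop in question, the following toy Python sketch shows a discrete proportional-integral controller tracking a tunneling-like exponential signal over a moving surface; the plant model, gains, and log-error formulation are illustrative assumptions rather than the paper's derivation.

```python
# Toy discrete PI feedback loop of the kind used for SPM z-control.
import numpy as np

def plant(z, surface):
    # Exponential distance dependence, loosely mimicking a tunneling-type signal.
    return np.exp(-2.0 * (z - surface))

setpoint, kp, ki, dt = 1.0, 0.05, 2.0, 1e-4   # illustrative gains, not derived values
z, integral = 0.5, 0.0
surface_profile = 0.1 * np.sin(np.linspace(0, 4 * np.pi, 5000))

errors = []
for s in surface_profile:
    error = np.log(plant(z, s) / setpoint)    # log error, common in SPM z-loops
    integral += error * dt
    z += kp * error + ki * integral           # discrete PI update moves the tip
    errors.append(abs(error))

print("mean |log error| over the scan:", np.mean(errors).round(4))
```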
Analysis and Modeling of Ground Operations at Hub Airports
NASA Technical Reports Server (NTRS)
Atkins, Stephen (Technical Monitor); Andersson, Kari; Carr, Francis; Feron, Eric; Hall, William D.
2000-01-01
Building simple and accurate models of hub airports can considerably help one understand airport dynamics, and may provide quantitative estimates of operational airport improvements. In this paper, three models are proposed to capture the dynamics of busy hub airport operations. Two simple queuing models are introduced to capture the taxi-out and taxi-in processes, and an integer programming model of airline decision-making attempts to capture the dynamics of the aircraft turnaround process. These models can be applied for predictive purposes, and they may also be used to evaluate control strategies for improving overall airport efficiency.
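As a flavor of how little machinery such a queuing model needs, here is a single-server taxi-out sketch in Python; the pushback and service rates are invented for illustration and do not come from the paper's data.

```python
# Hedged sketch: departures queue for one runway; taxi-out time = wait + service.
import numpy as np

rng = np.random.default_rng(2)
pushbacks = np.sort(rng.uniform(0, 3600, 60))   # 60 pushbacks over one hour (seconds)
mean_service = 90.0                              # assumed mean runway occupancy (seconds)

runway_free = 0.0
taxi_out = []
for t in pushbacks:
    start = max(t, runway_free)                  # queue if the runway is still busy
    runway_free = start + rng.exponential(mean_service)
    taxi_out.append(runway_free - t)

print(f"mean taxi-out: {np.mean(taxi_out):.0f} s, max: {np.max(taxi_out):.0f} s")
```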
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Fan; Parker, Jack C.; Brooks, Scott C
This study investigated sorption of uranium and technetium onto aluminum and iron hydroxides during titration of a contaminated groundwater using both Na hydroxide and carbonate as titrants. The contaminated groundwater has a low pH of 3.8 and high concentrations of NO3-, SO42-, Al, Ca, Mg, Mn, trace metals such as Ni and Co, and radionuclides such as U and Tc. During titration, most Al and Fe were precipitated out at pH above ~4.5. U as well as Tc was found to be removed from the aqueous phase at pH below ~5.5, but partially released at higher pH values. An earlier geochemical equilibrium reaction path model that considered aqueous complexation and precipitation/dissolution reactions predicted mineral precipitation and adequately described concentration variations of Al, Fe, and some other metal cations, but failed to predict sulfate, U, and Tc concentrations during titration. Previous studies have shown that Fe- and Al-oxyhydroxides strongly sorb dissolved sulfate, U, and Tc species. Therefore, an anion exchange model was developed for the sorption of sulfate, U, and Tc onto Al and Fe hydroxides. With the additional consideration of the anion exchange reactions, concentration profiles of sulfate, U, and Tc were more accurately predicted. Results of this study indicate that consideration of complex reactions such as sorption/desorption on mixed mineral phases, in addition to hydrolysis and precipitation, could improve the prediction of various contaminants during pre- and post-groundwater treatment practices.
Electronic stopping powers for heavy ions in SiC and SiO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, K.; Xue, H.; Zhang, Y., E-mail: Zhangy1@ornl.gov
2014-01-28
Accurate information on electronic stopping power is fundamental for broad advances in materials science, the electronics industry, space exploration, and sustainable energy technologies. In the case of slow heavy ions in light targets, current codes and models provide significantly inconsistent predictions, among which the Stopping and Range of Ions in Matter (SRIM) code is the most commonly used. Experimental evidence, however, has demonstrated considerable errors in the predicted ion and damage profiles based on SRIM stopping powers. In this work, electronic stopping powers for Cl, Br, I, and Au ions are experimentally determined in two important functional materials, SiC and SiO2, based on a single-ion technique, and new electronic stopping power values are derived over the energy regime from 0 to 15 MeV, where large deviations from the SRIM predictions are observed. As an experimental validation, Rutherford backscattering spectrometry (RBS) and secondary ion mass spectrometry (SIMS) are used to measure the depth profiles of implanted Au ions in SiC for energies from 700 keV to 15 MeV. The measured ion distributions by both RBS and SIMS are considerably deeper than the SRIM predictions, but agree well with predictions based on our derived stopping powers.
Modeling the 21 August 2017 Total Solar Eclipse: Prediction Results and New Techniques
NASA Astrophysics Data System (ADS)
Downs, C.; Mikic, Z.; Caplan, R. M.; Linker, J.; Lionello, R.; Torok, T.; Titov, V. S.; Riley, P.; MacKay, D.; Upton, L.
2017-12-01
As has been our tradition for past solar eclipses, we conducted a high-resolution magnetohydrodynamic (MHD) simulation of the corona to predict the appearance of the 21 August 2017 solar eclipse. In this presentation, we discuss our model setup and our forward-modeled predictions for the corona's appearance, including images of polarized brightness and EUV/soft X-ray emission. We show how the combination of forward-modeled observables and knowledge of the underlying magnetic field from the model can be used to interpret the structures seen during the eclipse. We also discuss two new features added to this year's prediction. First, in an attempt to improve the morphological shape of streamers in the low corona, we energize the large-scale magnetic field by emerging shear and canceling flux within filament channels. The handedness of the shear is deduced from a magnetofrictional model, which is driven by the evolving photospheric field produced by the Advective Flux Transport model. Second, we apply our new wave-turbulence-driven (WTD) model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the magnetic field, a key property for modeling the thermal-magnetic structure of the corona. We examine the effect of these considerations on forward-modeled observables, and present them in the context of our final 2017 eclipse prediction (www.predsci.com/corona/aug2017eclipse). Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.
Edwards, T.C.; Cutler, D.R.; Zimmermann, N.E.; Geiser, L.; Moisen, Gretchen G.
2006-01-01
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracies were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating that the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables casts doubt on ecological interpretations derived from prediction models based on non-probabilistic sample surveys. © 2006 Elsevier B.V. All rights reserved.
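The paper's central methodological point, that resubstitution accuracy flatters a classification tree while cross-validation comes closer to true predictive accuracy, is easy to demonstrate; a minimal Python sketch with synthetic stand-in data follows.

```python
# Hedged sketch: resubstitution vs cross-validation accuracy for a classification tree.
# The synthetic presence/absence data stand in for the lichen survey data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, n_informative=4, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

resub = tree.score(X, y)   # optimistic: the tree is scored on its own training data
cv = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10).mean()
print(f"resubstitution accuracy: {resub:.2f}, 10-fold CV accuracy: {cv:.2f}")
```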
The interface of protein structure, protein biophysics, and molecular evolution
Liberles, David A; Teichmann, Sarah A; Bahar, Ivet; Bastolla, Ugo; Bloom, Jesse; Bornberg-Bauer, Erich; Colwell, Lucy J; de Koning, A P Jason; Dokholyan, Nikolay V; Echave, Julian; Elofsson, Arne; Gerloff, Dietlind L; Goldstein, Richard A; Grahnen, Johan A; Holder, Mark T; Lakner, Clemens; Lartillot, Nicholas; Lovell, Simon C; Naylor, Gavin; Perica, Tina; Pollock, David D; Pupko, Tal; Regan, Lynne; Roger, Andrew; Rubinstein, Nimrod; Shakhnovich, Eugene; Sjölander, Kimmen; Sunyaev, Shamil; Teufel, Ashley I; Thorne, Jeffrey L; Thornton, Joseph W; Weinreich, Daniel M; Whelan, Simon
2012-01-01
The interface of protein structural biology, protein biophysics, molecular evolution, and molecular population genetics forms the foundations for a mechanistic understanding of many aspects of protein biochemistry. Current efforts in interdisciplinary protein modeling are in their infancy, and the state-of-the-art of such models is described. Beyond the relationship between amino acid substitution and static protein structure, protein function, and corresponding organismal fitness, other considerations are also discussed. More complex mutational processes such as insertion and deletion, domain rearrangements, and even circular permutations should be evaluated. The role of intrinsically disordered proteins is still controversial, but may be increasingly important to consider. Protein geometry and protein dynamics, as a deviation from static considerations of protein structure, are also important. Protein expression level is known to be a major determinant of evolutionary rate, and several considerations, including selection at the mRNA level and the role of interaction specificity, are discussed. Lastly, the relationship between modeling and needed high-throughput experimental data, as well as experimental examination of protein evolution using ancestral sequence resurrection and in vitro biochemistry, are presented, towards the aim of ultimately generating better models for biological inference and prediction. PMID:22528593
(PS)2: protein structure prediction server version 3.0.
Huang, Tsun-Tsao; Hwang, Jenn-Kang; Chen, Chu-Huang; Chu, Chih-Sheng; Lee, Chi-Wen; Chen, Chih-Chieh
2015-07-01
Protein complexes are involved in many biological processes. Examining coupling between subunits of a complex would be useful for understanding the molecular basis of protein function. Here, our updated (PS)2 web server predicts the three-dimensional structures of protein complexes based on comparative modeling; furthermore, this server examines the coupling between subunits of the predicted complex by combining structural and evolutionary considerations. The predicted complex structure can be displayed and visualized by Java-based 3D graphics viewers, and the structural and evolutionary profiles are shown and compared chain-by-chain. For each subunit, considering or ignoring the packing contribution of the other subunits changes the similarity between the structural and evolutionary profiles, and these differences imply which form, complex or monomeric, is preferred in the biological condition for that subunit. We believe that the (PS)2 server will be a useful tool for biologists who are interested not only in the structures of protein complexes but also in the coupling between their subunits. (PS)2 is freely available at http://ps2v3.life.nctu.edu.tw/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Taslimitehrani, Vahid; Dong, Guozhu; Pereira, Naveen L; Panahiazar, Maryam; Pathak, Jyotishman
2016-04-01
Computerized survival prediction in healthcare, which identifies the risk of disease mortality, helps healthcare providers manage their patients effectively by informing appropriate treatment options. In this study, we propose to apply a classification algorithm, Contrast Pattern Aided Logistic Regression (CPXR(Log)) with a probabilistic loss function, to develop and validate prognostic risk models to predict 1-, 2-, and 5-year survival in heart failure (HF) using data from electronic health records (EHRs) at Mayo Clinic. CPXR(Log) constructs a pattern-aided logistic regression model defined by several patterns and corresponding local logistic regression models. One of the models generated by CPXR(Log) achieved an AUC and accuracy of 0.94 and 0.91, respectively, and significantly outperformed prognostic models reported in prior studies. Data extracted from EHRs allowed incorporation of patient co-morbidities into our models, which helped improve the performance of the CPXR(Log) models (15.9% AUC improvement), although it did not improve the accuracy of the models built by other classifiers. We also propose a probabilistic loss function to determine the large-error and small-error instances. The new loss function used in the algorithm outperforms functions used in previous studies by a 1% improvement in AUC. This study revealed that using EHR data to build prediction models can be very challenging with existing classification methods due to the high dimensionality and complexity of EHR data. The risk models developed by CPXR(Log) also reveal that HF is a highly heterogeneous disease, i.e., different subgroups of HF patients require different types of considerations in their diagnosis and treatment. Our risk models provided two valuable insights for the application of predictive modeling techniques in biomedicine: logistic risk models often make systematic prediction errors, and it is prudent to use subgroup-based prediction models such as those given by CPXR(Log) when investigating heterogeneous diseases. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Contreras, S.; Baugh, C. M.; Norberg, P.; Padilla, N.
2015-09-01
We demonstrate how the properties of a galaxy depend on the mass of its host dark matter subhalo, using two independent models of galaxy formation. For the cases of stellar mass and black hole mass, the median property value displays a monotonic dependence on subhalo mass. The slope of the relation changes for subhalo masses for which heating by active galactic nuclei becomes important. The median property values are predicted to be remarkably similar for central and satellite galaxies. The two models predict considerable scatter around the median property value, though the size of the scatter is model dependent. There is only modest evolution with redshift in the median galaxy property at a fixed subhalo mass. Properties such as cold gas mass and star formation rate, however, are predicted to have a complex dependence on subhalo mass. In these cases, subhalo mass is not a good indicator of the value of the galaxy property. We illustrate how the predictions in the galaxy property-subhalo mass plane differ from the assumptions made in some empirical models of galaxy clustering by reconstructing the model output using a basic subhalo abundance matching scheme. In its simplest form, abundance matching generally does not reproduce the clustering predicted by the models, typically resulting in an overprediction of the clustering signal. Using the predictions of the galaxy formation model for the correlations between pairs of galaxy properties, the basic abundance matching scheme can be extended to reproduce the model predictions more faithfully for a wider range of galaxy properties. Our results have implications for the analysis of galaxy clustering, particularly for low abundance samples.
Are We Predicting the Actual or Apparent Distribution of Temperate Marine Fishes?
Monk, Jacquomo; Ierodiaconou, Daniel; Harvey, Euan; Rattray, Alex; Versace, Vincent L.
2012-01-01
Planning for resilience is the focus of many marine conservation programs and initiatives. These efforts aim to inform conservation strategies for marine regions to ensure they have inbuilt capacity to retain biological diversity and ecological function in the face of global environmental change, particularly changes in climate and resource exploitation. In the absence of direct biological and ecological information for many marine species, scientists are increasingly using spatially explicit, predictive-modeling approaches. Through improved access to multibeam sonar and underwater video technology, these models provide spatial predictions of the most suitable regions for an organism at resolutions previously not possible. However, sensible-looking, well-performing models can provide very different predictions of distribution depending on which occurrence dataset is used. To examine this, we construct species distribution models for nine temperate marine sedentary fishes for a 25.7 km2 study region off the coast of southeastern Australia. We use a generalized linear model (GLM), a generalized additive model (GAM), and maximum entropy (MAXENT) to build models based on co-located occurrence datasets derived from two underwater video methods (i.e., baited and towed video) and fine-scale multibeam-sonar-based seafloor habitat variables. Overall, this study found that the choice of modeling approach did not considerably influence the prediction of distributions based on the same occurrence dataset. However, greater dissimilarity between model predictions was observed across the nine fish taxa when the two occurrence datasets were compared (relative to models based on the same dataset). Based on these results, it is difficult to draw any general conclusions regarding which video method provides more reliable occurrence datasets. Nonetheless, we suggest that such predictions reflect the species' apparent distribution (i.e., a combination of the species' distribution and the probability of detecting it). Consequently, we also encourage researchers and marine managers to interpret model predictions carefully. PMID:22536325
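The detectability effect is easy to reproduce. The hedged Python sketch below fits the same logistic-regression distribution model to two synthetic occurrence datasets that differ only in detection probability (standing in for baited versus towed video); everything in it is an illustrative assumption.

```python
# Two imperfect detection processes observing one underlying distribution produce
# two different "apparent distribution" maps from the same model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
env = rng.normal(size=(500, 3))                        # seafloor habitat covariates
p_true = 1 / (1 + np.exp(-(1.5 * env[:, 0] - env[:, 1])))

baited = rng.binomial(1, p_true * 0.9)                 # higher assumed detectability
towed = rng.binomial(1, p_true * 0.5)                  # lower assumed detectability

pred_b = LogisticRegression().fit(env, baited).predict_proba(env)[:, 1]
pred_t = LogisticRegression().fit(env, towed).predict_proba(env)[:, 1]
print("correlation of the two apparent-distribution maps:",
      np.corrcoef(pred_b, pred_t)[0, 1].round(2))
```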
NASA Astrophysics Data System (ADS)
Bennett, J.; David, R. E.; Wang, Q.; Li, M.; Shrestha, D. L.
2016-12-01
Flood forecasting in Australia has historically relied on deterministic forecasting models run only when floods are imminent, with considerable forecaster input and interpretation. These now co-exist with a continually available 7-day streamflow forecasting service (also deterministic) aimed at operational water management applications such as environmental flow releases; the 7-day service is not optimised for flood prediction. We describe progress on developing a system for ensemble streamflow forecasting that is suitable for both flood prediction and water management applications. Precipitation uncertainty is handled through post-processing of Numerical Weather Prediction (NWP) output with a Bayesian rainfall post-processor (RPP). The RPP corrects biases, downscales NWP output, and produces reliable ensemble spread. Ensemble precipitation forecasts are used to force a semi-distributed conceptual rainfall-runoff model. Uncertainty in precipitation forecasts is insufficient to reliably describe streamflow forecast uncertainty, particularly at shorter lead times, so we characterise hydrological prediction uncertainty separately with a 4-stage error model. The error model relies on data transformation to ensure residuals are homoscedastic and symmetrically distributed. To ensure streamflow forecasts are accurate and reliable, the residuals are modelled using a mixture-Gaussian distribution with distinct parameters for the rising and falling limbs of the forecast hydrograph. In a case study of the Murray River in south-eastern Australia, we show that ensemble predictions of floods generally have lower errors than deterministic forecasting methods. We also discuss some of the challenges in operationalising short-term ensemble streamflow forecasts in Australia, including meeting the need for accurate predictions across all flow ranges and comparing forecasts generated by event and continuous hydrological models.
Hydrocode predictions of collisional outcomes: Effects of target size
NASA Technical Reports Server (NTRS)
Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.
1991-01-01
Traditionally, laboratory impact experiments designed to simulate asteroid collisions have attempted to establish a predictive capability for collisional outcomes given a particular set of initial conditions. Unfortunately, laboratory experiments are restricted to targets considerably smaller than the modelled objects, so some methodology is needed for extrapolating the extensive experimental results to the size regime of interest. We report results obtained with a two-dimensional hydrocode based on 2-D SALE and modified to include strength effects and the fragmentation equations. The hydrocode was tested by comparing its predictions for post-impact fragment size distributions to those observed in laboratory impact experiments.
Orbit Determination for the Lunar Reconnaissance Orbiter Using an Extended Kalman Filter
NASA Technical Reports Server (NTRS)
Slojkowski, Steven; Lowe, Jonathan; Woodburn, James
2015-01-01
Orbit determination (OD) analysis results are presented for the Lunar Reconnaissance Orbiter (LRO) using a commercially available Extended Kalman Filter, Analytical Graphics' Orbit Determination Tool Kit (ODTK). Process noise models for lunar gravity and solar radiation pressure (SRP) are described and OD results employing the models are presented. Definitive accuracy using ODTK meets mission requirements and is better than that achieved using the operational LRO OD tool, the Goddard Trajectory Determination System (GTDS). Results demonstrate that a Vasicek stochastic model produces better estimates of the coefficient of solar radiation pressure than a Gauss-Markov model, and prediction accuracy using a Vasicek model meets mission requirements over the analysis span. Modeling the effect of antenna motion on range-rate tracking considerably improves residuals and filter-smoother consistency. Inclusion of off-axis SRP process noise and generalized process noise improves filter performance for both definitive and predicted accuracy. Definitive accuracy from the smoother is better than achieved using GTDS and is close to that achieved by precision OD methods used to generate definitive science orbits. Use of a multi-plate dynamic spacecraft area model with ODTK's force model plugin capability provides additional improvements in predicted accuracy.
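To illustrate the distinction between the two stochastic models compared above: a first-order Gauss-Markov process reverts to zero, whereas a Vasicek process reverts to a nonzero long-run mean, which better suits a slowly varying physical quantity like the coefficient of solar radiation pressure. The Python sketch below uses invented parameters and is not ODTK's internal formulation.

```python
# Hedged comparison of Gauss-Markov (mean zero) vs Vasicek (nonzero long-run mean).
import numpy as np

rng = np.random.default_rng(4)
dt, tau, sigma, theta = 60.0, 86400.0, 0.01, 1.3   # step (s), time constant (s), noise, assumed Cr mean
phi = np.exp(-dt / tau)                            # exponential correlation per step
q = sigma * np.sqrt(1 - phi**2)                    # discrete process noise

gm, va = [0.0], [theta]
for _ in range(5000):
    w = rng.normal(0, q)
    gm.append(phi * gm[-1] + w)                    # Gauss-Markov: pulled toward 0
    va.append(theta + phi * (va[-1] - theta) + w)  # Vasicek: pulled toward theta
print("GM mean:", np.mean(gm).round(3), " Vasicek mean:", np.mean(va).round(3))
```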
Soil Moisture Memory in Climate Models
NASA Technical Reports Server (NTRS)
Koster, Randal D.; Suarez, Max J.; Zukor, Dorothy J. (Technical Monitor)
2000-01-01
Water balance considerations at the soil surface lead to an equation that relates the autocorrelation of soil moisture in climate models to (1) seasonality in the statistics of the atmospheric forcing, (2) the variation of evaporation with soil moisture, (3) the variation of runoff with soil moisture, and (4) persistence in the atmospheric forcing, as perhaps induced by land-atmosphere feedback. Geographical variations in the relative strengths of these factors, which can be established through analysis of model diagnostics and which can be validated to a certain extent against observations, lead to geographical variations in simulated soil moisture memory and thus, in effect, to geographical variations in seasonal precipitation predictability associated with soil moisture. The use of the equation to characterize controls on soil moisture memory is demonstrated with data from the modeling system of the NASA Seasonal-to-Interannual Prediction Project.
Fawsitt, Christopher G; Bourke, Jane; Greene, Richard A; McElroy, Brendan; Krucien, Nicolas; Murphy, Rosemary; Lutomski, Jennifer E
2017-11-01
In many countries, there has been a considerable shift towards providing a more woman-centred maternity service, which affords greater consumer choice. Maternity service provision in Ireland is set to follow this trend with policymakers committed to improving maternal choice at hospital level. However, women's preferences for maternity care are unknown, as is the expected demand for new services. In this paper, we used a discrete choice experiment (DCE) to (1) investigate women's strengths of preference for different features of maternity care; (2) predict market uptake for consultant- and midwifery-led care, and a hybrid model of care called the Domiciliary In and Out of Hospital Care scheme; and (3) calculate the welfare change arising from the provision of these services. Women attending antenatal care across two teaching hospitals in Ireland were invited to participate in the study. Women's preferred model of care resembled the hybrid model of care, with considerably more women expected to utilise this service than either consultant- or midwifery-led care. The benefit of providing all three services proved considerably greater than the benefit of providing two or fewer services. From a priority setting perspective, pursuing all three models of care would generate a considerable welfare gain, although the cost-effectiveness of such an approach needs to be considered. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
Bauer, Julia; Chen, Wenjing; Nischwitz, Sebastian; Liebl, Jakob; Rieken, Stefan; Welzel, Thomas; Debus, Juergen; Parodi, Katia
2018-04-24
A reliable Monte Carlo prediction of proton-induced brain tissue activation, used for comparison to particle therapy positron-emission-tomography (PT-PET) measurements, is crucial for in vivo treatment verification. Major limitations to overcome in current approaches include the CT-based patient model and the description of activity washout due to tissue perfusion. Two approaches were studied to improve the activity prediction for brain irradiation: (i) a refined patient model using tissue classification based on MR information and (ii) a PT-PET data-driven refinement of washout model parameters. Improvements of the activity predictions compared to post-treatment PT-PET measurements were assessed in terms of activity profile similarity for six patients treated with a single field or two almost parallel fields delivered by active proton beam scanning. The refined patient model yields generally higher similarity for most of the patients, except in highly pathological areas that lead to tissue misclassification. Using washout model parameters deduced from clinical patient data considerably improved the activity profile similarity for all patients. Current methods used to predict proton-induced brain tissue activation can thus be improved with MR-based tissue classification and data-driven washout parameters, providing a more reliable basis for PT-PET verification. Copyright © 2018 Elsevier B.V. All rights reserved.
Predicting Electrostatic Forces in RNA Folding
Tan, Zhi-Jie; Chen, Shi-Jie
2016-01-01
Metal-ion-mediated electrostatic interactions are critical to RNA folding. Although considerable progress has been made in mechanistic studies, the problem of accurately predicting ion effects in RNA folding remains unsolved, mainly due to the complexity of several potentially important issues such as ion correlation and dehydration effects. In this chapter, after giving a brief overview of the experimental findings and theoretical approaches, we focus on a recently developed model, the tightly bound ion (TBI) model, for ion electrostatics in RNA folding. The model is unique because it can treat ion correlation and fluctuation effects for realistic RNA 3D structures. For monovalent ion (such as Na+) solutions, where ion correlation is weak, TBI and the Poisson-Boltzmann (PB) theory give the same results, and the results agree with the experimental data. For multivalent ion (such as Mg2+) solutions, where ion correlation can be strong, TBI gives much better predictions than PB. Moreover, the model suggests an ion correlation-induced mechanism for the unusual efficiency of Mg2+ ions in stabilizing RNA tertiary folds. In this chapter, after introducing the theoretical framework of the TBI model, we describe how to apply the model to predict ion-binding properties and ion-dependent folding stabilities. PMID:20946803
NASA Astrophysics Data System (ADS)
Ji, Zhaojie; Guan, Zhidong; Li, Zengshan
2017-10-01
In this paper, a progressive damage model was established on the basis of the ABAQUS software for predicting permanent indentation and impact damage in composite laminates. Intralaminar and interlaminar damage was modelled based on continuum damage mechanics (CDM) in the finite element model. For verification of the model, low-velocity impact tests of quasi-isotropic laminates with the T300/5228A material system were conducted. Permanent indentation and impact damage of the laminates were simulated, and the numerical results agree well with the experiments. It can be concluded that an obvious knee point can be identified on the curve of indentation depth versus impact energy. Matrix cracking and delamination develop rapidly with increasing impact energy, while a considerable amount of fiber breakage occurs only when the impact energy exceeds the energy corresponding to the knee point. The predicted indentation depth after the knee point is very sensitive to the parameter μ proposed in this paper, whose acceptable values range from 0.9 to 1.0.
A review of predictive nonlinear theories for multiscale modeling of heterogeneous materials
NASA Astrophysics Data System (ADS)
Matouš, Karel; Geers, Marc G. D.; Kouznetsova, Varvara G.; Gillman, Andrew
2017-02-01
Since the beginning of the industrial age, material performance and design have been in the midst of innovation of many disruptive technologies. Today's electronics, space, medical, transportation, and other industries are enriched by development, design and deployment of composite, heterogeneous and multifunctional materials. As a result, materials innovation is now considerably outpaced by other aspects from component design to product cycle. In this article, we review predictive nonlinear theories for multiscale modeling of heterogeneous materials. Deeper attention is given to multiscale modeling in space and to computational homogenization in addressing challenging materials science questions. Moreover, we discuss a state-of-the-art platform in predictive image-based, multiscale modeling with co-designed simulations and experiments that executes on the world's largest supercomputers. Such a modeling framework consists of experimental tools, computational methods, and digital data strategies. Once fully completed, this collaborative and interdisciplinary framework can be the basis of Virtual Materials Testing standards and aids in the development of new material formulations. Moreover, it will decrease the time to market of innovative products.
Time series modelling of increased soil temperature anomalies during long period
NASA Astrophysics Data System (ADS)
Shirvani, Amin; Moradi, Farzad; Moosavi, Ali Akbar
2015-10-01
Soil temperature just beneath the soil surface is highly dynamic, has a direct impact on plant seed germination, and is probably the most distinct and recognisable factor governing emergence. An autoregressive integrated moving average (ARIMA) stochastic model was developed to predict the weekly soil temperature anomalies at 10 cm depth, one of the most important soil parameters. The weekly soil temperature anomalies for the periods January 1986-December 2011 and January 2012-December 2013 were used to construct and test the ARIMA models. The proposed ARIMA(2,1,1) model had the minimum value of the Akaike information criterion, and its estimated coefficients differed from zero at the 5% significance level. Prediction of the weekly soil temperature anomalies during the test period using this model showed a high correlation coefficient between the observed and predicted data, 0.99 for a lead time of one week. Linear trend analysis indicated that the soil temperature anomalies warmed significantly, by 1.8°C, during the period 1986-2011.
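Fitting the named ARIMA(2,1,1) model and issuing a one-week-ahead forecast takes only a few lines with statsmodels; the anomaly series below is synthetic and stands in for the observed 1986-2013 data.

```python
# Hedged sketch: fit ARIMA(2,1,1) to a synthetic weekly anomaly series and forecast.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
weeks = 52 * 26
trend = np.linspace(0, 1.8, weeks)                 # ~1.8 deg C warming, as reported
anom = trend + rng.normal(0, 0.4, weeks).cumsum() * 0.05

fit = ARIMA(anom, order=(2, 1, 1)).fit()
print(fit.summary().tables[1])                     # AR/MA coefficient estimates
print("1-week-ahead forecast:", fit.forecast(steps=1))
```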
The importance of radiation for semiempirical water-use efficiency models
Boese, Sven; Jung, Martin; Carvalhais, Nuno; ...
2017-06-22
Water-use efficiency (WUE) is a fundamental property for the coupling of carbon and water cycles in plants and ecosystems. Existing model formulations predicting this variable differ in the type of response of WUE to the atmospheric vapor pressure deficit of water (VPD). We tested a representative WUE model on the ecosystem scale at 110 eddy covariance sites of the FLUXNET initiative by predicting evapotranspiration (ET) based on gross primary productivity (GPP) and VPD. We found that introducing an intercept term in the formulation increases model performance considerably, indicating that an additional factor needs to be considered. We demonstrate that this intercept term varies seasonally and we subsequently associate it with radiation. Replacing the constant intercept term with a linear function of global radiation was found to further improve model predictions of ET. Our new semiempirical ecosystem WUE formulation indicates that, averaged over all sites, this radiation term accounts for up to half (39-47%) of transpiration. These empirical findings challenge the current understanding of water-use efficiency on the ecosystem scale.
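The model comparison can be mimicked with ordinary least squares. In the hedged Python sketch below, ET is regressed on a GPP·√VPD term with and without a radiation-dependent intercept; the √VPD exponent, units, and synthetic fluxes are assumptions, not the paper's exact published formulation.

```python
# Hedged sketch: does adding a radiation term to a WUE-type ET model reduce error?
import numpy as np

rng = np.random.default_rng(6)
n = 365
gpp = np.clip(rng.normal(6, 2, n), 0, None)         # synthetic GPP
vpd = np.clip(rng.normal(10, 4, n), 0.5, None)      # synthetic VPD
rg = np.clip(rng.normal(180, 60, n), 0, None)       # synthetic global radiation
et = gpp * np.sqrt(vpd) / 22.0 + 0.004 * rg + rng.normal(0, 0.15, n)

# Fit ET = a*GPP*sqrt(VPD) versus ET = a*GPP*sqrt(VPD) + b*Rg by least squares.
x1 = (gpp * np.sqrt(vpd))[:, None]
x2 = np.column_stack([gpp * np.sqrt(vpd), rg])
a1 = np.linalg.lstsq(x1, et, rcond=None)[0]
a2 = np.linalg.lstsq(x2, et, rcond=None)[0]

for name, pred in [("GPP*sqrt(VPD) only", x1 @ a1), ("with radiation term", x2 @ a2)]:
    print(name, "RMSE:", np.sqrt(np.mean((pred - et) ** 2)).round(3))
```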
NASA Astrophysics Data System (ADS)
Moore, K.; Pierson, D.; Pettersson, K.; Naden, P.; Allott, N.; Jennings, E.; Tamm, T.; Järvet, A.; Nickus, U.; Thies, H.; Arvola, L.; Järvinen, M.; Schneiderman, E.; Zion, M.; Lounsbury, D.
2004-05-01
We are applying an existing watershed model in the EU CLIME (Climate and Lake Impacts in Europe) project to evaluate the effects of weather on seasonal and annual delivery of N, P, and DOC to lakes. Model calibration is based on long-term records of weather and water quality data collected from sites in different climatic regions spread across Europe and in New York State. The overall aim of the CLIME project is to develop methods and models to support lake and catchment management under current climate conditions and make predictions under future climate scenarios. Scientists from 10 partner countries are collaborating on developing a consistent approach to defining model parameters for the Generalized Watershed Loading Functions (GWLF) model, one of a larger suite of models used in the project. An example of the approach for the hydrological portion of the GWLF model will be presented, with consideration of the balance between model simplicity, ease of use, data requirements, and realistic predictions.
METCAN: The metal matrix composite analyzer
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Murthy, Pappu L. N.
1988-01-01
Metal matrix composites (MMC) are the subject of intensive study and are receiving serious consideration for critical structural applications in advanced aerospace systems. MMC structural analysis and design methodologies are studied. Predicting the mechanical and thermal behavior and the structural response of components fabricated from MMC requires the use of a variety of mathematical models. These models relate stresses to applied forces, stress intensities at the tips of cracks to nominal stresses, buckling resistance to applied force, or vibration response to excitation forces. The extensive research in computational mechanics methods for predicting the nonlinear behavior of MMC is described. This research has culminated in the development of the METCAN (METal Matrix Composite ANalyzer) computer code.
NASA Astrophysics Data System (ADS)
Allis, Damian G.; Hakey, Patrick M.; Korter, Timothy M.
2008-10-01
The terahertz (THz, far-infrared) spectrum of 3,4-methylenedioxymethamphetamine hydrochloride (Ecstasy) is simulated using solid-state density functional theory. While a previously reported isolated-molecule calculation is noteworthy for the precision of its solid-state THz reproduction, the solid-state calculation predicts that the isolated-molecule modes account for only half of the spectral features in the THz region, with the remaining structure arising from lattice vibrations that cannot be predicted without solid-state molecular modeling. The molecular origins of the internal-mode contributions to the solid-state THz spectrum are analyzed, along with the proper treatment of the protonation state of the molecule.
Estimation of the curvature of the solid-liquid interface during Bridgman crystal growth
NASA Astrophysics Data System (ADS)
Barat, Catherine; Duffar, Thierry; Garandet, Jean-Paul
1998-11-01
An approximate solution for the solid/liquid interface curvature due to the crucible effect in crystal growth is derived from simple heat flux considerations. Numerical modelling of the problem, carried out with the finite element code FIDAP, supports the predictions of our analytical expression and allows us to identify its range of validity. Experimental interface curvatures, measured in gallium antimonide samples grown by the vertical Bridgman method, compare satisfactorily with the analytical and numerical results. Other literature data are also in fair agreement with the predictions of our models in the case where the amount of heat carried by the crucible is small compared to the overall heat flux.
Establishing best practise in the application of expert review of mutagenicity under ICH M7.
Barber, Chris; Amberg, Alexander; Custer, Laura; Dobo, Krista L; Glowienke, Susanne; Van Gompel, Jacky; Gutsell, Steve; Harvey, Jim; Honma, Masamitsu; Kenyon, Michelle O; Kruhlak, Naomi; Muster, Wolfgang; Stavitskaya, Lidiya; Teasdale, Andrew; Vessey, Jonathan; Wichard, Joerg
2015-10-01
The ICH M7 guidelines for the assessment and control of DNA reactive (mutagenic) impurities in pharmaceuticals allows for the consideration of in silico predictions in place of in vitro studies. This represents a significant advance in the acceptance of (Q)SAR models and has resulted from positive interactions between modellers, regulatory agencies and industry with a shared purpose of developing effective processes to minimise risk. This paper discusses key scientific principles that should be applied when evaluating in silico predictions with a focus on accuracy and scientific rigour that will support a consistent and practical route to regulatory submission. Copyright © 2015 Elsevier Inc. All rights reserved.
Slade, Eric P.; Becker, Kimberly D.
2014-01-01
This paper discusses the steps and decisions involved in proximal-distal economic modeling, in which social, behavioral, and academic outcomes data for children may be used to inform projections of the economic consequences of interventions. Economic projections based on proximal-distal modeling techniques may be used in cost-benefit analyses when information is unavailable for certain long term outcomes data in adulthood or to build entire cost-benefit analyses. Although examples of proximal-distal economic analyses of preventive interventions exist in policy reports prepared for governmental agencies, such analyses have rarely been completed in conjunction with research trials. The modeling decisions on which these prediction models are based are often opaque to policymakers and other end-users. This paper aims to illuminate some of the key steps and considerations involved in constructing proximal-distal prediction models and to provide examples and suggestions that may help guide future proximal-distal analyses. PMID:24337979
Kaklamanos, James; Baise, Laurie G.; Boore, David M.
2011-01-01
The ground-motion prediction equations (GMPEs) developed as part of the Next Generation Attenuation of Ground Motions (NGA-West) project in 2008 are becoming widely used in seismic hazard analyses. However, these new models are considerably more complicated than previous GMPEs, and they require several more input parameters. When employing the NGA models, users routinely face situations in which some of the required input parameters are unknown. In this paper, we present a framework for estimating the unknown source, path, and site parameters when implementing the NGA models in engineering practice, and we derive geometrically-based equations relating the three distance measures found in the NGA models. Our intent is for the content of this paper not only to make the NGA models more accessible, but also to help with the implementation of other present or future GMPEs.
Denys, S; Van Loey, A M; Hendrickx, M E
2000-01-01
A numerical heat transfer model for predicting product temperature profiles during high-pressure thawing processes was recently proposed by the authors. In the present work, the predictive capacity of the model was considerably improved by taking into account the pressure dependence of the latent heat of the test product (Tylose). The effect of pressure on the latent heat of Tylose was determined experimentally through a series of freezing experiments conducted at different pressure levels. By combining a numerical heat transfer model for freezing processes with a least-sum-of-squares optimization procedure, the corresponding latent heat at each pressure level was estimated, and the resulting pressure relation was incorporated in the original high-pressure thawing model. Excellent agreement with the experimental temperature profiles for both high-pressure freezing and thawing was observed.
NASA Astrophysics Data System (ADS)
Novelo-Casanova, D. A.; Valdés-González, C.
2008-10-01
Using pattern recognition techniques, we formulate a simple prediction rule for retrospective prediction of the three last largest eruptions of the Popocatépetl volcano, Mexico, which occurred on 23 April-30 June 1997 (Eruption 1; VEI ~ 2-3), 11 December 2000-23 January 2001 (Eruption 2; VEI ~ 3-4), and 7 June-4 September 2002 (Eruption 3; explosive dome extrusion and destruction phase). Times of Increased Probability (TIPs) were estimated from the seismicity recorded by the local seismic network from 1 January 1995 to 31 December 2005. A TIP is issued when, under our algorithm's criteria, a cluster of seismic events occurs in a temporal window several days (or weeks) prior to large volcanic activity, providing sufficient time to organize an effective alert strategy. The best predictions of the three analyzed eruptions were obtained when averaging the seismicity rate over a 5-day window with a threshold value of 12 events and declaring an alarm for 45 days. A TIP was issued about six weeks before Eruption 1; TIPs were detected about one and four weeks before Eruptions 2 and 3, respectively. In all cases, the observed TIPs would have allowed the development of an effective civil protection strategy. Although the three eruptive events were successfully predicted under these settings, the algorithm also issued one false alarm. An analysis of the epicentral and depth distribution of the local seismicity used by our prediction rule reveals that successful TIPs were issued from microearthquakes that took place below and to the southeast of the crater, whereas the seismicity that produced the false alarm was concentrated below the summit of the volcano. We conclude that recording of precursory seismicity below and southeast of the crater, together with detection of TIPs as described here, could become an important tool to predict future large eruptions at Popocatépetl. Although our model worked well for past events, its real predictive capability must be verified on future eruptive events.
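Because the rule is stated so concretely (5-day window, 12-event threshold, 45-day alarm), it can be written down directly. The Python sketch below implements one plausible reading, summing event counts over the window; the synthetic catalog is an assumption standing in for the local network data.

```python
# Hedged sketch of the stated TIP rule: when a 5-day window holds >= 12 events,
# open (or extend) a 45-day alarm.
import numpy as np

def tips(daily_counts, window=5, threshold=12, alarm_days=45):
    alarms = np.zeros(len(daily_counts), dtype=bool)
    for i in range(window - 1, len(daily_counts)):
        if daily_counts[i - window + 1 : i + 1].sum() >= threshold:
            alarms[i : i + alarm_days] = True      # declare a 45-day TIP
    return alarms

rng = np.random.default_rng(7)
counts = rng.poisson(1.0, 400)                     # synthetic daily event counts
counts[200:210] += 4                               # a synthetic precursory swarm
alarm = tips(counts)
print("alarm days:", alarm.sum(), "first alarm day:", int(np.argmax(alarm)))
```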
Human Thermal Model Evaluation Using the JSC Human Thermal Database
NASA Technical Reports Server (NTRS)
Bue, Grant; Makinen, Janice; Cognata, Thomas
2012-01-01
Human thermal modeling has considerable long-term utility to human space flight. Such models provide a tool to predict crew survivability in support of vehicle design and to evaluate crew response in untested space environments. It is to the benefit of any such model not only to collect relevant experimental data to correlate against, but also to maintain an experimental standard or benchmark for future development in a readily and rapidly searchable, software-accessible format. The human thermal database project is intended to do just that: to collect relevant data from the literature and experimentation and to store the data in a database structure for immediate and future use as a benchmark against which to judge human thermal models, in identifying model strengths and weaknesses, to support model development and improve correlation, and to statistically quantify a model's predictive quality. The human thermal database developed at the Johnson Space Center (JSC) is intended to evaluate a set of widely used human thermal models. This set includes the Wissler human thermal model, a model that has been widely used to predict the human thermoregulatory response to a variety of cold and hot environments. These models are statistically compared to the current database, which contains experiments on human subjects, primarily in air, drawn from a literature survey ranging between 1953 and 2004 and from a suited experiment recently performed by the authors, for a quantitative study of the relative strength and predictive quality of the models.
Wolf, Matthew B
2002-12-01
To show that a three-pathway pore model can describe extensive transport data in cat and rat skeletal muscle microvascular beds and in frog mesenteric microvessels, a three-pathway pore model was used to predict transport data measured in various microcirculatory preparations. The pathways consist of 4- and 24-nm-radius pore systems with a 2.5:1 ratio of hydraulic conductivities and a water-only pathway of variable conductivity. The pore sizes and relative hydraulic conductivities of the small- and large-pore systems were derived from a model fit to reflection coefficient (sigma) data in the cat hindlimb. The fraction (alpha(w)) of total hydraulic conductivity (L(p)) or hydraulic capacity (L(p)S) contributed by the water-only pathway was uniquely determined for each preparation by a fit of the three-pathway model (parameters fixed as above) to sigma data measured in that preparation. These parameter values were unchanged when the model was used to predict diffusion capacity (permeability-surface area product, P(d)S) data in the cat or rat preparations or diffusional permeability (P(d)) data in frog microvessels. The values for L(p) or L(p)S used to predict diffusional data in each preparation were taken from the literature. Predictions of P(d) ratios for solute pairs were also compared with experimental data. The three-pathway model closely predicted the trend of P(d)S or P(d) experimental data in all three preparations; in general, predicted P(d) ratios for paired solutes were quite similar to experimental data. For these comparisons, the only parameter varied between preparations was alpha(w), which varied considerably: 7, 16, and 41% of the total in the frog, rat, and cat preparations, respectively. Individual P(d)S or P(d) experimental data were closely predicted in the cat but somewhat overestimated in the frog and rat. This result could be due to the use of L(p) or L(p)S values in the model that were affected by methodological problems. Calculated hydraulic conductivities of the water-only pathway in the three preparations were quite similar. These results support the hypothesis of a common structure of the transmembrane pathways in these three very different microcirculatory preparations. What varies considerably between them is the total number of solute-conducting pathways, not their dimensions or the hydraulic conductivities of their water-only pathways. Because of the wide variation of alpha(w) among these preparations, the ratio of P(d) to L(p) for any solute is not constant, but the deviation from constancy may not be detectable given the errors in the experimental data.
Tests and comparisons of gravity models.
NASA Technical Reports Server (NTRS)
Marsh, J. G.; Douglas, B. C.
1971-01-01
Optical observations of the GEOS satellites were used to obtain orbital solutions with different sets of geopotential coefficients. The solutions were compared before and after modification to high order terms (necessary because of resonance) and were then analyzed by comparing subsequent observations with predicted trajectories. The most important source of error in orbit determination and prediction for the GEOS satellites is the effect of resonance found in most published sets of geopotential coefficients. Modifications to the sets yield greatly improved orbits in most cases. The results of these comparisons suggest that with the best optical tracking systems and gravity models, satellite position error due to gravity model uncertainty can reach 50-100 m during a heavily observed 5-6 day orbital arc. If resonant coefficients are estimated, the uncertainty is reduced considerably.
Multiannual forecasting of seasonal influenza dynamics reveals climatic and evolutionary drivers.
Axelsen, Jacob Bock; Yaari, Rami; Grenfell, Bryan T; Stone, Lewi
2014-07-01
Human influenza occurs annually in most temperate climatic zones of the world, with epidemics peaking in the cold winter months. Considerable debate surrounds the relative role of epidemic dynamics, viral evolution, and climatic drivers in driving year-to-year variability of outbreaks. The ultimate test of understanding is prediction; however, existing influenza models rarely forecast beyond a single year at best. Here, we use a simple epidemiological model to reveal multiannual predictability based on high-quality influenza surveillance data for Israel; the model fit is corroborated by simple metapopulation comparisons within Israel. Successful forecasts are driven by temperature, humidity, antigenic drift, and immunity loss. Essentially, influenza dynamics are a balance between large perturbations following significant antigenic jumps, interspersed with nonlinear epidemic dynamics tuned by climatic forcing.
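A generic illustration of the named ingredients, seasonal climatic forcing, antigenic jumps, and immunity loss, can be written as a daily-step SIRS simulation; the Python sketch below is a hedged toy with invented parameters, not the authors' fitted model.

```python
# Hedged SIRS toy: sinusoidal transmission forcing plus rare antigenic jumps that
# return part of the recovered pool to the susceptible pool.
import numpy as np

rng = np.random.default_rng(8)
N = 1e6
beta0, amp, gamma, wane = 0.5, 0.35, 1 / 4.0, 1 / (3 * 365)
S, I, R = N - 100.0, 100.0, 0.0
peak_days = 0
for day in range(8 * 365):
    beta = beta0 * (1 + amp * np.cos(2 * np.pi * day / 365))  # winter-peaked forcing
    if rng.random() < 1 / (2 * 365):                          # rare antigenic jump
        S, R = S + 0.5 * R, 0.5 * R
    new_inf = beta * S * I / N                                # Euler step, dt = 1 day
    rec = gamma * I
    lost = wane * R
    S, I, R = S - new_inf + lost, I + new_inf - rec, R + rec - lost
    peak_days += I > 0.01 * N
print("days with more than 1% of the population infectious:", int(peak_days))
```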
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing options evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses similar to the ones presented here should be employed. PMID:26679833
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Hugh D.; Eisfeld, Amie J.; Sims, Amy
Respiratory infections stemming from influenza viruses and the Severe Acute Respiratory Syndrome coronavirus (SARS-CoV) represent a serious public health threat as emerging pandemics. Despite efforts to identify the critical interactions of these viruses with host machinery, the key regulatory events that lead to disease pathology remain poorly targeted with therapeutics. Here we implement an integrated network interrogation approach, in which proteome and transcriptome datasets from infection of both viruses in human lung epithelial cells are utilized to predict regulatory genes involved in the host response. We take advantage of a novel "crowd-based" approach to identify and combine ranking metrics that isolate genes/proteins likely related to the pathogenicity of SARS-CoV and influenza virus. Subsequently, a multivariate regression model is used to compare predicted lung epithelial regulatory influences with data derived from other respiratory virus infection models. We predicted a small set of regulatory factors with conserved behavior for consideration as important components of viral pathogenesis that might also serve as therapeutic targets for intervention. Our results demonstrate the utility of integrating diverse 'omic datasets to predict and prioritize regulatory features conserved across multiple pathogen infection models.
The acceptance of in silico models for REACH: Requirements, barriers, and perspectives
2011-01-01
In silico models have prompted considerable interest and debate because of their potential value in predicting the properties of chemical substances for regulatory purposes. The European REACH legislation promotes innovation and encourages the use of alternative methods, but in practice the use of in silico models is still very limited. There are many stakeholders influencing the regulatory trajectory of quantitative structure-activity relationships (QSAR) models, including regulators, industry, model developers and consultants. Here we outline some of the issues and challenges involved in the acceptance of these methods for regulatory purposes. PMID:21982269
Predicting life satisfaction of the Angolan elderly: a structural model.
Gutiérrez, M; Tomás, J M; Galiana, L; Sancho, P; Cebrià, M A
2013-01-01
Satisfaction with life is of particular interest in the study of old age well-being because it has emerged as an important component of well-being in old age. A considerable amount of research has been done to explain life satisfaction in the elderly, and there is growing empirical evidence on the best predictors of life satisfaction. This research evaluates the predictive power of several aging process variables on Angolan elderly people's life satisfaction, while including perceived health in the model. Data for this research come from a cross-sectional survey of elderly people living in the capital of Angola, Luanda. A total of 1003 Angolan elderly were surveyed on socio-demographic information, perceived health, active engagement, generativity, and life satisfaction. A Multiple Indicators Multiple Causes model was built to test the variables' predictive power on life satisfaction. The estimated theoretical model fitted the data well. The main predictors were those related to active engagement with others. Perceived health also had a significant and positive effect on life satisfaction. Several processes together may predict life satisfaction in the elderly population of Angola, and the variance accounted for is large enough to be considered relevant. The key factor associated with life satisfaction seems to be active engagement with others.
Taylor, Zeike A; Kirk, Thomas B; Miller, Karol
2007-10-01
The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consisted of a constitutive model capable of incorporating confocal image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques which allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results, it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.
Real-time emissions from construction equipment compared with model predictions.
Heidari, Bardia; Marr, Linsey C
2015-02-01
The construction industry is a large source of greenhouse gases and other air pollutants. Measuring and monitoring real-time emissions will provide practitioners with information to assess environmental impacts and improve the sustainability of construction. We employed a portable emission measurement system (PEMS) for real-time measurement of carbon dioxide (CO2), nitrogen oxides (NOx), hydrocarbon, and carbon monoxide (CO) emissions from construction equipment to derive emission rates (mass of pollutant emitted per unit time) and emission factors (mass of pollutant emitted per unit volume of fuel consumed) under real-world operating conditions. Measurements were compared with emissions predicted by methodologies used in three models: NONROAD2008, OFFROAD2011, and a modal statistical model. Measured emission rates agreed with model predictions for some pieces of equipment but were up to 100 times lower for others. Much of the difference was driven by lower fuel consumption rates than predicted. Emission factors during idling and hauling were significantly different from each other and from those of other moving activities, such as digging and dumping. It appears that operating conditions introduce considerable variability in emission factors. Results of this research will aid researchers and practitioners in improving current emission estimation techniques, frameworks, and databases.
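For readers unfamiliar with the two metrics being compared, the following toy calculation (all numbers made up, not PEMS measurements) shows how a time-based emission rate converts into a fuel-based emission factor:

```python
# Hypothetical PEMS readings for one piece of equipment (illustrative only)
co2_rate_g_per_s = 2.5        # mass of CO2 emitted per second [g/s]
fuel_rate_l_per_h = 9.0       # measured fuel consumption [L/h]

# Emission rate: mass of pollutant per unit time
emission_rate_g_per_h = co2_rate_g_per_s * 3600.0

# Emission factor: mass of pollutant per unit volume of fuel consumed
emission_factor_g_per_l = emission_rate_g_per_h / fuel_rate_l_per_h

print(f"emission rate  : {emission_rate_g_per_h:.0f} g/h")
print(f"emission factor: {emission_factor_g_per_l:.0f} g/L")
```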
Breen, Michael; Xu, Yadong; Schneider, Alexandra; Williams, Ronald; Devlin, Robert
2018-06-01
Air pollution epidemiology studies of ambient fine particulate matter (PM2.5) often use outdoor concentrations as exposure surrogates, which can induce exposure error. The goal of this study was to improve ambient PM2.5 exposure assessments for a repeated measurements study with 22 diabetic individuals in central North Carolina called the Diabetes and Environment Panel Study (DEPS) by applying the Exposure Model for Individuals (EMI), which predicts five tiers of individual-level exposure metrics for ambient PM2.5 using outdoor concentrations, questionnaires, weather, and time-location information. Using EMI, we linked a mechanistic air exchange rate (AER) model to a mass-balance PM2.5 infiltration model to predict residential AER (Tier 1), infiltration factors (Finf_home, Tier 2), indoor concentrations (Cin, Tier 3), personal exposure factors (Fpex, Tier 4), and personal exposures (E, Tier 5) for ambient PM2.5. We applied EMI to predict daily PM2.5 exposure metrics (Tiers 1-5) for 174 participant-days across the 13 months of DEPS. Individual model predictions were compared to a subset of daily measurements of Fpex and E (Tiers 4-5) from the DEPS participants. Model-predicted Fpex and E corresponded well to daily measurements, with median differences of 14% and 23%, respectively. Daily model predictions for all 174 days showed considerable temporal and house-to-house variability of AER, Finf_home, and Cin (Tiers 1-3), and person-to-person variability of Fpex and E (Tiers 4-5). Our study demonstrates the capability of predicting individual-level ambient PM2.5 exposure metrics for an epidemiological study, in support of improving risk estimation. Copyright © 2018. Published by Elsevier B.V.
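A minimal steady-state sketch of how such a tiered chain of exposure metrics can be composed is given below; the penetration, deposition, AER, and time-activity values are assumed placeholders, and EMI itself uses dynamic, individual-level inputs rather than this simplified closed form:

```python
# Sketch of a steady-state version of the tiered approach (assumed, simplified):
# Tier 1: air exchange rate (AER); Tier 2: infiltration factor;
# Tier 3: indoor concentration; Tiers 4-5: personal exposure factor/exposure.
P, k_dep = 0.8, 0.4          # penetration [-], deposition rate [1/h] (assumed)
aer = 0.6                    # Tier 1: residential air exchange rate [1/h] (assumed)
c_out = 12.0                 # outdoor PM2.5 [ug/m3] (assumed)
f_time_in = 0.85             # fraction of day spent indoors (assumed)

f_inf = P * aer / (aer + k_dep)                   # Tier 2: infiltration factor
c_in = f_inf * c_out                              # Tier 3: indoor ambient PM2.5
e = f_time_in * c_in + (1 - f_time_in) * c_out    # Tier 5: personal exposure
f_pex = e / c_out                                 # Tier 4: personal exposure factor

print(f"F_inf={f_inf:.2f}, C_in={c_in:.1f} ug/m3, "
      f"F_pex={f_pex:.2f}, E={e:.1f} ug/m3")
```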
Sorting protein decoys by machine-learning-to-rank
Jing, Xiaoyang; Wang, Kai; Lu, Ruqian; Dong, Qiwen
2016-01-01
Much progress has been made in protein structure prediction during the last few decades. As predicted models can span a broad accuracy spectrum, the accuracy of quality estimation has become one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods can be roughly divided into three categories: single-model methods, clustering-based methods, and quasi-single-model methods. In this study, we first develop a single-model method, MQAPRank, based on a learning-to-rank algorithm, and then implement a quasi-single-model method, Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 datasets. Five-fold cross-validation on the 3DRobot dataset shows that the proposed single-model method outperforms the other methods whose outputs are taken as its features, and that the quasi-single-model method can further enhance performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in the corresponding categories. In particular, the Quasi-MQAPRank method achieves considerable performance on the CASP11 Best150 dataset. PMID:27530967
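A compact sketch of the pairwise learning-to-rank idea underlying such quality-assessment methods is shown below, using synthetic decoy features and a linear SVM on feature differences; this is a generic stand-in, not the MQAPRank implementation:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic decoy set: 100 models x 5 quality features, with a latent true score
X = rng.normal(size=(100, 5))
true_score = X @ np.array([0.9, 0.5, 0.1, -0.3, 0.7]) + rng.normal(0, 0.1, 100)

# Pairwise transform: classify sign(score_i - score_j) from (x_i - x_j)
pairs, labels = [], []
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        pairs.append(X[i] - X[j])
        labels.append(1 if true_score[i] > true_score[j] else -1)

clf = LinearSVC(max_iter=20000).fit(np.array(pairs), np.array(labels))

# Rank decoys by the learned scoring direction w.x
ranking = np.argsort(-(X @ clf.coef_.ravel()))
print("top-5 decoys by learned rank:", ranking[:5])
```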
Predicting early cognitive decline in newly-diagnosed Parkinson's patients: A practical model.
Hogue, Olivia; Fernandez, Hubert H; Floden, Darlene P
2018-06-19
To create a multivariable model to predict early cognitive decline among de novo patients with Parkinson's disease, using brief, inexpensive assessments that are easily incorporated into clinical flow. Data for 351 drug-naïve patients diagnosed with idiopathic Parkinson's disease were obtained from the Parkinson's Progression Markers Initiative. Baseline demographic, disease history, motor, and non-motor features were considered as candidate predictors. Best subsets selection was used to determine the multivariable baseline symptom profile that most accurately predicted individual cognitive decline within three years. Eleven percent of the sample experienced cognitive decline. The final logistic regression model predicting decline included five baseline variables: verbal memory retention, right-sided bradykinesia, years of education, subjective report of cognitive impairment, and REM behavior disorder. Model discrimination was good (optimism-adjusted concordance index = .749). The associated nomogram provides a tool to determine an individual patient's risk of meaningful cognitive change in the early stages of the disease. Through the consideration of easily implemented or routinely gathered assessments, we have identified a multidimensional baseline profile and created a convenient, inexpensive tool to predict cognitive decline in the earliest stages of Parkinson's disease. The use of this tool would generate prediction at the individual level, allowing clinicians to tailor medical management for each patient and identify at-risk patients for clinical trials aimed at disease-modifying therapies. Copyright © 2018. Published by Elsevier Ltd.
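The sketch below illustrates the general workflow of fitting a five-predictor logistic model and reporting an apparent concordance index (equivalent to ROC AUC for a binary outcome); the predictors and data are simulated stand-ins for the study's variables, and no optimism adjustment is performed:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 351
# Hypothetical stand-ins for the five baseline predictors
# (memory retention, bradykinesia, education, subjective report, RBD)
X = rng.normal(size=(n, 5))
logit = -2.6 + X @ np.array([-0.8, 0.6, -0.4, 0.5, 0.7])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # event rate roughly ~11%

model = LogisticRegression().fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"apparent concordance index: {c_index:.3f}")
```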
Garcia Lopez, Sebastian; Kim, Philip M.
2014-01-01
Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
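A minimal sketch of the general strategy, combining energy-like terms with a conservation feature in a gradient-boosted regressor and scoring held-out correlation, is given below on synthetic data; it is not the ELASPIC feature set or its SGB-DT configuration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 500
energy_terms = rng.normal(size=(n, 3))    # e.g., vdW, electrostatics, solvation
conservation = rng.uniform(0, 1, (n, 1))  # per-position sequence conservation
X = np.hstack([energy_terms, conservation])
# Synthetic stability change [kcal/mol] driven by both kinds of features
ddg = (energy_terms @ np.array([0.6, 0.3, 0.4])
       + 1.5 * conservation.ravel() + rng.normal(0, 0.3, n))

gbr = GradientBoostingRegressor(random_state=0).fit(X[:400], ddg[:400])
r, _ = pearsonr(ddg[400:], gbr.predict(X[400:]))
print(f"held-out Pearson r: {r:.2f}")
```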
Predicting drug-induced liver injury in human with Naïve Bayes classifier approach.
Zhang, Hui; Ding, Lan; Zou, Yi; Hu, Shui-Qing; Huang, Hai-Guo; Kong, Wei-Bao; Zhang, Ji
2016-10-01
Drug-induced liver injury (DILI) is one of the major safety concerns in drug development. Although various toxicological studies assessing DILI risk have been developed, these methods have not been sufficient for predicting DILI in humans. Thus, developing new tools and approaches to better predict DILI risk in humans has become an important and urgent task. In this study, we aimed to develop a computational model for assessment of DILI risk using a larger-scale human dataset and a Naïve Bayes classifier. The established Naïve Bayes prediction model was evaluated by 5-fold cross-validation and an external test set. For the training set, the overall prediction accuracy of the 5-fold cross-validation was 94.0 %. The sensitivity, specificity, positive predictive value and negative predictive value were 97.1, 89.2, 93.5 and 95.1 %, respectively. For the test set, the concordance was 72.6 %, with sensitivity of 72.5 %, specificity of 72.7 %, positive predictive value of 80.4 %, and negative predictive value of 63.2 %. Furthermore, some important molecular descriptors related to DILI risk and some toxic/non-toxic fragments were identified. Thus, we hope the prediction model established here can be employed for the assessment of human DILI risk, and that the obtained molecular descriptors and substructures will be taken into consideration in the design of new candidate compounds to help medicinal chemists rationally select the chemicals with the best prospects to be effective and safe.
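The following sketch reproduces the general protocol on synthetic binary fingerprints: a Bernoulli Naïve Bayes classifier evaluated by 5-fold cross-validation, with sensitivity and specificity derived from the confusion matrix. All data here are simulated, not the study's human dataset:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(3)
n, d = 400, 50
X = rng.binomial(1, 0.3, size=(n, d))      # binary structural fingerprints (synthetic)
w = rng.normal(0, 1, d)
y = (X @ w + rng.normal(0, 1, n) > 0).astype(int)   # 1 = DILI-positive (synthetic)

y_hat = cross_val_predict(BernoulliNB(), X, y, cv=5)
tn, fp, fn, tp = confusion_matrix(y, y_hat).ravel()
print(f"accuracy   : {(tp + tn) / n:.3f}")
print(f"sensitivity: {tp / (tp + fn):.3f}")
print(f"specificity: {tn / (tn + fp):.3f}")
```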
NASA Astrophysics Data System (ADS)
Wang, Guiling
2005-12-01
This study examines the impact of greenhouse gas warming on soil moisture based on predictions of 15 global climate models by comparing the after-stabilization climate in the SRESA1b experiment with the pre-industrial control climate. The models are consistent in predicting summer dryness and winter wetness in only part of the northern middle and high latitudes. Slightly over half of the models predict year-round wetness in central Eurasia and/or year-round dryness in Siberia and mid-latitude Northeast Asia. One explanation offered relates such lack of seasonality to the carryover effect of soil moisture storage from season to season. In the tropics and subtropics, a decrease of soil moisture is the dominant response. The models are especially consistent in predicting drier soil over southwest North America, Central America, the Mediterranean, Australia, and South Africa in all seasons, and over much of the Amazon and West Africa in the June-July-August (JJA) season and the Asian monsoon region in the December-January-February (DJF) season. Since the only major areas of future wetness predicted with a high level of model consistency are part of the northern middle and high latitudes during the non-growing season, it is suggested that greenhouse gas warming will cause a worldwide agricultural drought. Over regions where there is considerable consistency among the analyzed models in predicting the sign of soil moisture changes, there is a wide range of magnitudes of the soil moisture response, indicating a high degree of model dependency in terrestrial hydrological sensitivity. A major part of the inter-model differences in the sensitivity of the soil moisture response is attributable to differences in land surface parameterization.
ERIC Educational Resources Information Center
Traxler, Matthew J.
2009-01-01
An eye-movement monitoring experiment investigated readers' response to temporarily ambiguous sentences. The sentences were ambiguous because a relative clause could attach to one of two preceding nouns. Semantic information disambiguated the sentences. Working memory considerations predict an overall preference for the second of the two nouns, as…
ERIC Educational Resources Information Center
Fynn, Angelo
2016-01-01
The prediction and classification of student performance has always been a central concern within higher education institutions. It is therefore natural for higher education institutions to harvest and analyse student data to inform decisions on education provision in resource constrained South African environments. One of the drivers for the use…
ERIC Educational Resources Information Center
González-Brenes, José P.; Huang, Yun
2015-01-01
Classification evaluation metrics are often used to evaluate adaptive tutoring systems--programs that teach and adapt to humans. Unfortunately, it is not clear how intuitive these metrics are for practitioners with little machine learning background. Moreover, our experiments suggest that existing convention for evaluating tutoring systems may…
Predictive toxicity models (in vitro to in vivo, QSAR, read-across) rely on large amounts of accurate in vivo data. Here, we analyze the quality of in vivo data from the Toxicity Reference Database (ToxRefDB), using chemical-induced anemia as an example. Considerations include v...
ERIC Educational Resources Information Center
Ulriksen, Robin; Sagatun, Åse; Zachrisson, Henrik Daae; Waaktaar, Trine; Lervåg, Arne Ola
2015-01-01
Social support and socioeconomic status (SES) have received considerable attention in explaining academic achievement and the achievement gap between students with ethnic majority and immigrant backgrounds, and between boys and girls. Using a Structural Equation Modeling approach we examine (1) whether there exists a gap in school achievements between…
Patrick A. Zollner; L. Jay Roberts; Eric J. Gustafson; Hong S. He; Volker Radeloff
2008-01-01
Incorporating an ecosystem management perspective into forest planning requires consideration of the impacts of timber management on a suite of landscape characteristics at broad spatial and long temporal scales. We used the LANDIS forest landscape simulation model to predict forest composition and landscape pattern under seven alternative forest management plans...
Evaluation of theoretical and empirical water vapor sorption isotherm models for soils
NASA Astrophysics Data System (ADS)
Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.
2016-01-01
The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general all investigated models described the measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models, partly due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy the predicted isotherms did not closely match the measurements.
Frappier, Vincent; Najmanovich, Rafael J.
2014-01-01
Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of potential energy functional form. There is a trade-off between speed and accuracy in different choices. At one extreme are accurate but slow molecular-dynamics-based methods with all-atom representations and detailed atom potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function including a pairwise atom-type non-bonded interaction term, and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations. PMID:24762569
Modeling the reversible, diffusive sink effect in response to transient contaminant sources.
Zhao, D; Little, J C; Hodgson, A T
2002-09-01
A physically based diffusion model is used to evaluate the sink effect of diffusion-controlled indoor materials and to predict the transient contaminant concentration in indoor air in response to several time-varying contaminant sources. For simplicity, it is assumed that the predominant indoor material is a homogeneous slab, initially free of contaminant, and that the air within the room is well mixed. The model enables transient volatile organic compound (VOC) concentrations to be predicted based on the material/air partition coefficient (K) and the material-phase diffusion coefficient (D) of the sink. Model predictions are made for three scenarios, each mimicking a realistic situation in a building. Styrene, phenol, and naphthalene are used as representative VOCs. A styrene butadiene rubber (SBR) backed carpet, vinyl flooring (VF), and a polyurethane foam (PUF) carpet cushion are considered as typical indoor sinks. In scenarios involving a sinusoidal VOC input and a double-exponential decaying input, the model predicts that the sink has a modest impact for SBR/styrene, but the effect increases for VF/phenol and PUF/naphthalene. In contrast, for an episodic chemical spill, SBR is predicted to reduce the peak styrene concentration considerably. A parametric study reveals that for systems involving a large equilibrium constant (K), the kinetic constant (D) will govern the shape of the resulting gas-phase concentration profile. On the other hand, for systems with a relaxed mass transfer resistance, K will dominate the profile.
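A lumped-parameter sketch of such a reversible sink is given below: a well-mixed room with a decaying source exchanges contaminant with a slab characterized by K and an effective film coefficient derived from D. All geometry and property values are assumed, and the paper solves the full diffusion equation in the slab rather than this two-compartment simplification:

```python
import numpy as np

# Lumped approximation of a diffusion-controlled slab sink (sketch only)
V, Q = 30.0, 15.0            # room volume [m3], ventilation flow [m3/h] (assumed)
A, L = 10.0, 0.01            # sink area [m2] and slab thickness [m] (assumed)
K, D = 5000.0, 1e-5          # partition [-] and diffusion [m2/h] coeff. (assumed)
km = D / (L / 2)             # effective film mass-transfer coefficient [m/h]

dt, t_end = 0.01, 48.0       # time step and horizon [h]
C, Cs = 0.0, 0.0             # gas-phase and slab-average concentrations [ug/m3]
out = []
for i in range(int(t_end / dt)):
    t = i * dt
    S = 2000.0 * np.exp(-0.5 * t)        # exponentially decaying source [ug/h]
    flux = km * A * (C - Cs / K)         # sorption flux into the slab [ug/h]
    C += dt * (S / V - Q * C / V - flux / V)
    Cs += dt * flux / (A * L)
    out.append(C)

print(f"peak gas-phase conc.: {max(out):.1f} ug/m3, at 48 h: {out[-1]:.2f} ug/m3")
```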
DOE Office of Scientific and Technical Information (OSTI.GOV)
Branstator, Grant
The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing – for example, changes in greenhouse gas concentration. Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include: • Development of novel methods for estimating the predictability of climate forecast models. • Quantification of the initial value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models. These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content. • Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections. • Estimation of the predictability of dominant patterns of ocean variability, including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years. • Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.
Williams, Richard AJ; Peterson, A Townsend
2009-01-01
Background The emerging highly pathogenic avian influenza strain H5N1 ("HPAI-H5N1") has spread broadly in the past decade, and is now the focus of considerable concern. We tested the hypothesis that spatial distributions of HPAI-H5N1 cases are related consistently and predictably to coarse-scale environmental features in the Middle East and northeastern Africa. We used ecological niche models to relate virus occurrences to 8 km resolution digital data layers summarizing parameters of monthly surface reflectance and landform. Predictive challenges included a variety of spatial stratification schemes in which models were challenged to predict case distributions in broadly unsampled areas. Results In almost all tests, HPAI-H5N1 cases were indeed occurring under predictable sets of environmental conditions: cases were generally predicted absent from areas with low NDVI values and minimal seasonal variation, and present in areas with a broad range of, and appreciable seasonal variation in, NDVI values. Although we documented significant predictive ability of our models, even between our study region and West Africa, case occurrences in the Arabian Peninsula appear to follow a distinct environmental regime. Conclusion Overall, we documented a variable environmental "fingerprint" for areas suitable for HPAI-H5N1 transmission. PMID:19619336
Stojanova, Daniela; Ceci, Michelangelo; Malerba, Donato; Dzeroski, Saso
2013-09-26
Ontologies and catalogs of gene functions, such as the Gene Ontology (GO) and MIPS-FUN, assume that functional classes are organized hierarchically, that is, general functions include more specific ones. This has recently motivated the development of several machine learning algorithms for gene function prediction that leverage this hierarchical organization, where instances may belong to multiple classes. In addition, it is possible to exploit relationships among examples, since it is plausible that related genes tend to share functional annotations. Although these relationships have been identified and extensively studied in the area of protein-protein interaction (PPI) networks, they have not received much attention in hierarchical and multi-class gene function prediction. Relations between genes introduce autocorrelation in functional annotations and violate the assumption that instances are independently and identically distributed (i.i.d.), which underlies most machine learning algorithms. Although the explicit consideration of these relations brings additional complexity to the learning process, we expect substantial benefits in predictive accuracy of learned classifiers. This article demonstrates the benefits (in terms of predictive accuracy) of considering autocorrelation in multi-class gene function prediction. We develop a tree-based algorithm for considering network autocorrelation in the setting of Hierarchical Multi-label Classification (HMC). We empirically evaluate the proposed algorithm, called NHMC (Network Hierarchical Multi-label Classification), on 12 yeast datasets using each of the MIPS-FUN and GO annotation schemes and exploiting 2 different PPI networks. The results clearly show that taking autocorrelation into account improves the predictive performance of the learned models for predicting gene function. Our newly developed method for HMC takes into account network information in the learning phase: when used for gene function prediction in the context of PPI networks, the explicit consideration of network autocorrelation increases the predictive performance of the learned models. Overall, we found that this holds for different gene features/descriptions, functional annotation schemes, and PPI networks: best results are achieved when the PPI network is dense and contains a large proportion of function-relevant interactions.
Heinemeyer, Andreas; Swindles, Graeme T
2018-05-08
Peatlands represent globally significant soil carbon stores that have been accumulating for millennia under water-logged conditions. However, deepening water-table depths (WTD) from climate change or human-induced drainage could stimulate decomposition, resulting in peatlands turning from carbon sinks to carbon sources. Contemporary WTD ranges of testate amoebae (TA) are commonly used to predict past WTD in peatlands using quantitative transfer function models. Here we present, for the first time, a study comparing TA-based WTD reconstructions to instrumentally monitored WTD and hydrological model predictions using the MILLENNIA peatland model to examine past peatland responses to climate change and land management. Although there was very good agreement between monitored and modeled WTD, the TA-reconstructed water table was consistently deeper. Predictions from a larger European TA transfer function data set were wetter, but the overall directional fit to observed WTD was better for a TA transfer function based on data from northern England. We applied a regression-based offset correction to the reconstructed WTD for the validation period (1931-2010). We then predicted WTD using available climate records as MILLENNIA model input and compared the offset-corrected TA reconstruction to MILLENNIA WTD predictions over an extended period (1750-1931) with available climate reconstructions. Although the comparison revealed striking similarities in predicted overall WTD patterns, particularly for a recent drier period (1965-1995), there were clear periods when TA-based WTD predictions underestimated (i.e. drier during 1830-1930) and overestimated (i.e. wetter during 1760-1830) past WTD compared to MILLENNIA model predictions. Importantly, simulated grouse moor management scenarios may explain the drier TA WTD predictions, resulting in considerable model-predicted carbon losses and reduced methane emissions, mainly due to drainage. This study demonstrates the value of a site-specific and combined data-model validation step toward using TA-derived moisture conditions to understand past climate-driven peatland development and carbon budgets alongside modeling likely management impacts. © 2018 The Authors. Global Change Biology Published by John Wiley & Sons Ltd.
MODELING THE AMBIENT CONDITION EFFECTS OF AN AIR-COOLED NATURAL CIRCULATION SYSTEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Lisowski, Darius D.; Bucknor, Matthew
The Reactor Cavity Cooling System (RCCS) is a passive safety concept under consideration for the overall safety strategy of advanced reactors such as the High Temperature Gas-Cooled Reactor (HTGR). One such variant, the air-cooled RCCS, uses natural convection to drive the flow of air from outside the reactor building to remove decay heat during normal operation and accident scenarios. The Natural convection Shutdown heat removal Test Facility (NSTF) at Argonne National Laboratory (“Argonne”) is a half-scale model of the primary features of one conceptual air-cooled RCCS design. The facility was constructed to carry out highly instrumented experiments to study the performance of the RCCS concept for reactor decay heat removal that relies on natural convection cooling. Parallel modeling and simulation efforts were performed to support the design, operation, and analysis of the natural convection system. Throughout the testing program, strong influences of ambient conditions were observed in the experimental data when baseline tests were repeated under the same test procedures. Thus, significant analysis efforts were devoted to gaining a better understanding of these influences and the subsequent response of the NSTF to ambient conditions. It was determined that air humidity had negligible impacts on NSTF system performance and therefore did not warrant consideration in the models. However, temperature differences between the building exterior and interior air, along with the outside wind speed, were shown to be dominant factors. Combining the stack and wind effects together, an empirical model was developed based on theoretical considerations and experimental data to correlate zero-power system flow rates with ambient meteorological conditions. Some coefficients in the model were obtained by best-fitting the experimental data. The predictive capability of the empirical model was demonstrated by applying it to a new set of experimental data. The empirical model was also implemented in the computational models of the NSTF using both the RELAP5-3D and STARCCM+ codes. Accounting for the effects of ambient conditions, simulations from both codes predicted the natural circulation flow rates very well.
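The form of such an empirical correlation might resemble the least-squares fit sketched below, combining a stack term proportional to the square root of the indoor-outdoor temperature difference with a linear wind term; the functional form and synthetic data are assumptions for illustration, not the NSTF model's actual structure or coefficients:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic zero-power observations: indoor-outdoor temperature difference [K]
# and wind speed [m/s] vs. induced natural-circulation flow rate [kg/s]
dT = rng.uniform(0, 15, 60)
wind = rng.uniform(0, 8, 60)
flow = 0.12 * np.sqrt(dT) + 0.05 * wind + rng.normal(0, 0.02, 60)

# Assumed functional form: m_dot = a*sqrt(dT) + b*U_wind, fit by least squares
A = np.column_stack([np.sqrt(dT), wind])
(a, b), *_ = np.linalg.lstsq(A, flow, rcond=None)
print(f"fitted a={a:.3f} (stack term), b={b:.3f} (wind term)")
```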
Cigarette smoking in a student sample: neurocognitive and clinical correlates.
Dinn, Wayne M; Aycicegi, Ayse; Harris, Catherine L
2004-01-01
Why do adolescents begin to smoke in the face of profound health risks and aggressive antismoking campaigns? The present study tested predictions based on two theoretical models of tobacco use in young adults: (1) the self-medication model; and (2) the orbitofrontal/disinhibition model. Investigators speculated that a significant number of smokers were self-medicating since nicotine possesses mood-elevating and hedonic properties. The self-medication model predicts that smokers will demonstrate increased rates of psychopathology relative to nonsmokers. Similarly, researchers have suggested that individuals with attention-deficit/hyperactivity disorder (ADHD) employ nicotine to enhance cognitive function. The ADHD/self-medication model predicts that smokers will perform poorly on tests of executive function and report a greater number of ADHD symptoms. A considerable body of research indicates that tobacco use is associated with several related personality traits including extraversion, impulsivity, risk taking, sensation seeking, novelty seeking, and antisocial personality features. Antisocial behavior and related personality traits as well as tobacco use may reflect, in part, a failure to effectively employ reward and punishment cues to guide behavior. This failure may reflect orbitofrontal dysfunction. The orbitofrontal/disinhibition model predicts that smokers will perform poorly on neurocognitive tasks considered sensitive to orbitofrontal dysfunction and will obtain significantly higher scores on measures of behavioral disinhibition and antisocial personality relative to nonsmokers. To test these predictions, we administered a battery of neuropsychological tests, clinical scales, and personality questionnaires to university student smokers and nonsmokers. Results did not support the self-medication model or the ADHD/self-medication model; however, findings were consistent with the orbitofrontal/disinhibition model.
Presence of indicator plant species as a predictor of wetland vegetation integrity
Stapanian, Martin A.; Adams, Jean V.; Gara, Brian
2013-01-01
We fit regression and classification tree models to vegetation data collected from Ohio (USA) wetlands to determine (1) which species best predict Ohio vegetation index of biotic integrity (OVIBI) score and (2) which species best predict high-quality wetlands (OVIBI score >75). The simplest regression tree model predicted OVIBI score based on the occurrence of three plant species: skunk-cabbage (Symplocarpus foetidus), cinnamon fern (Osmunda cinnamomea), and swamp rose (Rosa palustris). The lowest OVIBI scores were best predicted by the absence of the selected plant species rather than by the presence of other species. The simplest classification tree model predicted high-quality wetlands based on the occurrence of two plant species: skunk-cabbage and marsh-fern (Thelypteris palustris). The overall misclassification rate from this tree was 13 %. Again, low-quality wetlands were better predicted than high-quality wetlands by the absence of selected species rather than the presence of other species using the classification tree model. Our results suggest that a species’ wetland status classification and coefficient of conservatism are of little use in predicting wetland quality. A simple, statistically derived species checklist such as the one created in this study could be used by field biologists to quickly and efficiently identify wetland sites likely to be regulated as high-quality, and requiring more intensive field assessments. Alternatively, it can be used for advanced determinations of low-quality wetlands. Agencies can save considerable money by screening wetlands for the presence/absence of such “indicator” species before issuing permits.
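A sketch of the kind of shallow classification tree described above, trained on synthetic presence/absence data for the named indicator species, is shown below; the simulated rule and resulting splits are illustrative only, not the published tree:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
species = ["skunk_cabbage", "cinnamon_fern", "swamp_rose", "marsh_fern"]
X = rng.binomial(1, 0.4, size=(120, len(species)))   # presence/absence (synthetic)
# Synthetic rule: high quality mostly when skunk-cabbage or marsh-fern present
ovibi_high = ((X[:, 0] | X[:, 3]) & (rng.random(120) > 0.15)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, ovibi_high)
print(export_text(tree, feature_names=species))
```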
Orientation-dependent integral equation theory for a two-dimensional model of water
NASA Astrophysics Data System (ADS)
Urbič, T.; Vlachy, V.; Kalyuzhnyi, Yu. V.; Dill, K. A.
2003-03-01
We develop an integral equation theory that applies to strongly associating orientation-dependent liquids, such as water. In an earlier treatment, we developed a Wertheim integral equation theory (IET) that we tested against NPT Monte Carlo simulations of the two-dimensional Mercedes Benz model of water. The main approximation in the earlier calculation was an orientational averaging in the multidensity Ornstein-Zernike equation. Here we improve the theory by explicit introduction of an orientation dependence in the IET, based upon expanding the two-particle angular correlation function in orthogonal basis functions. We find that the new orientation-dependent IET (ODIET) yields a considerable improvement of the predicted structure of water, when compared to the Monte Carlo simulations. In particular, ODIET predicts more long-range order than the original IET, with hexagonal symmetry, as expected for the hydrogen bonded ice in this model. The new theoretical approximation still errs in some subtle properties; for example, it does not predict liquid water's density maximum with temperature or the negative thermal expansion coefficient.
Hastings, K L
2001-02-02
Immune-based systemic hypersensitivities account for a significant number of adverse drug reactions. There appear to be no adequate nonclinical models to predict systemic hypersensitivity to small molecular weight drugs. Although there are very good methods for detecting drugs that can induce contact sensitization, these have not been successfully adapted for prediction of systemic hypersensitivity. Several factors have made the development of adequate models difficult. The term systemic hypersensitivity encompasses many discrete immunopathologies. Each type of immunopathology presumably is the result of a specific cluster of immunologic and biochemical phenomena. Certainly other factors, such as genetic predisposition, metabolic idiosyncrasies, and concomitant diseases, further complicate the problem. Therefore, it may be difficult to find common mechanisms upon which to construct adequate models to predict specific types of systemic hypersensitivity reactions. There is some reason to hope, however, that adequate methods could be developed for at least identifying drugs that have the potential to produce signs indicative of a general hazard for immune-based reactions.
Computational substrates of social value in interpersonal collaboration.
Fareri, Dominic S; Chang, Luke J; Delgado, Mauricio R
2015-05-27
Decisions to engage in collaborative interactions require enduring considerable risk, yet provide the foundation for building and maintaining relationships. Here, we investigate the mechanisms underlying this process and test a computational model of social value to predict collaborative decision making. Twenty-six participants played an iterated trust game and chose to invest more frequently with their friends compared with a confederate or computer despite equal reinforcement rates. This behavior was predicted by our model, which posits that people receive a social value reward signal from reciprocation of collaborative decisions conditional on the closeness of the relationship. This social value signal was associated with increased activity in the ventral striatum and medial prefrontal cortex, which significantly predicted the reward parameters from the social value model. Therefore, we demonstrate that the computation of social value drives collaborative behavior in repeated interactions and provide a mechanistic account of reward circuit function instantiating this process. Copyright © 2015 the authors.
Johnson, David R.
2014-01-01
Prior research indicates a negative relationship between women’s labor force participation and fertility at the individual level in the United States, but little is known about the reasons for this relationship beyond work hours. We employed discrete event history models using panel data from the National Survey of Families and Households (N = 2,411) and found that the importance of career considerations mediates the work hours/fertility relationship. Further, fertility intentions and the importance of career considerations were more predictive of birth outcomes as women’s work hours increase. Ultimately, our findings challenge the assumption that working more hours is the direct cause for employed women having fewer children and highlight the importance of career and fertility preferences in fertility outcomes. PMID:25506189
NASA Astrophysics Data System (ADS)
Zubov, N. O.; Kaban'kov, O. N.; Yagov, V. V.; Sukomel, L. A.
2017-12-01
Wide use of natural circulation loops operating at low reduced pressures creates a real need to develop reliable methods for predicting flow regimes and friction pressure drop for two-phase flows in this region of parameters. Although water-air flows at close-to-atmospheric pressures are the most widely studied subject in the field of two-phase hydrodynamics, the problem of reliably calculating friction pressure drop can hardly be regarded as fully solved. The specific volumes of liquid differ very much from those of steam (gas) under such conditions, due to which even a small change in flow quality may cause the flow pattern to alter very significantly. Frequently made attempts to use one or another universal approach to calculating friction pressure drop in a wide range of steam quality values do not seem to be justified and yield predicted values that are poorly consistent with experimentally measured data. The article analyzes the existing methods used to calculate friction pressure drop for two-phase flows at low pressures by comparing their results with the experimentally obtained data. The advisability of elaborating calculation procedures for determining the friction pressure drop and void fraction for two-phase flows taking their pattern (flow regime) into account is demonstrated. It is shown that, for flows characterized by low reduced pressures, satisfactory results are obtained using a homogeneous model for quasi-homogeneous flows, whereas an annular flow model gives satisfactory results for flows characterized by high values of void fraction. Recommendations for making a shift from one model to another in carrying out engineering calculations are formulated and tested. By using the modified annular flow model, it is possible to obtain reliable predictions not only for the pressure gradient but also for the liquid film thickness; the consideration of droplet entrainment and deposition phenomena allows reasonable corrections to be introduced into the calculations. To the best of the authors' knowledge, this is the first time that the entrainment of droplets from the film surface has been taken into consideration in the dispersed-annular flow model.
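As a worked example of the homogeneous-model limb of such a calculation, the sketch below evaluates the frictional pressure gradient for a low-pressure steam-water flow using a McAdams mixture viscosity and a Blasius friction factor; the flow conditions are assumed, and this is textbook methodology rather than the authors' modified annular-flow model:

```python
# Homogeneous two-phase friction gradient at near-atmospheric pressure (sketch)
G, D = 500.0, 0.02          # mass flux [kg/m2/s], tube diameter [m] (assumed)
x = 0.05                    # flow quality [-] (assumed)
rho_l, rho_g = 958.0, 0.6   # water/steam densities near 1 bar [kg/m3]
mu_l, mu_g = 2.8e-4, 1.2e-5 # dynamic viscosities [Pa*s]

rho_h = 1.0 / (x / rho_g + (1 - x) / rho_l)   # homogeneous mixture density
mu_h = 1.0 / (x / mu_g + (1 - x) / mu_l)      # McAdams mixture viscosity
Re = G * D / mu_h
f = 0.079 * Re ** -0.25                       # Blasius friction factor
dpdz = 2.0 * f * G**2 / (D * rho_h)           # frictional gradient [Pa/m]
print(f"rho_h={rho_h:.1f} kg/m3, Re={Re:.2e}, dP/dz={dpdz:.0f} Pa/m")
```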
Smith, Morgan E; Singh, Brajendra K; Irvine, Michael A; Stolk, Wilma A; Subramanian, Swaminathan; Hollingsworth, T Déirdre; Michael, Edwin
2017-03-01
Mathematical models of parasite transmission provide powerful tools for assessing the impacts of interventions. Owing to complexity and uncertainty, no single model may capture all features of transmission and elimination dynamics. Multi-model ensemble modelling offers a framework to help overcome biases of single models. We report on the development of a first multi-model ensemble of three lymphatic filariasis (LF) models (EPIFIL, LYMFASIM, and TRANSFIL), and evaluate its predictive performance in comparison with that of the constituents using calibration and validation data from three case study sites, one each from the three major LF endemic regions: Africa, Southeast Asia and Papua New Guinea (PNG). We assessed the performance of the respective models for predicting the outcomes of annual MDA strategies for various baseline scenarios thought to exemplify the current endemic conditions in the three regions. The results show that the constructed multi-model ensemble outperformed the single models when evaluated across all sites. Single models that best fitted calibration data tended to do less well in simulating the out-of-sample, or validation, intervention data. Scenario modelling results demonstrate that the multi-model ensemble is able to compensate for variance between single models in order to produce more plausible predictions of intervention impacts. Our results highlight the value of an ensemble approach to modelling parasite control dynamics. However, its optimal use will require further methodological improvements as well as consideration of the organizational mechanisms required to ensure that modelling results and data are shared effectively between all stakeholders. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Effect of climate change on shoreline shifts at a straight and continuous coast
NASA Astrophysics Data System (ADS)
Rajasree, B. R.; Deo, M. C.; Sheela Nair, L.
2016-12-01
The prediction of the rate of shoreline shifts, as well as that of erosion and accretion, at a given location is traditionally done on the basis of analysis of past wave data. However, under a climate changing due to global warming, it is better done using projected wave conditions. The same is demonstrated in this work with respect to a stretch of coastline at Udupi along the west coast of India. The shoreline changes in the past are first determined with the help of historic satellite images. A numerical shoreline model is then run on the basis of wave simulations of the past 35 years as well as the future 35 years. The latter wave conditions are obtained from wind projections corresponding to a high-resolution regional climate model run for a moderate pathway of global warming. Alternatively, prediction of the changes over the future 35 years is also made by using the soft computing tool of artificial neural networks (ANN) trained with the help of past satellite images. The results indicate that the area under consideration presently undergoes considerable erosion and that this process will accelerate in future. The volume of annual sediment transport will also substantially increase over the future. The alternative computations made with the help of an ANN confirmed the future rising trend of erosion, albeit at a smaller rate than the numerically predicted one.
Farrell, Tracy L; Poquet, Laure; Dew, Tristan P; Barber, Stuart; Williamson, Gary
2012-02-01
There is a considerable need to rationalize the membrane permeability and mechanism of transport for potential nutraceuticals. The aim of this investigation was to develop a theoretical permeability equation, based on a reported descriptive absorption model, enabling calculation of the transcellular component of absorption across Caco-2 monolayers. Published data for Caco-2 permeability of 30 drugs transported by the transcellular route were correlated with the descriptors 1-octanol/water distribution coefficient (log D, pH 7.4) and size, based on molecular mass. Nonlinear regression analysis was used to derive a set of model parameters a', β', and b' with an integrated molecular mass function. The new theoretical transcellular permeability (TTP) model obtained a good fit to the published data (R² = 0.93) and predicted reasonably well (R² = 0.86) the experimental apparent permeability coefficient (P(app)) for nine non-training-set compounds reportedly transported by the transcellular route. For the first time, the TTP model was used to predict the absorption characteristics of six phenolic acids, and this original investigation was supported by in vitro Caco-2 cell mechanistic studies, which suggested that deviation of the P(app) value from the predicted transcellular permeability (P(app)(trans)) may be attributed to involvement of active uptake, efflux transporters, or paracellular flux.
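A sketch of the nonlinear-regression step is given below, with an assumed log-linear form in log D and molecular mass fitted to synthetic training data; the published TTP equation and its parameters a', β', and b' differ in detail from this stand-in:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(6)

# Synthetic training drugs: distribution coefficient and molecular mass
logD = rng.uniform(-2, 4, 30)
mass = rng.uniform(150, 600, 30)
log_papp = -5.0 + 0.45 * logD - 0.004 * mass + rng.normal(0, 0.15, 30)

# Assumed TTP-like functional form: log P_trans = a + beta*logD + b*M
def ttp(X, a, beta, b):
    logd, m = X
    return a + beta * logd + b * m

params, _ = curve_fit(ttp, (logD, mass), log_papp)
print("fitted (a, beta, b):", np.round(params, 4))

# Predict permeability for a hypothetical phenolic acid (logD=1.2, M=180)
print("predicted log Papp:", ttp((np.array([1.2]), np.array([180.0])), *params))
```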
An analytics approach to designing patient centered medical homes.
Ajorlou, Saeede; Shams, Issac; Yang, Kai
2015-03-01
Recently, the patient-centered medical home (PCMH) model has become a popular team-based approach focused on delivering more streamlined care to patients. In current practices of medical homes, a clinically based prediction framework is recommended because it can help match the portfolio capacity of PCMH teams with the actual load generated by a set of patients. Without such balance in clinical supply and demand, issues such as excessive under- and over-utilization of physicians, long waiting times for receiving the appropriate treatment, and non-continuity of care will eliminate many advantages of the medical home strategy. In this paper, by using the hierarchical generalized linear model with multivariate responses, we develop a clinical workload prediction model for care portfolio demands in a Bayesian framework. The model allows for heterogeneous variances and unstructured covariance matrices for nested random effects that arise through complex hierarchical care systems. We show that using a multivariate approach substantially enhances the precision of workload predictions at both primary and non-primary care levels. We also demonstrate that care demands depend not only on patient demographics but also on other utilization factors, such as length of stay. Our analyses of recent data from the Veterans Health Administration further indicate that risk adjustment for patient health conditions can considerably improve the prediction power of the model.
The effect of soot modeling on thermal radiation in buoyant turbulent diffusion flames
NASA Astrophysics Data System (ADS)
Snegirev, A.; Kokovina, E.; Tsoy, A.; Harris, J.; Wu, T.
2016-09-01
The radiative impact of buoyant turbulent diffusion flames is the driving force in fire development. Radiation emission and re-absorption are controlled by gaseous combustion products, mainly CO2 and H2O, and by soot. The relative contribution of gas and soot radiation depends on the fuel's sooting propensity and on the soot distribution in the flame. Soot modeling approaches incorporated in major commercial codes were developed and calibrated for momentum-dominated jet flames, and these approaches must be re-evaluated when applied to the buoyant flames occurring in fires. The purpose of this work is to evaluate the effect of the soot models available in ANSYS FLUENT on the predictions of the radiative fluxes produced by buoyant turbulent diffusion flames with considerably different soot yields. By means of large eddy simulations, we assess the capability of the Moss-Brooks soot formation model combined with two soot oxidation submodels to predict methane- and heptane-fuelled fires, for which radiative flux measurements are available in the literature. We demonstrate that soot oxidation models can be as important as soot formation models for predicting the soot yield in the overfire region. The contribution of soot to the radiation emission by the flame is also examined, and predicted radiative fluxes are compared to published experimental data.
Clerkin, Elise M; Teachman, Bethany A
2009-08-01
The current study tests cognitive-behavioral models of body dysmorphic disorder (BDD) by examining the relationship between cognitive biases and correlates of mirror gazing. To provide a more comprehensive picture, we investigated both relatively strategic (i.e., available for conscious introspection) and automatic (i.e., outside conscious control) measures of cognitive biases in a sample with either high (n = 32) or low (n = 31) BDD symptoms. Specifically, we examined the extent that (1) explicit interpretations tied to appearance, as well as (2) automatic associations and (3) strategic evaluations of the importance of attractiveness predict anxiety and avoidance associated with mirror gazing. Results indicated that interpretations tied to appearance uniquely predicted self-reported desire to avoid, whereas strategic evaluations of appearance uniquely predicted peak anxiety associated with mirror gazing, and automatic appearance associations uniquely predicted behavioral avoidance. These results offer considerable support for cognitive models of BDD, and suggest a dissociation between automatic and strategic measures.
Liu, Changhong; Liu, Wei; Chen, Wei; Yang, Jianbo; Zheng, Lei
2015-04-15
Tomato is an important health-stimulating fruit because of the antioxidant properties of its main bioactive compounds, dominantly lycopene and phenolic compounds. Nowadays, product differentiation in the fruit market requires an accurate evaluation of these value-added compounds. An experiment was conducted to simultaneously and non-destructively measure lycopene and phenolic compounds content in intact tomatoes using multispectral imaging combined with chemometric methods. Partial least squares (PLS), least squares-support vector machines (LS-SVM) and back propagation neural network (BPNN) methods were applied to develop quantitative models. Compared with PLS and LS-SVM, the BPNN model considerably improved the performance, with coefficients of determination in prediction (R²P) of 0.938 and 0.965 and residual predictive deviations (RPD) of 4.590 and 9.335 for lycopene and total phenolics content prediction, respectively. It is concluded that multispectral imaging is an attractive alternative to the standard methods for determination of bioactive compounds content in intact tomatoes, providing a useful platform for in-field fruit sorting/grading. Copyright © 2014 Elsevier Ltd. All rights reserved.
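For reference, the residual predictive deviation quoted above is simply the ratio of the standard deviation of the reference values to the prediction RMSE, as the toy computation below shows (all values invented, not the study's measurements):

```python
import numpy as np

# RPD (residual predictive deviation): SD of reference values / RMSEP;
# values above ~3 are usually taken to indicate a good calibration.
y_ref = np.array([4.1, 5.3, 6.8, 7.2, 5.9, 6.1])   # e.g., lycopene [mg/100 g]
y_hat = np.array([4.3, 5.1, 6.6, 7.4, 5.8, 6.3])   # model predictions

rmsep = np.sqrt(np.mean((y_ref - y_hat) ** 2))
rpd = np.std(y_ref, ddof=1) / rmsep
print(f"RMSEP={rmsep:.3f}, RPD={rpd:.2f}")
```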
Mishra, H; Polak, S; Jamei, M; Rostami-Hodjegan, A
2014-01-01
We aimed to investigate the application of combined mechanistic pharmacokinetic (PK) and pharmacodynamic (PD) modeling and simulation in predicting the domperidone (DOM) triggered pseudo-electrocardiogram modification in the presence of a CYP3A inhibitor, ketoconazole (KETO), using in vitro–in vivo extrapolation. In vitro metabolic and inhibitory data were incorporated into physiologically based pharmacokinetic (PBPK) models within Simcyp to simulate the time course of plasma DOM and KETO concentrations when administered alone or in combination (DOM+KETO). Simulated DOM concentrations in plasma were used to predict changes in gender-specific QTcF (Fridericia correction) intervals within the Cardiac Safety Simulator platform, taking into consideration DOM, KETO, and DOM+KETO triggered inhibition of multiple ionic currents in the population. The combination of in vitro–in vivo extrapolation, PBPK, and systems pharmacology of electric currents in the heart was able to predict the direction and magnitude of PK and PD changes under coadministration of the two drugs, although some disparities were detected. PMID:25116274
Evaporation residue cross-section measurements for 48Ti-induced reactions
NASA Astrophysics Data System (ADS)
Sharma, Priya; Behera, B. R.; Mahajan, Ruchi; Thakur, Meenu; Kaur, Gurpreet; Kapoor, Kushal; Rani, Kavita; Madhavan, N.; Nath, S.; Gehlot, J.; Dubey, R.; Mazumdar, I.; Patel, S. M.; Dhibar, M.; Hosamani, M. M.; Khushboo; Kumar, Neeraj; Shamlath, A.; Mohanto, G.; Pal, Santanu
2017-09-01
Background: A significant research effort is currently aimed at understanding the synthesis of heavy elements. For this purpose, heavy-ion-induced fusion reactions are used, and various experimental observations have indicated the influence of shell and deformation effects in compound nucleus (CN) formation. There is a need to understand these two effects. Purpose: To investigate the effect of proton shell closure and deformation through the comparison of evaporation residue (ER) cross sections for systems involving heavy compound nuclei around the Z_CN = 82 region. Methods: A systematic study of ER cross-section measurements was carried out for the 48Ti + 142,150Nd and 48Ti + 144Sm systems in the energy range of 140-205 MeV. The measurement was performed using the gas-filled mode of the hybrid recoil mass analyzer at the Inter University Accelerator Centre (IUAC), New Delhi. Theoretical calculations based on a statistical model were carried out incorporating an adjustable barrier scaling factor to fit the experimental ER cross sections. Coupled-channel calculations were also performed using the ccfull code to obtain the spin distribution of the CN, which was used as an input in the calculations. Results: Experimental ER cross sections for 48Ti + 142,150Nd were found to be considerably smaller than the statistical model predictions, whereas experimental and statistical model predictions for 48Ti + 144Sm were of comparable magnitude. Conclusion: Though the comparison of experimental ER cross sections with statistical model predictions indicates considerable non-compound-nuclear processes for the 48Ti + 142,150Nd reactions, no such evidence is found for the 48Ti + 144Sm system. Further investigations are required to understand the difference in fusion probabilities of the 48Ti + 142Nd and 48Ti + 144Sm systems.
Acoustical properties of a model rotor in nonaxial flight. [wind tunnel model noise measurements
NASA Technical Reports Server (NTRS)
Hinterkeuser, E. G.
1973-01-01
Wind tunnel measurements on model rotor blade loads and acoustical noise were correlated to a theoretical formulation of the rotational noise of a rotor in non-axial flight. Good correlation between theory and data was achieved using actual measured rotor blade pressure harmonic decay levels and lift, drag and radial force magnitudes. Both pressure and acoustic data exhibited considerable scatter in hover and low speed forward flight which resulted in a fairly wide latitude in the noise level prediction at higher harmonics.
NASA Astrophysics Data System (ADS)
Zhu, Fanglong; Zhou, Yu; Liu, Suyan
2013-10-01
In this paper, we propose a new fractal model to determine the moisture effective diffusivity of a porous membrane, such as an expanded polytetrafluoroethylene membrane, by taking into account channels both parallel and perpendicular to the diffusion flow direction. With consideration of both the Knudsen and bulk diffusion effects, a relationship between micro-structural parameters and effective moisture diffusivity is deduced. The effective moisture diffusivities predicted by the present fractal model are compared with moisture diffusion experiment data and calculated values obtained from other theoretical models.
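The abstract does not give the model's closed form, but a common way to combine Knudsen and bulk (molecular) diffusion in a single pore is the Bosanquet reciprocal-resistance approximation, scaled by membrane porosity and tortuosity. A sketch under those assumptions (all parameter values illustrative):

    import numpy as np

    def knudsen_diffusivity(pore_radius, T, M):
        # D_K = (2/3) * r * sqrt(8RT / (pi M)), SI units.
        R = 8.314  # J/(mol K)
        return (2.0 / 3.0) * pore_radius * np.sqrt(8.0 * R * T / (np.pi * M))

    def effective_diffusivity(D_bulk, pore_radius, T, M, porosity, tortuosity):
        # Bosanquet approximation: Knudsen and bulk resistances add in series.
        D_K = knudsen_diffusivity(pore_radius, T, M)
        D_pore = 1.0 / (1.0 / D_bulk + 1.0 / D_K)
        return porosity / tortuosity * D_pore

    # Water vapour in air at 300 K through ~0.1 micron membrane pores.
    print(effective_diffusivity(D_bulk=2.6e-5, pore_radius=1e-7, T=300.0,
                                M=0.018, porosity=0.8, tortuosity=2.0))

In the fractal version, porosity and tortuosity are expressed through pore-area and tortuosity fractal dimensions rather than treated as independent constants.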
Optical and X-ray radiation from fast pulsars - Effects of duty cycle and spectral shape
NASA Technical Reports Server (NTRS)
Pacini, F.; Salvati, M.
1987-01-01
The optical luminosity of PSR 0540 is considerably stronger than one would have predicted from a simple model developed earlier in which the pulses are synchrotron radiation from secondary electrons near the light cylinder. This discrepancy can be eliminated if one incorporates into the model the effects of the large duty cycle and the spectral properties of PSR 0540. It is also shown that the same model can provide a reasonable fit to the observed X-ray fluxes from fast pulsars.
NASA Astrophysics Data System (ADS)
Peng, Lanfang; Liu, Paiyu; Feng, Xionghan; Wang, Zimeng; Cheng, Tao; Liang, Yuzhen; Lin, Zhang; Shi, Zhenqing
2018-03-01
Predicting the kinetics of heavy metal adsorption and desorption in soil requires consideration of multiple heterogeneous soil binding sites and variations of reaction chemistry conditions. Although chemical speciation models have been developed for predicting the equilibrium of metal adsorption on soil organic matter (SOM) and important mineral phases (e.g. Fe and Al (hydr)oxides), there is still a lack of modeling tools for predicting the kinetics of metal adsorption and desorption reactions in soil. In this study, we developed a unified model for the kinetics of heavy metal adsorption and desorption in soil based on the equilibrium models WHAM 7 and CD-MUSIC, which specifically consider metal kinetic reactions with multiple binding sites of SOM and soil minerals simultaneously. For each specific binding site, metal adsorption and desorption rate coefficients were constrained by the local equilibrium partition coefficients predicted by WHAM 7 or CD-MUSIC, and, for each metal, the desorption rate coefficients of various binding sites were constrained by their metal binding constants with those sites. The model had only one fitting parameter for each soil binding phase, and all other parameters were derived from WHAM 7 and CD-MUSIC. A stirred-flow method was used to study the kinetics of Cd, Cu, Ni, Pb, and Zn adsorption and desorption in multiple soils under various pH and metal concentrations, and the model successfully reproduced most of the kinetic data. We quantitatively elucidated the significance of different soil components and important soil binding sites during the adsorption and desorption kinetic processes. Our model has provided a theoretical framework to predict metal adsorption and desorption kinetics, which can be further used to predict the dynamic behavior of heavy metals in soil under various natural conditions by coupling other important soil processes.
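The central constraint described above, that for each binding site the desorption rate coefficient is tied to the adsorption rate coefficient through the local equilibrium partition coefficient (k_des = k_ads / K), can be illustrated with a toy two-site kinetic sketch; the K values here are invented placeholders, whereas the paper derives them from WHAM 7 and CD-MUSIC:

    import numpy as np
    from scipy.integrate import solve_ivp

    k_ads = np.array([1e-3, 5e-4])   # fitted adsorption rate coefficients (L/umol/h)
    K_eq  = np.array([50.0, 500.0])  # site-specific equilibrium partition coefficients
    k_des = k_ads / K_eq             # desorption rates constrained by equilibrium

    def rates(t, y, c_metal):
        # y: metal bound to each site; dissolved concentration held constant,
        # mimicking a stirred-flow experiment (site capacities ignored).
        return k_ads * c_metal - k_des * y

    sol = solve_ivp(rates, (0.0, 48.0), y0=[0.0, 0.0], args=(10.0,))
    print(sol.y[:, -1])  # bound metal on each site after 48 h

Sites with larger K release metal proportionally more slowly, which is how a multi-site model of this kind captures the slow desorption tail observed for strong binding sites.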
NASA Astrophysics Data System (ADS)
Touhidul Mustafa, Syed Md.; Nossent, Jiri; Ghysels, Gert; Huysmans, Marijke
2017-04-01
Transient numerical groundwater flow models have been used to understand and forecast groundwater flow systems under anthropogenic and climatic effects, but the reliability of the predictions is strongly influenced by different sources of uncertainty. Hence, researchers in hydrological sciences are developing and applying methods for uncertainty quantification. Nevertheless, spatially distributed flow models pose significant challenges for parameter and spatially distributed input estimation and uncertainty quantification. In this study, we present a general and flexible approach for input and parameter estimation and uncertainty analysis of groundwater models. The proposed approach combines a fully distributed groundwater flow model (MODFLOW) with the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm. To avoid over-parameterization, the uncertainty of the spatially distributed model input has been represented by multipliers. The posterior distributions of these multipliers and the regular model parameters were estimated using DREAM. The proposed methodology has been applied in an overexploited aquifer in Bangladesh where groundwater pumping and recharge data are highly uncertain. The results confirm that input uncertainty does have a considerable effect on the model predictions and parameter distributions. Additionally, our approach also provides a new way to optimize the spatially distributed recharge and pumping data along with the parameter values under uncertain input conditions. It can be concluded from our approach that considering model input uncertainty along with parameter uncertainty is important for obtaining realistic model predictions and a correct estimation of the uncertainty bounds.
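The multiplier idea is straightforward to sketch: instead of estimating recharge at every cell, a single multiplier scales the uncertain input field and is sampled jointly with the model parameters. The sketch below uses a plain random-walk Metropolis sampler and a one-line stand-in for the forward model; the actual study couples the DREAM sampler to MODFLOW:

    import numpy as np

    rng = np.random.default_rng(0)

    def forward(recharge_mult, hk):
        # Stand-in for a MODFLOW run: simulated head rises with scaled
        # recharge and falls with hydraulic conductivity (illustrative only).
        return 10.0 + 5.0 * recharge_mult / hk

    obs, sigma = 12.0, 0.5  # observed head and measurement error

    def log_post(theta):
        mult, hk = theta
        if not (0.1 < mult < 3.0 and 0.1 < hk < 10.0):  # uniform priors
            return -np.inf
        return -0.5 * ((forward(mult, hk) - obs) / sigma) ** 2

    theta, chain = np.array([1.0, 1.0]), []
    for _ in range(5000):
        prop = theta + rng.normal(scale=[0.05, 0.1])
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop
        chain.append(theta)
    print(np.array(chain)[2500:].mean(axis=0))  # posterior means after burn-in

The posterior over the multiplier then doubles as an optimized estimate of the recharge and pumping inputs, which is the point made in the abstract.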
NASA Astrophysics Data System (ADS)
Wichmann, Matthias C.; Groeneveld, Jürgen; Jeltsch, Florian; Grimm, Volker
2005-07-01
Predicted climate change causes deep concern about the effects of increasing temperatures and changing precipitation patterns on species viability and, in turn, on biodiversity. Models of Population Viability Analysis (PVA) provide a powerful tool to assess the risk of species extinction. However, most PVA models do not take into account the potential effects of behavioural adaptations. Organisms might adapt to new environmental situations and thereby mitigate negative effects of climate change. To demonstrate such mitigation effects, we use an existing PVA model describing a population of the tawny eagle (Aquila rapax) in the southern Kalahari. This model does not include behavioural adaptations. We develop a new model by assuming that the birds enlarge their average territory size to compensate for lower amounts of precipitation. Here, we found the predicted increase in risk of extinction due to climate change to be much lower than in the original model. However, this "buffering" of climate change by behavioural adaptation is not very effective in coping with increasing interannual variances. We refer to further examples of ecological "buffering mechanisms" from the literature and argue that possible buffering mechanisms should be given due consideration when the effects of climate change on biodiversity are to be predicted.
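The buffering mechanism can be captured in a toy stochastic PVA: carrying capacity tracks rainfall, and the adaptive variant lets territory size expand in dry years so that capacity declines more gently. All numbers below are invented for illustration and are not the parameters of the tawny eagle model:

    import numpy as np

    rng = np.random.default_rng(1)

    def extinction_risk(mean_rain, adapt, sd_rain=80.0, years=100, runs=2000):
        extinct = 0
        for _ in range(runs):
            n = 50.0  # breeding pairs
            for _ in range(years):
                rain = max(rng.normal(mean_rain, sd_rain), 0.0)
                # Carrying capacity scales with rainfall; the square root
                # mimics territory enlargement buffering dry years.
                k = 50.0 * (rain / 300.0) ** (0.5 if adapt else 1.0)
                n = min(n * rng.normal(1.02, 0.15), max(k, 1e-9))
                if n < 2.0:
                    extinct += 1
                    break
        return extinct / runs

    # Drier climate scenario, without and with behavioural adaptation.
    print(extinction_risk(240.0, adapt=False), extinction_risk(240.0, adapt=True))

Consistent with the abstract, a buffer of this kind lowers extinction risk under a drier mean climate but offers little protection when the interannual variance itself grows.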
X-rays from the colliding wind binary WR 146
NASA Astrophysics Data System (ADS)
Zhekov, Svetozar A.
2017-12-01
The X-ray emission from the massive Wolf-Rayet binary (WR 146) is analysed in the framework of the colliding stellar wind (CSW) picture. The theoretical CSW model spectra match well the shape of the observed X-ray spectrum of WR 146, but they considerably overestimate the observed X-ray flux (emission measure). This is valid in the case of both complete temperature equalization and partial electron heating at the shock fronts (different electron and ion temperatures), although there are indications of a better correspondence between model predictions and observations for the latter. To reconcile the model predictions and observations, the mass-loss rate of WR 146 must be reduced by a factor of 8-10 compared to the currently accepted value for this object (the latter already takes clumping into account). No excess X-ray absorption is derived from the CSW modelling.
Pseudomonas aeruginosa dose response and bathing water infection.
Roser, D J; van den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A
2014-03-01
Pseudomonas aeruginosa is the opportunistic pathogen most often implicated in folliculitis and acute otitis externa in pools and hot tubs. Nevertheless, infection risks remain poorly quantified. This paper reviews disease aetiologies and bacterial skin colonization science to advance dose-response theory development. Three model forms are identified for predicting disease likelihood from pathogen density. Two are based on Furumoto & Mickey's exponential 'single-hit' model and predict infection likelihood and severity (lesions/m2), respectively. 'Third-generation', mechanistic, dose-response algorithm development is additionally scoped. The proposed formulation integrates dispersion, epidermal interaction, and follicle invasion. The review also details uncertainties needing consideration, which pertain to water quality, outbreaks, exposure time, infection sites, biofilms, cerumen, environmental factors (e.g. skin saturation, hydrodynamics), and whether P. aeruginosa is endogenous or exogenous. The review's findings are used to propose a conceptual infection model and identify research priorities, including pool dose-response modelling, epidermis ecology and infection likelihood-based hygiene management.
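Furumoto and Mickey's exponential 'single-hit' form referenced above treats each organism as acting independently, so the infection probability at dose d is P(d) = 1 - exp(-k d). A minimal sketch with an invented rate parameter:

    import numpy as np

    def p_infection(dose, k):
        # Exponential single-hit dose-response: k is the per-organism
        # probability of initiating infection.
        return 1.0 - np.exp(-k * dose)

    doses = np.array([1e2, 1e4, 1e6])  # P. aeruginosa exposure (CFU), illustrative
    print(p_infection(doses, k=1e-6))  # k here is a hypothetical fit parameter

The second model form in the review predicts severity rather than likelihood by letting the expected number of lesions per unit skin area follow the same hit logic.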
Aboulfotoh, Ahmed M
2018-03-01
Performance of continuous mesophilic high solids anaerobic digestion (HSAD) was simulated using Anaerobic Digestion Model No. 1 (ADM1), under different conditions (solids concentrations, sludge retention time (SRT), organic loading rate (OLR), and type of sludge). Implementation of ADM1, using the proposed biochemical parameters, proved to be a useful tool for the prediction and control of HSAD as the model predicted the behavior of the tested sets of data with considerable accuracy, especially for SRT more than 13 days. The model was then used to investigate the possibility of changing the existing conventional anaerobic digestion (CAD) units in Gabal El Asfar water resource recovery facility into HSAD, instead of establishing new CAD units, and results show that the system will be feasible. HSAD will produce the same bioenergy combined with a decrease in capital, operational, and maintenance costs.
Second-order near-wall turbulence closures - A review
NASA Technical Reports Server (NTRS)
So, R. M. C.; Lai, Y. G.; Zhang, H. S.; Hwang, B. C.
1991-01-01
Advances in second-order near-wall turbulence closures are summarized. All closures under consideration are based on high-Reynolds-number models. Most near-wall closures proposed to date attempt to modify the high-Reynolds-number models for the dissipation function and the pressure redistribution term so that the resultant models are applicable all the way to the wall. The asymptotic behavior of the near-wall closures is examined and compared with the proper near-wall behavior of the exact Reynolds-stress equations. It is found that three second-order near-wall closures give the best correlations with simulated turbulence statistics. However, their predictions of near-wall Reynolds-stress budgets are considered to be incorrect. A proposed modification to the dissipation-rate equation remedies part of these deficiencies. It is concluded that further improvements are required if a complete replication of all the turbulence properties and Reynolds-stress budgets by a statistical model of turbulence is desirable.
Status of Computational Aerodynamic Modeling Tools for Aircraft Loss-of-Control
NASA Technical Reports Server (NTRS)
Frink, Neal T.; Murphy, Patrick C.; Atkins, Harold L.; Viken, Sally A.; Petrilli, Justin L.; Gopalarathnam, Ashok; Paul, Ryan C.
2016-01-01
A concerted effort has been underway over the past several years to evolve computational capabilities for modeling aircraft loss-of-control under the NASA Aviation Safety Program. A principal goal has been to develop reliable computational tools for predicting and analyzing the non-linear stability & control characteristics of aircraft near stall boundaries affecting safe flight, and for utilizing those predictions for creating augmented flight simulation models that improve pilot training. Pursuing such an ambitious task with limited resources required the forging of close collaborative relationships with a diverse body of computational aerodynamicists and flight simulation experts to leverage their respective research efforts into the creation of NASA tools to meet this goal. Considerable progress has been made and work remains to be done. This paper summarizes the status of the NASA effort to establish computational capabilities for modeling aircraft loss-of-control and offers recommendations for future work.
Cord, Maximilien; Sirjean, Baptiste; Fournet, René; Tomlin, Alison; Ruiz-Lopez, Manuel; Battin-Leclerc, Frédérique
2012-06-21
This paper revisits the primary reactions involved in the oxidation of n-butane from low to intermediate temperatures (550-800 K), including the negative temperature coefficient (NTC) zone. An automatically generated model is used as a starting point, and a large number of thermochemical and kinetic data are then re-estimated. The kinetic data for the isomerization of alkylperoxy radicals giving (•)QOOH radicals and their subsequent decomposition to cyclic ethers have been calculated at the CBS-QB3 level of theory. The newly obtained model allows a satisfactory prediction of experimental data recently obtained in a jet-stirred reactor and in rapid compression machines. A considerable improvement in the prediction of the selectivity of cyclic ethers is obtained compared to previous models. Linear and global sensitivity analyses have been performed to better understand which reactions are influential in the NTC zone.
Predictability of the Lagrangian Motion in the Upper Ocean
NASA Astrophysics Data System (ADS)
Piterbarg, L. I.; Griffa, A.; Griffa, A.; Mariano, A. J.; Ozgokmen, T. M.; Ryan, E. H.
2001-12-01
The complex non-linear dynamics of the upper ocean leads to chaotic behavior of drifter trajectories in the ocean. Our study is focused on estimating the predictability limit for the position of an individual Lagrangian particle or a particle cluster based on the knowledge of mean currents and observations of nearby particles (predictors). The Lagrangian prediction problem, besides being a fundamental scientific problem, is also of great importance for practical applications such as search and rescue operations and modeling the spread of fish larvae. A stochastic multi-particle model for the Lagrangian motion has been rigorously formulated and is a generalization of the well-known "random flight" model for a single particle. Our model is mathematically consistent and includes a few easily interpreted parameters, such as the Lagrangian velocity decorrelation time scale, the turbulent velocity variance, and the velocity decorrelation radius, that can be estimated from data. The top Lyapunov exponent for an isotropic version of the model is explicitly expressed as a function of these parameters, enabling us to approximate the predictability limit to first order. Lagrangian prediction errors for two new prediction algorithms are evaluated against simple algorithms and each other, and are used to test the predictability limits of the stochastic model for isotropic turbulence. The first algorithm is based on a Kalman filter and uses the developed stochastic model. Its implementation for drifter clusters in both the Tropical Pacific and the Adriatic Sea showed good prediction skill over a period of 1-2 weeks. The prediction error is primarily a function of the data density, defined as the number of predictors within a velocity decorrelation spatial scale from the particle to be predicted. The second algorithm is model independent and is based on spatial regression considerations. Preliminary results, based on simulated as well as real data, indicate that it performs better than the Kalman-based algorithm in strong shear flows. An important component of our research is the optimal predictor location problem: where should floats be launched in order to minimize the Lagrangian prediction error? Preliminary Lagrangian sampling results for different flow scenarios will be presented.
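The single-particle "random flight" model mentioned above treats the turbulent velocity as an Ornstein-Uhlenbeck process superimposed on the mean current. A minimal Euler-Maruyama sketch with illustrative mid-ocean parameter values:

    import numpy as np

    rng = np.random.default_rng(2)

    T_L   = 2.0 * 86400.0          # Lagrangian decorrelation time (s)
    sigma = 0.15                   # turbulent velocity std deviation (m/s)
    U     = np.array([0.05, 0.0])  # mean current (m/s)
    dt    = 3600.0                 # one-hour step

    x, u = np.zeros(2), np.zeros(2)
    for _ in range(24 * 14):       # two weeks of hourly steps
        # OU velocity update: relaxation to zero plus random forcing.
        u += -u * dt / T_L + sigma * np.sqrt(2.0 * dt / T_L) * rng.normal(size=2)
        x += (U + u) * dt
    print(x / 1000.0)              # displacement (km) after two weeks

The multi-particle generalization in the study correlates the velocity fluctuations of nearby particles through the velocity decorrelation radius, which is what makes nearby drifters informative predictors.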
Dissimilarity based Partial Least Squares (DPLS) for genomic prediction from SNPs.
Singh, Priyanka; Engel, Jasper; Jansen, Jeroen; de Haan, Jorn; Buydens, Lutgarde Maria Celina
2016-05-04
Genomic prediction (GP) allows breeders to select plants and animals based on their breeding potential for desirable traits, without lengthy and expensive field trials or progeny testing. We have proposed to use Dissimilarity-based Partial Least Squares (DPLS) for GP. As a case study, we use the DPLS approach to predict Bacterial wilt (BW) in tomatoes using SNPs as predictors. The DPLS approach was compared with Genomic Best Linear Unbiased Prediction (GBLUP) and single-SNP regression with SNP as a fixed effect to assess the performance of DPLS. Eight genomic distance measures were used to quantify relationships between the tomato accessions from the SNPs. Subsequently, each of these distance measures was used to predict BW using the DPLS prediction model. The DPLS model was found to be robust to the choice of distance measure; similar prediction performances were obtained for each distance measure. DPLS greatly outperformed the single-SNP regression approach, showing that BW is a comprehensive trait dependent on several loci. Next, the performance of the DPLS model was compared to that of GBLUP. Although GBLUP and DPLS are conceptually very different, the prediction quality (PQ) measured by DPLS models was similar to the prediction statistics obtained from GBLUP. A considerable advantage of DPLS is that the genotype-phenotype relationship can easily be visualized in a 2-D scatter plot. This so-called score plot provides breeders an insight to select candidates for their future breeding programs. DPLS is a highly appropriate method for GP. The model prediction performance was similar to GBLUP and far better than the single-SNP approach. The proposed method can be used in combination with a wide range of genomic dissimilarity measures and genotype representations such as allele counts, haplotypes or allele-intensity values. Additionally, the data can be insightfully visualized by the DPLS model, allowing for selection of desirable candidates from the breeding experiments. In this study, we have assessed the DPLS performance on a single trait.
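Operationally, DPLS regresses the phenotype on the columns of a genotype dissimilarity matrix instead of on the raw SNP codes. A small synthetic sketch using scikit-learn, with a Euclidean distance standing in for the paper's eight genomic distance measures:

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(3)
    snps = rng.integers(0, 3, size=(80, 500)).astype(float)  # allele counts
    y = snps[:, :10].sum(axis=1) + rng.normal(scale=1.0, size=80)  # toy trait

    D = cdist(snps, snps)  # accession-by-accession dissimilarity matrix
    train, test = np.arange(60), np.arange(60, 80)

    pls = PLSRegression(n_components=5)
    pls.fit(D[np.ix_(train, train)], y[train])   # features = distances to training set
    y_hat = pls.predict(D[np.ix_(test, train)])
    print(np.corrcoef(y[test], y_hat.ravel())[0, 1])

The first two PLS score vectors from such a fit are what the abstract's 2-D score plot displays.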
Predicting the risk of toxic blooms of golden alga from cell abundance and environmental covariates
Patino, Reynaldo; VanLandeghem, Matthew M.; Denny, Shawn
2016-01-01
Golden alga (Prymnesium parvum) is a toxic haptophyte that has caused considerable ecological damage to marine and inland aquatic ecosystems worldwide. Studies focused primarily on laboratory cultures have indicated that toxicity is poorly correlated with the abundance of golden alga cells. This relationship, however, has not been rigorously evaluated in the field, where environmental conditions are much different. The ability to predict toxicity using readily measured environmental variables and golden alga abundance would allow managers to make rapid assessments of ichthyotoxicity potential without laboratory bioassay confirmation, which requires additional resources to accomplish. To assess the potential utility of these relationships, several a priori models relating lethal levels of golden alga ichthyotoxicity to golden alga abundance and environmental covariates were constructed. Model parameters were estimated using archived data from four river basins in Texas and New Mexico (Colorado, Brazos, Red, Pecos). Model predictive ability was quantified using cross-validation, sensitivity, and specificity, and the relative ranking of environmental covariate models was determined by Akaike Information Criterion values and Akaike weights. Overall, abundance was a generally good predictor of ichthyotoxicity, as leave-one-out cross-validation accuracy of abundance-only models ranged from ∼80% to ∼90%. Environmental covariates improved predictions, especially the ability to predict lethally toxic events (i.e., increased sensitivity), and top-ranked environmental covariate models differed among the four basins. These associations may be useful for monitoring as well as for understanding the abiotic factors that influence toxicity during blooms.
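The abundance-only model above is an ordinary logistic regression on golden alga cell density, scored by leave-one-out cross-validation and ranked by AIC. A compact sketch on simulated data, assuming statsmodels:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    log_abund = rng.normal(4.0, 1.0, size=120)  # log10 cells/mL (synthetic)
    toxic = rng.binomial(1, 1 / (1 + np.exp(-1.8 * (log_abund - 4.5))))

    X = sm.add_constant(log_abund)
    print("AIC:", sm.Logit(toxic, X).fit(disp=0).aic)

    hits = 0
    for i in range(len(toxic)):      # leave-one-out cross-validation
        keep = np.arange(len(toxic)) != i
        f = sm.Logit(toxic[keep], X[keep]).fit(disp=0)
        hits += int((f.predict(X[[i]])[0] > 0.5) == toxic[i])
    print("LOO accuracy:", hits / len(toxic))

Adding covariates such as salinity or temperature to X and comparing AIC values across candidate models reproduces the ranking exercise described in the abstract.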
Use of chaos theory and complex systems modeling to study alcohol effects on fetal condition.
Mehl, L E; Manchanda, S
1993-10-01
A systems dynamics computer model to predict birth complications for individual pregnant women was developed from prospectively collected data on 125 pregnant women. The model is based upon nonlinear mathematics derived from the study of chaos and complex systems. The model was refined until it correctly predicted the outcomes of all 125 cases in the development database, and was then tested prospectively on 27 additional pregnant women, making predictions on their level of obstetrical risk. Predictions were correct in 25 of the 27 prospective test cases. Predictions were made for fetal condition at birth, presence or absence of operative delivery, and presence or absence of uterine dysfunction. The model was then used to explore alcohol use during pregnancy. A reasonable spread of alcohol use existed among subjects, allowing consideration of alcohol effects. Alcohol was found to have differential effects on fetal condition at birth depending upon the presence or absence of high levels of psychosocial stress and the use of other substances. In all cases, the effect of alcohol was only evident after the level of 10 drinks per week was reached. For the high-stress/one-other-substance group, there could be an 18-fold effect on fetal condition at birth. For the low-stress/one-other-substance group, the effect was only 3-fold, and for the alcohol-alone group, the effect was negligible.
Pyo, Sujin; Lee, Jaewook; Cha, Mincheol; Jang, Huisu
2017-01-01
The prediction of the trends of stock and index prices is one of the important issues for market participants. Investors set trading or fiscal strategies based on these trends, and considerable research in various academic fields has sought to forecast financial markets. This study predicts the trends of the Korea Composite Stock Price Index 200 (KOSPI 200) prices using nonparametric machine learning models: an artificial neural network and support vector machines with polynomial and radial basis function kernels. In addition, this study raises controversial issues and tests hypotheses about them. Our results are inconsistent with those of precedent research, which is generally considered to show high prediction performance. Moreover, Google Trends proved not to be an effective factor in predicting the KOSPI 200 index prices in our frameworks. Furthermore, the ensemble methods did not improve the accuracy of the prediction. PMID:29136004
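For readers unfamiliar with the setup, the trend-prediction task reduces to a binary classification of next-day direction from lagged features. A scikit-learn sketch on synthetic returns (the study's actual inputs were KOSPI 200 data, technical indicators and Google Trends series):

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(5)
    returns = rng.normal(0.0, 0.01, size=1000)  # synthetic daily returns
    X = np.column_stack([returns[i:i + 995] for i in range(5)])  # 5 lagged returns
    y = (returns[5:] > 0).astype(int)           # next-day up/down label

    split = 800
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    model.fit(X[:split], y[:split])
    print("hit rate:", model.score(X[split:], y[split:]))

On serially independent synthetic returns the hit rate hovers near 0.5, a useful baseline to keep in mind when judging reported trend-prediction accuracies.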
Internet-based system for simulation-based medical planning for cardiovascular disease.
Steele, Brooke N; Draney, Mary T; Ku, Joy P; Taylor, Charles A
2003-06-01
Current practice in vascular surgery utilizes only diagnostic and empirical data to plan treatments, which does not enable quantitative a priori prediction of the outcomes of interventions. We have previously described simulation-based medical planning methods to model blood flow in arteries and plan medical treatments based on physiologic models. An important consideration for the design of these patient-specific modeling systems is the accessibility to physicians with modest computational resources. We describe a simulation-based medical planning environment developed for the World Wide Web (WWW) using the Virtual Reality Modeling Language (VRML) and the Java programming language.
Modeling Scramjet Flows with Variable Turbulent Prandtl and Schmidt Numbers
NASA Technical Reports Server (NTRS)
Xiao, X.; Hassan, H. A.; Baurle, R. A.
2006-01-01
A complete turbulence model, where the turbulent Prandtl and Schmidt numbers are calculated as part of the solution and where averages involving chemical source terms are modeled, is presented. The ability to avoid the use of assumed or evolution Probability Distribution Functions (PDFs) results in a highly efficient algorithm for reacting flows. The predictions of the model are compared with two sets of experiments involving supersonic mixing and one involving supersonic combustion. The results demonstrate the need for consideration of turbulence/chemistry interactions in supersonic combustion. In general, good agreement with experiment is indicated.
In silico prediction of drug therapy in catecholaminergic polymorphic ventricular tachycardia
Yang, Pei‐Chi; Moreno, Jonathan D.; Miyake, Christina Y.; Vaughn‐Behrens, Steven B.; Jeng, Mao‐Tsuen; Grandi, Eleonora; Wehrens, Xander H. T.; Noskov, Sergei Y.
2016-01-01
Key points: The mechanism of therapeutic efficacy of flecainide for catecholaminergic polymorphic ventricular tachycardia (CPVT) is unclear. Model predictions suggest that Na+ channel effects are insufficient to explain flecainide efficacy in CPVT. This study represents a first step toward predicting therapeutic mechanisms of drug efficacy in the setting of CPVT and then using these mechanisms to guide modelling and simulation to predict alternative drug therapies. Abstract: Catecholaminergic polymorphic ventricular tachycardia (CPVT) is an inherited arrhythmia syndrome characterized by fatal ventricular arrhythmias in structurally normal hearts during β‐adrenergic stimulation. Current treatment strategies include β‐blockade, flecainide and ICD implantation – none of which is fully effective, and each comes with associated risk. Recently, flecainide has gained considerable interest in CPVT treatment, but its mechanism of action for therapeutic efficacy is unclear. In this study, we performed in silico mutagenesis to construct a CPVT model and then used a computational modelling and simulation approach to make predictions of drug mechanisms and efficacy in the setting of CPVT. Experiments were carried out to validate model results. Our simulations revealed that Na+ channel effects are insufficient to explain flecainide efficacy in CPVT. The pure Na+ channel blocker lidocaine and the antianginal ranolazine were additionally tested and also found to be ineffective. When we tested lower-dose combination therapy with flecainide, β‐blockade and CaMKII inhibition, our model predicted superior therapeutic efficacy compared with flecainide monotherapy. Simulations indicate a polytherapeutic approach may mitigate the side-effects and proarrhythmic potential plaguing CPVT pharmacological management today. Importantly, our prediction of a novel polytherapy for CPVT was confirmed experimentally. Our simulations suggest that flecainide therapeutic efficacy in CPVT is unlikely to derive from primary interactions with the Na+ channel, and benefit may be gained from an alternative multi-drug regimen. PMID:26515697
Amuzu-Aweh, E N; Bijma, P; Kinghorn, B P; Vereijken, A; Visscher, J; van Arendonk, J A M; Bovenhuis, H
2013-12-01
Prediction of heterosis has a long history with mixed success, partly due to low numbers of genetic markers and/or small data sets. We investigated the prediction of heterosis for egg number, egg weight and survival days in domestic white Leghorns, using ∼400 000 individuals from 47 crosses and allele frequencies on ∼53 000 genome-wide single nucleotide polymorphisms (SNPs). When heterosis is due to dominance, and dominance effects are independent of allele frequencies, heterosis is proportional to the squared difference in allele frequency (SDAF) between parental pure lines (not necessarily homozygous). Under these assumptions, a linear model including regression on SDAF partitions crossbred phenotypes into pure-line values and heterosis, even without pure-line phenotypes. We therefore used models where phenotypes of crossbreds were regressed on the SDAF between parental lines. Accuracy of prediction was determined using leave-one-out cross-validation. SDAF predicted heterosis for egg number and weight with an accuracy of ∼0.5, but did not predict heterosis for survival days. Heterosis predictions allowed preselection of pure lines before field-testing, saving ∼50% of field-testing cost with only 4% loss in heterosis. Accuracies from cross-validation were lower than from the model-fit, suggesting that accuracies previously reported in literature are overestimated. Cross-validation also indicated that dominance cannot fully explain heterosis. Nevertheless, the dominance model had considerable accuracy, clearly greater than that of a general/specific combining ability model. This work also showed that heterosis can be modelled even when pure-line phenotypes are unavailable. We concluded that SDAF is a useful predictor of heterosis in commercial layer breeding.
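The key quantity is the squared difference in allele frequency between the parental lines, averaged over SNPs, on which crossbred performance is regressed. A small synthetic sketch of that regression (line count, SNP count and the slope are all invented):

    import numpy as np

    rng = np.random.default_rng(6)
    n_lines, n_snp = 8, 2000
    freq = rng.uniform(0.05, 0.95, size=(n_lines, n_snp))  # pure-line allele freqs

    sdaf, het = [], []
    for i in range(n_lines):
        for j in range(i + 1, n_lines):
            s = np.mean((freq[i] - freq[j]) ** 2)   # mean SDAF for cross i x j
            sdaf.append(s)
            het.append(40.0 * s + rng.normal(scale=0.3))  # dominance-only heterosis

    slope, intercept = np.polyfit(sdaf, het, 1)
    print("estimated regression slope:", slope)

Under the dominance assumption stated in the abstract, the slope absorbs the summed dominance effects, so new crosses can be ranked from allele frequencies alone, before any field test.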
Bashir Surfraz, M; Fowkes, Adrian; Plante, Jeffrey P
2017-08-01
The need to find an alternative to costly animal studies for developmental and reproductive toxicity testing has shifted the focus considerably to the assessment of in vitro developmental toxicology models and the exploitation of pharmacological data for relevant molecular initiating events. We hereby demonstrate how automation can be applied successfully to handle heterogeneous oestrogen receptor data from ChEMBL. Applying expert-derived thresholds to specific bioactivities allowed an activity call to be attributed to each data entry. Human intervention further improved this mechanistic dataset which was mined to develop structure-activity relationship alerts and an expert model covering 45 chemical classes for the prediction of oestrogen receptor modulation. The evaluation of the model using FDA EDKB and Tox21 data was quite encouraging. This model can also provide a teratogenicity prediction along with the additional information it provides relevant to the query compound, all of which will require careful assessment of potential risk by experts. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Calculation of Non-Bonded Forces Due to Sliding of Bundled Carbon Nanotubes
NASA Technical Reports Server (NTRS)
Frankland, S. J. V.; Bandorawalla, T.; Gates, T. S.
2003-01-01
An important consideration for load transfer in bundles of single-walled carbon nanotubes is the nonbonded (van der Waals) forces between the nanotubes and their effect on axial sliding of the nanotubes relative to each other. In this research, the non-bonded forces in a bundle of seven hexagonally packed (10,10) single-walled carbon nanotubes are represented as an axial force applied to the central nanotube. A simple model, based on momentum balance, is developed to describe the velocity response of the central nanotube to the applied force. The model is verified by comparing its velocity predictions with molecular dynamics simulations that were performed on the bundle with different force histories applied to the central nanotube. The model was found to quantitatively predict the nanotube velocities obtained from the molecular dynamics simulations. Both the model and the simulations predict a threshold force at which the nanotube releases from the bundle. This force converts to a shear yield strength of 10.5-11.0 MPa for (10,10) nanotubes in a bundle.
Implementation of algebraic stress models in a general 3-D Navier-Stokes method (PAB3D)
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.
1995-01-01
A three-dimensional multiblock Navier-Stokes code, PAB3D, which was developed for propulsion integration and general aerodynamic analysis, has been used extensively by NASA Langley and other organizations to perform both internal (exhaust) and external flow analysis of complex aircraft configurations. This code was designed to solve the simplified Reynolds-Averaged Navier-Stokes equations. A two-equation k-epsilon turbulence model has been used with considerable success, especially for attached flows. Accurately predicting transonic shock wave location and pressure recovery in separated flow regions has been more difficult. Two algebraic Reynolds stress models (ASMs) have recently been implemented in the code that greatly improved its ability to predict these difficult flow conditions. Good agreement with Direct Numerical Simulation (DNS) for a subsonic flat plate was achieved with ASMs developed by Shih, Zhu, and Lumley and by Gatski and Speziale. Good predictions were also achieved at subsonic and transonic Mach numbers for shock location and trailing-edge boattail pressure recovery on a single-engine afterbody/nozzle model.
Cucinotta, Francis A.; Cacao, Eliedonna
2017-05-12
Cancer risk is an important concern for galactic cosmic ray (GCR) exposures, which consist of a wide energy range of protons, heavy ions and secondary radiation produced in shielding and tissues. Relative biological effectiveness (RBE) factors for surrogate cancer endpoints in cell culture models and tumor induction in mice vary considerably, including significant variations for different tissues and mouse strains. Many studies suggest non-targeted effects (NTE) occur for low doses of high linear energy transfer (LET) radiation, leading to deviation from the linear dose response model used in radiation protection. Using the mouse Harderian gland tumor experiment, the only extensive data set for dose response modelling with a variety of particle types (>4), for the first time a particle track structure model of tumor prevalence is used to investigate the effects of NTEs in predictions of chronic GCR exposure risk. The NTE model led to a predicted risk 2-fold higher compared to a targeted effects model. The scarcity of data with animal models for tissues that dominate human radiation cancer risk, including lung, colon, breast, liver, and stomach, suggests that studies of NTEs in other tissues are urgently needed prior to long-term space missions outside the protection of the Earth's geomagnetic sphere.
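The contrast drawn above between targeted and non-targeted dose responses can be written down in a couple of lines: the targeted-effects term stays linear in dose, while the NTE term switches on at very low doses and saturates. The functional form and constants below are illustrative, not the paper's fitted model:

    import numpy as np

    def prevalence_te(dose, alpha):
        # Targeted effects: tumor prevalence linear in dose.
        return alpha * dose

    def prevalence_nte(dose, alpha, eta, d0=0.01):
        # Non-targeted effects add a saturating low-dose term, producing
        # the deviation from linearity at small doses.
        return alpha * dose + eta * (1.0 - np.exp(-dose / d0))

    doses = np.array([0.01, 0.05, 0.2, 0.5])  # Gy
    print(prevalence_te(doses, alpha=0.4))
    print(prevalence_nte(doses, alpha=0.4, eta=0.08))

Because the NTE term contributes its full weight even at the very low dose rates typical of chronic GCR exposure, integrating it over a mission produces the roughly 2-fold higher predicted risk quoted above.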
Model comparison for Escherichia coli growth in pouched food.
Fujikawa, Hiroshi; Yano, Kazuyoshi; Morozumi, Satoshi
2006-06-01
We recently studied the growth characteristics of Escherichia coli cells in pouched mashed potatoes (Fujikawa et al., J. Food Hyg. Soc. Japan, 47, 95-98 (2006)). Using those experimental data, in the present study we compared a newly developed logistic model of ours with the modified Gompertz and Baranyi models, which are used as growth models worldwide. Bacterial growth curves at constant temperatures in the range of 12 to 34 degrees C were successfully described with the new logistic model, as well as with the other models. The Baranyi model gave the least error in cell number, and our model gave the least error in the rate constant and the lag period. For dynamic temperature, our model successfully predicted the bacterial growth, whereas the Baranyi model considerably overestimated it. Also, there was a discrepancy between the growth curves described with the differential equations of the Baranyi model and those obtained with DMfit, a software program for Baranyi model fitting. These results indicate that the new logistic model can be used to predict bacterial growth in pouched food.
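For dynamic temperature predictions of this kind, the growth-rate constant is made a function of the instantaneous temperature and the growth equation is integrated through the temperature history. A sketch using a logistic form with a Ratkowsky square-root temperature dependence (the authors' new logistic model differs in detail):

    import numpy as np
    from scipy.integrate import solve_ivp

    def mu_max(T, b=0.03, T_min=8.0):
        # Ratkowsky square-root model for the specific growth rate (1/h).
        return (b * max(T - T_min, 0.0)) ** 2

    def temp(t):
        return 15.0 + 10.0 * np.sin(2.0 * np.pi * t / 24.0)  # daily cycle (h)

    def dxdt(t, x, x_max=1e9):
        # Logistic growth with a temperature-dependent rate constant.
        return [mu_max(temp(t)) * x[0] * (1.0 - x[0] / x_max)]

    sol = solve_ivp(dxdt, (0.0, 48.0), [1e3], max_step=0.5)
    print("log10 CFU/g after 48 h:", np.log10(sol.y[0, -1]))

The comparison in the abstract amounts to integrating each competing model through the same measured temperature profile and checking the predicted counts against plate counts.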
Predicting drug hydrolysis based on moisture uptake in various packaging designs.
Naversnik, Klemen; Bohanec, Simona
2008-12-18
An attempt was made to predict the stability of a moisture-sensitive drug product based on knowledge of the dependence of the degradation rate on tablet moisture. The moisture increase inside an HDPE bottle containing the drug formulation was simulated with the sorption-desorption moisture transfer model, which, in turn, allowed an accurate prediction of the drug degradation kinetics. The stability prediction, obtained by computer simulation, was made in a considerably shorter time frame and required fewer resources than a conventional stability study. The prediction was finally upgraded to a stochastic Monte Carlo simulation, which allowed quantitative incorporation of uncertainty stemming from various sources. The resulting distribution of the outcome of interest (amount of degradation product at expiry) is a comprehensive way of communicating the result along with its uncertainty, superior to single-value results or confidence intervals.
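The Monte Carlo upgrade described above amounts to sampling the uncertain inputs, propagating each draw through the moisture-degradation relationship, and reading off the distribution of degradant at expiry. A deliberately simplified sketch (all distributions and the exponential moisture sensitivity are invented placeholders):

    import numpy as np

    rng = np.random.default_rng(7)
    n = 10_000

    permeation = rng.normal(0.30, 0.05, n)       # % water gain per year (HDPE bottle)
    k_ref = rng.lognormal(np.log(0.10), 0.2, n)  # degradation rate %/yr at initial moisture
    moisture0, years = 2.0, 3.0                  # initial tablet moisture (% w/w), shelf life

    # Average moisture over shelf life, then a rate that grows with moisture.
    moisture_avg = moisture0 + 0.5 * permeation * years
    degradant = k_ref * np.exp(0.8 * (moisture_avg - moisture0)) * years

    print("median, 95th percentile at expiry:",
          np.percentile(degradant, [50, 95]))

Reporting the full percentile spread, rather than a single point estimate, is exactly the advantage over conventional stability extrapolation that the abstract claims.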
NASA Astrophysics Data System (ADS)
Huang, Wentao; Hua, Wei; Yu, Feng
2017-05-01
Due to the high airgap flux density generated by the magnets and the special doubly salient structure, the cogging torque of the flux-switching permanent magnet (FSPM) machine is considerable, which limits its further application. Based on model predictive current control (MPCC) and compensation control theory, a compensating-current MPCC (CC-MPCC) scheme is proposed and implemented to counteract the dominant components in the cogging torque of an existing three-phase 12/10 FSPM prototype machine, and thus to alleviate the influence of the cogging torque and improve the smoothness of the electromagnetic torque as well as the speed, where a comprehensive cost function is designed to evaluate the switching states. The simulated results indicate that the proposed CC-MPCC scheme can suppress the torque ripple significantly and offer satisfactory dynamic performance in comparison with the conventional MPCC strategy. Finally, experimental results validate both the theoretical and simulated predictions.
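Finite-set MPCC of the kind described evaluates every inverter switching state against a cost function at each control step; the compensating-current idea enters through the reference, which carries an extra component chosen to cancel the dominant cogging-torque harmonics. A stripped-down sketch with hypothetical machine numbers (a real FSPM drive model is far more detailed):

    import numpy as np

    Vdc, R, L, Ts = 200.0, 0.5, 2e-3, 5e-5   # DC link, resistance, inductance, step
    states = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                       [0, 1, 1], [0, 0, 1], [1, 0, 1], [1, 1, 1]])

    i_now = np.array([1.0, -0.5, -0.5])   # measured phase currents (A)
    i_ref = np.array([1.2, -0.6, -0.6])   # reference incl. compensating component
    e_emf = np.array([10.0, -5.0, -5.0])  # back-EMF estimate (V)

    def cost(s):
        v = Vdc * (s - s.mean())                            # two-level inverter voltages
        i_next = i_now + Ts / L * (v - R * i_now - e_emf)   # Euler current prediction
        return np.sum((i_ref - i_next) ** 2)                # track the reference

    best = min(states, key=lambda s: cost(s.astype(float)))
    print("selected switching state:", best)

The comprehensive cost function mentioned in the abstract augments this tracking term with additional penalties when weighing the eight candidate states.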
Theories of reasoned action and planned behavior as models of condom use: a meta-analysis.
Albarracín, D; Johnson, B T; Fishbein, M; Muellerleile, P A
2001-01-01
To examine how well the theories of reasoned action and planned behavior predict condom use, the authors synthesized 96 data sets (N = 22,594) containing associations between the models' key variables. Consistent with the theory of reasoned action's predictions, (a) condom use was related to intentions (weighted mean r = .45), (b) intentions were based on attitudes (r = .58) and subjective norms (r = .39), and (c) attitudes were associated with behavioral beliefs (r = .56) and norms were associated with normative beliefs (r = .46). Consistent with the theory of planned behavior's predictions, perceived behavioral control was related to condom use intentions (r = .45) and condom use (r = .25), but in contrast to the theory, it did not contribute significantly to condom use. The strength of these associations, however, was influenced by the consideration of past behavior. Implications of these results for HIV prevention efforts are discussed.
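The weighted mean correlations quoted above are conventionally pooled on Fisher's z scale with weights of n - 3 per study. A minimal sketch:

    import numpy as np

    def weighted_mean_r(r, n):
        # Fisher z-transform, weight by n - 3, back-transform the pooled mean.
        z = np.arctanh(np.asarray(r, dtype=float))
        w = np.asarray(n, dtype=float) - 3.0
        return np.tanh(np.sum(w * z) / np.sum(w))

    # Illustrative: three hypothetical studies relating intentions to condom use.
    print(weighted_mean_r([0.40, 0.50, 0.45], [150, 300, 220]))

Whether a synthesis weights by n - 3 or by the inverse sampling variance of r is a methodological choice; the abstract does not state which was used here.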
Investigation of computational aeroacoustic tools for noise predictions of wind turbine aerofoils
NASA Astrophysics Data System (ADS)
Humpf, A.; Ferrer, E.; Munduate, X.
2007-07-01
In this work trailing edge noise levels of a research aerofoil have been computed and compared to aeroacoustic measurements using two methods of different computational cost, and quantitative and qualitative results were obtained. On the one hand, the semi-empirical noise prediction tool NAFNoise [Moriarty P 2005 NAFNoise User's Guide. Golden, Colorado, July. http://wind.nrel.gov/designcodes/simulators/NAFNoise] was used to directly predict trailing edge noise by taking into consideration the nature of the experiments. On the other hand, aerodynamic and aeroacoustic calculations were performed with the full Navier-Stokes CFD code Fluent [Fluent Inc 2005 Fluent 6.2 Users Guide, Lebanon, NH, USA] on the basis of a steady RANS simulation. Aerodynamic characteristics were computed with the aid of various turbulence models, and through the combined usage of the implemented broadband noise source models an attempt was made to isolate and determine the trailing edge noise level.
Machine Learning Estimates of Natural Product Conformational Energies
Rupp, Matthias; Bauer, Matthias R.; Wilcken, Rainer; Lange, Andreas; Reutlinger, Michael; Boeckler, Frank M.; Schneider, Gisbert
2014-01-01
Machine learning has been used for estimation of potential energy surfaces to speed up molecular dynamics simulations of small systems. We demonstrate that this approach is feasible for significantly larger, structurally complex molecules, taking the natural product Archazolid A, a potent inhibitor of vacuolar-type ATPase, from the myxobacterium Archangium gephyra as an example. Our model estimates energies of new conformations by exploiting information from previous calculations via Gaussian process regression. Predictive variance is used to assess whether a conformation is in the interpolation region, allowing a controlled trade-off between prediction accuracy and computational speed-up. For energies of relaxed conformations at the density functional level of theory (implicit solvent, DFT/BLYP-disp3/def2-TZVP), mean absolute errors of less than 1 kcal/mol were achieved. The study demonstrates that predictive machine learning models can be developed for structurally complex, pharmaceutically relevant compounds, potentially enabling considerable speed-ups in simulations of larger molecular structures. PMID:24453952
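The workflow described above (estimate with a Gaussian process and trust the estimate only inside the interpolation region) can be reproduced with scikit-learn's GP regressor, whose predictive standard deviation supplies the accuracy/speed trade-off knob. Descriptors and energies below are synthetic stand-ins:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(8)
    X = rng.uniform(-np.pi, np.pi, size=(60, 4))  # stand-in torsion descriptors
    y = np.cos(X).sum(axis=1)                     # stand-in DFT energies (kcal/mol)

    gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(1e-4),
                                  normalize_y=True).fit(X, y)

    X_new = rng.uniform(-np.pi, np.pi, size=(5, 4))
    mean, std = gp.predict(X_new, return_std=True)
    needs_dft = std > 0.5   # fall back to explicit DFT when uncertainty is high
    print(mean, needs_dft)

Conformations flagged by the variance test are sent to the expensive quantum calculation and then folded back into the training set, which is how the controlled trade-off in the abstract operates.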
NASA Technical Reports Server (NTRS)
Balasubramaniam, R.; Rame, E.; Kizito, J.; Kassemi, M.
2006-01-01
The purpose of this report is to provide a summary of state-of-the-art predictions for two-phase flows relevant to Advanced Life Support. We strive to pick out the most used and accepted models for pressure drop and flow regime predictions. The main focus is to identify gaps in predictive capabilities in partial gravity for Lunar and Martian applications. Following a summary of flow regimes and pressure drop correlations for terrestrial and zero gravity, we analyze the fully developed annular gas-liquid flow in a straight cylindrical tube. This flow is amenable to analytical closed form solutions for the flow field and heat transfer. These solutions, valid for partial gravity as well, may be used as baselines and guides to compare experimental measurements. The flow regimes likely to be encountered in the water recovery equipment currently under consideration for space applications are provided in an appendix.
Prediction of fluctuating pressure environments associated with plume-induced separated flow fields
NASA Technical Reports Server (NTRS)
Plotkin, K. J.
1973-01-01
The separated flow environment induced by underexpanded rocket plumes during the boost phase of rocket vehicles has been investigated. A simple semi-empirical model for predicting the extent of separation was developed. This model offers considerable computational economy compared to other schemes reported in the literature, and has been shown to be in good agreement with limited flight data. The unsteady pressure field in plume-induced separated regions was investigated. It was found that fluctuations differed from those for a rigid flare only at low frequencies. The major difference between plume-induced separation and flare-induced separation was shown to be an increase in shock oscillation distance for the plume case. The prediction schemes were applied to the PRR shuttle launch configuration. It was found that fluctuating pressures from plume-induced separation are not as severe as other fluctuating environments at the critical flight condition of maximum dynamic pressure.
An analysis of dental development in Pleistocene Homo using skeletal growth and chronological age.
Šešelj, Maja
2017-07-01
This study takes a new approach to interpreting dental development in Pleistocene Homo in comparison with recent modern humans. As rates of dental development and skeletal growth are correlated given age in modern humans, using age and skeletal growth in tandem yields more accurate dental development estimates. Here, I apply these models to fossil Homo to obtain more individualized predictions and interpretations of their dental development relative to recent modern humans. Proportional odds logistic regression models based on three recent modern human samples (N = 181) were used to predict permanent mandibular tooth development scores in five Pleistocene subadults: Homo erectus/ergaster, Neanderthals, and anatomically modern humans (AMHs). Explanatory variables include a skeletal growth indicator (i.e., diaphyseal femoral length) and chronological age. AMHs Lagar Velho 1 and Qafzeh 10 share delayed incisor development, but exhibit considerable idiosyncratic variation within and across tooth types, relative to each other and to the reference samples. Neanderthals Dederiyeh 1 and Le Moustier 1 exhibit delayed incisor coupled with advanced molar development, but differences are reduced when femoral diaphysis length is considered. Dental development in KNM-WT 15000 Homo erectus/ergaster, while advanced for his age, almost exactly matches the predictions once femoral length is included in the models. This study provides a new interpretation of dental development in KNM-WT 15000 as primarily reflecting his faster rates of skeletal growth. While the two AMH specimens exhibit considerable individual variation, the Neanderthals exhibit delayed incisor development early and advanced molar development later in ontogeny. © 2017 Wiley Periodicals, Inc.
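A proportional odds model of the kind used here can be sketched with statsmodels' (experimental) OrderedModel; the data below are simulated, with tooth score stages driven by age and femoral length:

    import numpy as np
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(9)
    n = 181
    age = rng.uniform(2.0, 16.0, n)               # years
    femur = 80 + 20 * age + rng.normal(0, 15, n)  # diaphyseal femoral length (mm)

    # Ordinal development score (0-3) from a latent scale plus logistic noise.
    latent = 0.4 * age + 0.01 * femur + rng.logistic(size=n)
    score = np.digitize(latent, [3.0, 5.0, 7.0])

    mod = OrderedModel(score, np.column_stack([age, femur]), distr="logit")
    res = mod.fit(method="bfgs", disp=0)
    print(res.predict([[10.0, 300.0]]))  # stage probabilities for a new individual

Scoring a fossil then reduces to plugging its age estimate and femoral diaphysis length into the fitted model and comparing the observed stage with the predicted stage probabilities.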
Non-climatic thermal adaptation: implications for species' responses to climate warming.
Marshall, David J; McQuaid, Christopher D; Williams, Gray A
2010-10-23
There is considerable interest in understanding how ectothermic animals may physiologically and behaviourally buffer the effects of climate warming. Much less consideration is being given to how organisms might adapt to non-climatic heat sources in ways that could confound predictions for responses of species and communities to climate warming. Although adaptation to non-climatic heat sources (solar and geothermal) seems likely in some marine species, climate warming predictions for marine ectotherms are largely based on adaptation to climatically relevant heat sources (air or surface sea water temperature). Here, we show that non-climatic solar heating underlies thermal resistance adaptation in a rocky-eulittoral-fringe snail. Comparisons of the maximum temperatures of the air, the snail's body and the rock substratum with solar irradiance and physiological performance show that the highest body temperature is primarily controlled by solar heating and re-radiation, and that the snail's upper lethal temperature exceeds the highest climatically relevant regional air temperature by approximately 22°C. Non-climatic thermal adaptation probably features widely among marine and terrestrial ectotherms and because it could enable species to tolerate climatic rises in air temperature, it deserves more consideration in general and for inclusion into climate warming models.
Modeling of Turbulence Effect on Liquid Jet Atomization
NASA Technical Reports Server (NTRS)
Trinh, H. P.
2007-01-01
Recent studies indicate that turbulence behavior within a liquid jet has a considerable effect on the atomization process. Such turbulent flow phenomena are encountered in most practical applications of common liquid spray devices. This research aims to model the effects of turbulence occurring inside a cylindrical liquid jet on its atomization process. The two widely used atomization models, the Kelvin-Helmholtz (KH) instability model of Reitz for primary liquid jet disintegration and the Taylor analogy breakup (TAB) model of O'Rourke and Amsden for secondary droplet breakup, are examined. Additional terms are formulated and appropriately implemented into these two models to account for the turbulence effect. Results for the flow conditions examined in this study indicate that the turbulence terms are significant in comparison with other terms in the models. In the primary breakup regime, the turbulent liquid jet tends to break up into large drops while its intact core is slightly shorter than that without turbulence. In contrast, the secondary droplet breakup with the inside liquid turbulence consideration produces smaller drops. Computational results indicate that the proposed models provide predictions that agree reasonably well with available measured data.
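The TAB model mentioned above tracks a normalized drop distortion y(t) as a forced, damped oscillator and declares breakup when y exceeds 1. A baseline sketch with the standard constants (the paper's turbulence terms are omitted here):

    import numpy as np
    from scipy.integrate import solve_ivp

    CF, Ck, Cd, Cb = 1.0 / 3.0, 8.0, 5.0, 0.5  # standard TAB constants

    def tab_rhs(t, y, r, u_rel, rho_g, rho_l, sigma, mu_l):
        # y[0]: normalized distortion, y[1]: its rate of change.
        forcing = (CF / Cb) * rho_g * u_rel**2 / (rho_l * r**2)
        restore = Ck * sigma / (rho_l * r**3)
        damping = Cd * mu_l / (rho_l * r**2)
        return [y[1], forcing - restore * y[0] - damping * y[1]]

    # Water drop (r = 50 um) in a 100 m/s gas stream.
    args = (50e-6, 100.0, 1.2, 1000.0, 0.072, 1e-3)
    sol = solve_ivp(tab_rhs, (0.0, 1e-3), [0.0, 0.0], args=args, max_step=1e-6)
    print("breakup predicted:", bool(np.any(sol.y[0] > 1.0)))

The turbulence extension in the paper adds energy from the jet's internal turbulent fluctuations to this balance, which is why it predicts smaller product drops in secondary breakup.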
Methane rising from the Deep: Hydrates, Bubbles, Oil Spills, and Global Warming
NASA Astrophysics Data System (ADS)
Leifer, I.; Rehder, G. J.; Solomon, E. A.; Kastner, M.; Asper, V. L.; Joye, S. B.
2011-12-01
Elevated methane concentrations in near-surface waters and the atmosphere have been reported for seepage from depths of nearly 1 km at the Gulf of Mexico hydrate observatory (MC118), suggesting that for some methane sources, deepsea methane is not trapped and can contribute to atmospheric greenhouse gas budgets. Ebullition is key, with important sensitivity to the formation of hydrate skins and oil coatings, high-pressure solubility, bubble size, and bubble plume processes. ROV bubble-tracking studies showed survival to near-thermocline depths. Studies with a numerical bubble propagation model demonstrated that, when structure I hydrate skins were considered, most methane was transported only to mid-water-column depths. Consideration of structure II hydrates instead, which are stable to far shallower depths and appropriate for natural gas mixtures, allows bubbles to survive much closer to the surface. Moreover, model predictions of vertical methane and alkane profiles and bubble size evolution were in better agreement with observations after consideration of structure II hydrate properties as well as an improved implementation of plume properties, such as currents. These results demonstrate the importance of correctly incorporating bubble hydrate processes in efforts to predict the impact of deepsea seepage, as well as to understand the fate of bubble-transported oil and methane from deepsea pipeline leaks and well blowouts. Application to the DWH spill demonstrated the importance of deepsea processes to the fate of spilled subsurface oil. Because several of these parameters vary temporally (bubble flux, currents, temperature), sensitivity studies indicate the importance of real-time monitoring data.
Santos, Sílvio B.; Carvalho, Carla; Azeredo, Joana; Ferreira, Eugénio C.
2014-01-01
The prevalence and impact of bacteriophages in the ecology of bacterial communities, coupled with their ability to control pathogens, make it essential to understand and predict the dynamics between phage and bacteria populations. To achieve this knowledge, it is necessary to develop mathematical models able to explain and simulate the population dynamics of phage and bacteria. We have developed an unstructured mathematical model using delay-differential equations to predict the interactions between a broad-host-range Salmonella phage and its pathogenic host. The model takes into consideration the main biological parameters that rule phage-bacteria interactions, such as the adsorption rate, latent period, burst size, bacterial growth rate, and substrate uptake rate, among others. The experimental validation of the model was performed with data from phage-interaction studies in a 5 L bioreactor. The key and innovative aspect of the model was the introduction of variations in the latent period and adsorption rate values, which are considered constants in previously developed models. By modelling the latent period as a normal distribution of values and the adsorption rate as a function of the bacterial growth rate, it was possible to accurately predict the behaviour of the phage-bacteria population. The model was shown to predict simulated data in good agreement with the experimental observations and explains how a lytic phage and its host bacteria are able to coexist. PMID:25051248
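A toy deterministic version of such a model conveys the structure: susceptible cells grow logistically, adsorption converts them to infected cells, and lysis after the latent period releases a burst of new phage. Here the latent period is approximated by an exposed compartment rather than the paper's delay-differential formulation, and all parameter values are illustrative:

    import numpy as np
    from scipy.integrate import solve_ivp

    mu, K = 0.8, 1e9        # host growth rate (1/h), carrying capacity (cells/mL)
    delta = 1e-9            # adsorption rate constant (mL/h)
    tau, burst = 0.5, 100.0 # latent period (h), burst size (phage/cell)

    def rhs(t, y):
        S, I, P = y         # susceptible, infected, free phage (per mL)
        adsorb = delta * S * P
        return [mu * S * (1.0 - (S + I) / K) - adsorb,   # susceptible cells
                adsorb - I / tau,                        # infected, lysing at 1/tau
                burst * I / tau - adsorb]                # free phage

    sol = solve_ivp(rhs, (0.0, 24.0), [1e6, 0.0, 1e4], method="LSODA", rtol=1e-8)
    print(sol.y[:, -1])     # densities after 24 h

Replacing the fixed 1/tau lysis rate with a normally distributed latent period, and making delta depend on the host growth rate, are precisely the two modifications the abstract identifies as the model's innovation.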
NASA Astrophysics Data System (ADS)
Barlas, Thanasis; Jost, Eva; Pirrung, Georg; Tsiantas, Theofanis; Riziotis, Vasilis; Navalkar, Sachin T.; Lutz, Thorsten; van Wingerden, Jan-Willem
2016-09-01
Simulations of a stiff rotor configuration of the DTU 10MW Reference Wind Turbine are performed in order to assess the impact of prescribed flap motion on the aerodynamic loads at the blade sectional and rotor integral levels. Results of the engineering models used by DTU (HAWC2), TUDelft (Bladed) and NTUA (hGAST) are compared to the CFD predictions of USTUTT-IAG (FLOWer). Results show fairly good agreement in terms of axial loading, while the alignment of tangential and drag-related forces across the numerical codes needs to be improved, together with the unsteady corrections associated with rotor wake dynamics. The use of a new wake model in HAWC2 shows considerable accuracy improvements.
NASA Technical Reports Server (NTRS)
Noll, Thomas E.
1990-01-01
The paper describes recent accomplishments and current research projects along four main thrusts in aeroservoelasticity at NASA Langley. One activity focuses on enhancing the modeling and analysis procedures to accurately predict aeroservoelastic interactions. Improvements to the minimum-state method of approximating unsteady aerodynamics are shown to provide precise low-order models for design and simulation tasks. Recent extensions in aerodynamic correction-factor methodology are also described. With respect to analysis procedures, the paper reviews novel enhancements to matched filter theory and random process theory for predicting the critical gust profile and the associated time-correlated gust loads for structural design considerations. Two research projects leading towards improved design capability are also summarized: (1) an integrated structure/control design capability and (2) procedures for obtaining low-order robust digital control laws for aeroelastic applications.
Eggers, Sander M; Taylor, Myra; Sathiparsad, Reshma; Bos, Arjan Er; de Vries, Hein
2015-11-01
Despite its popularity, few studies have assessed the temporal stability and cross-lagged effects of the Theory of Planned Behavior factors: attitude, subjective norms and self-efficacy. For this study, 298 adolescent learners from KwaZulu-Natal, South Africa, filled out a Theory of Planned Behavior questionnaire on teenage pregnancy at baseline and after 6 months. Structural equation modeling showed that there were considerable cross-lagged effects between attitude and subjective norms. Temporal stability was moderate, with test-retest correlations ranging from 0.37 to 0.51, and the model was able to predict intentions to have safe sex (R2 = 0.69). Implications for practice and future research are discussed. © The Author(s) 2013.
Modeling marine boundary-layer clouds with a two-layer model: A one-dimensional simulation
NASA Technical Reports Server (NTRS)
Wang, Shouping
1993-01-01
A two-layer model of the marine boundary layer is described. The model is used to simulate both stratocumulus and shallow cumulus clouds in downstream simulations. Over cold sea surfaces, the model predicts a relatively uniform structure in the boundary layer with 90%-100% cloud fraction. Over warm sea surfaces, the model predicts a relatively strongly decoupled and conditionally unstable structure with a cloud fraction between 30% and 60%. A strong large-scale divergence considerably limits the height of the boundary layer and decreases relative humidity in the upper part of the cloud layer; thus, a low cloud fraction results. The effects of drizzle on the boundary-layer structure and cloud fraction are also studied with downstream simulations. It is found that drizzle dries and stabilizes the cloud layer and tends to decouple the cloud from the subcloud layer. Consequently, solid stratocumulus clouds may break up and the cloud fraction may decrease because of drizzle.
Prediction of Transonic Vortex Flows Using Linear and Nonlinear Turbulent Eddy Viscosity Models
NASA Technical Reports Server (NTRS)
Bartels, Robert E.; Gatski, Thomas B.
2000-01-01
Three-dimensional transonic flow over a delta wing is investigated with a focus on the effect of transition and influence of turbulence stress anisotropies. The performance of linear eddy viscosity models and an explicit algebraic stress model is assessed at the start of vortex flow, and the results compared with experimental data. To assess the effect of transition location, computations that either fix transition or are fully turbulent are performed. To assess the effect of the turbulent stress anisotropy, comparisons are made between predictions from the algebraic stress model and the linear eddy viscosity models. Both transition location and turbulent stress anisotropy significantly affect the 3D flow field. The most significant effect is found to be the modeling of transition location. At a Mach number of 0.90, the computed solution changes character from steady to unsteady depending on transition onset. Accounting for the anisotropies in the turbulent stresses also considerably impacts the flow, most notably in the outboard region of flow separation.
Crook, Julia E.; Thomas, Colleen S.; Siersema, Peter D.; Rex, Douglas K.; Wallace, Michael B.
2017-01-01
Objective: The adenoma detection rate (ADR) varies widely between physicians, possibly due to patient population differences, hampering direct ADR comparison. We developed and validated a prediction model for adenoma detection in an effort to determine if physicians' ADRs should be adjusted for patient-related factors. Materials and methods: Screening and surveillance colonoscopy data from the cross-sectional multicenter cluster-randomized Endoscopic Quality Improvement Program-3 (EQUIP-3) study (NCT02325635) were used. The dataset was split into two cohorts based on center. A prediction model for detection of ≥1 adenoma was developed using multivariable logistic regression and subsequently internally (bootstrap resampling) and geographically validated. We compared predicted to observed ADRs. Results: The derivation (5 centers, 35 physicians, overall ADR: 36%) and validation (4 centers, 31 physicians, overall ADR: 40%) cohorts included 9934 and 10034 patients, respectively (both cohorts: 48% male, median age 60 years). Independent predictors for detection of ≥1 adenoma were: age (optimism-corrected odds ratio (OR): 1.02; 95% confidence interval (CI): 1.02–1.03), male sex (OR: 1.73; 95% CI: 1.60–1.88), body mass index (OR: 1.02; 95% CI: 1.01–1.03), American Society of Anesthesiology physical status class (OR class II vs. I: 1.29; 95% CI: 1.17–1.43, OR class ≥III vs. I: 1.57; 95% CI: 1.32–1.86), surveillance versus screening (OR: 1.39; 95% CI: 1.27–1.53), and Hispanic or Latino ethnicity (OR: 1.13; 95% CI: 1.00–1.27). The model's discriminative ability was modest (C-statistic: 0.63 in the derivation cohort and 0.60 in the validation cohort). The observed ADR was considerably lower than predicted for 12/66 (18.2%) physicians and 2/9 (22.2%) centers, and considerably higher than predicted for 18/66 (27.3%) physicians and 4/9 (44.4%) centers. Conclusion: The substantial variation in ADRs could only partially be explained by patient-related factors. These data suggest that ADR variation could likely also be due to other factors, e.g. physician or technical issues. PMID:28957445
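To see how such a model would be applied, the reported odds ratios can be combined into a per-patient log-odds score. The intercept is not given in the abstract, so `beta0` below is a hypothetical placeholder, and the actual model may center continuous covariates; this is a sketch, not the authors' fitted model.

```python
# Sketch of turning the reported ORs into a predicted probability of
# detecting >=1 adenoma. beta0 is an ASSUMED placeholder intercept;
# the paper's continuous covariates may be centered, which is not done here.
import math

beta0 = -2.0                       # hypothetical intercept (not reported)
log_or = {"age_yrs": math.log(1.02), "male": math.log(1.73),
          "bmi": math.log(1.02), "asa2": math.log(1.29),
          "asa3plus": math.log(1.57), "surveillance": math.log(1.39),
          "hispanic": math.log(1.13)}

patient = {"age_yrs": 60, "male": 1, "bmi": 27,
           "asa2": 1, "asa3plus": 0, "surveillance": 0, "hispanic": 0}

logit = beta0 + sum(log_or[k] * patient[k] for k in log_or)
prob = 1.0 / (1.0 + math.exp(-logit))
print(f"predicted P(>=1 adenoma) = {prob:.2f}")
```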
NASA Astrophysics Data System (ADS)
Schliep, E. M.; Gelfand, A. E.; Holland, D. M.
2015-12-01
There is considerable demand for accurate air quality information in human health analyses. The sparsity of ground monitoring stations across the United States motivates the need for advanced statistical models to predict air quality metrics, such as PM2.5, at unobserved sites. Remote sensing technologies have the potential to expand our knowledge of PM2.5 spatial patterns beyond what we can predict from current PM2.5 monitoring networks. Data from satellites have an additional advantage in not requiring extensive emission inventories necessary for most atmospheric models that have been used in earlier data fusion models for air pollution. Statistical models combining monitoring station data with satellite-obtained aerosol optical thickness (AOT), also referred to as aerosol optical depth (AOD), have been proposed in the literature with varying levels of success in predicting PM2.5. The benefit of using AOT is that satellites provide complete gridded spatial coverage. However, the challenges involved with using it in fusion models are (1) the correlation between the two data sources varies both in time and in space, (2) the data sources are temporally and spatially misaligned, and (3) there is extensive missingness in the monitoring data and also in the satellite data due to cloud cover. We propose a hierarchical autoregressive spatially varying coefficients model to jointly model the two data sources, which addresses the foregoing challenges. Additionally, we offer formal model comparison for competing models in terms of model fit and out of sample prediction of PM2.5. The models are applied to daily observations of PM2.5 and AOT in the summer months of 2013 across the conterminous United States. Most notably, during this time period, we find small in-sample improvement incorporating AOT into our autoregressive model but little out-of-sample predictive improvement.
Landfill gas control at military installations. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shafer, R.A.; Renta-Babb, A.; Bandy, J.T.
1984-01-01
This report provides information useful to Army personnel responsible for recognizing and solving potential problems from gas generated by landfills. Information is provided on recognizing and gauging the magnitude of landfill gas problems; selecting appropriate gas control strategies, procedures, and equipment; use of computer modeling to predict gas production and migration and the success of gas control devices; and safety considerations.
Acorn Production Prediction Models for Five Common Oak Species of the Eastern United States
Anita K. Rose; Cathryn H. Greenberg; Todd M. Fearer
2011-01-01
Acorn production varies considerably among oak (Quercus) species, individual trees, years, and locations, which directly affects oak regeneration and populations of wildlife species that depend on acorns for food. Hard mast indices provide a relative ranking and basis for comparison of within- and between-year acorn crop size at a broad scale, but do...
Background: Serum concentrations of polybrominated diphenyl ethers (PBDEs) in U.S. women are believed to be among the world’s highest; however, little information exists on the partitioning of PBDEs between serum and breast milk and how this may affect infant exposure. Obj...
Simulation and Prediction of Warm Season Drought in North America
NASA Technical Reports Server (NTRS)
Wang, Hailan; Chang, Yehui; Schubert, Siegfried D.; Koster, Randal D.
2018-01-01
This presentation describes our recent work on model simulation and prediction of warm season drought in North America. The emphasis is on the contribution from the leading modes of subseasonal atmospheric circulation variability, which are often present in the form of stationary Rossby waves. Here we take advantage of results from observations, reanalyses, and simulations and reforecasts performed using the NASA Goddard Earth Observing System (GEOS-5) atmospheric and coupled general circulation models (GCMs). Our results show that stationary Rossby waves play a key role in Northern Hemisphere (NH) atmospheric circulation and surface meteorology variability on subseasonal timescales. In particular, such waves have been crucial to the development of recent short-term warm season heat waves and droughts over North America (e.g., the 1988, 1998, and 2012 summer droughts) and northern Eurasia (e.g., the 2003 summer heat wave over Europe and the 2010 summer drought and heat wave over Russia). Through an investigation of the physical processes by which these waves lead to the development of warm season drought in North America, it is further found that these waves can serve as a potential source of drought predictability. In order to properly represent their effect and exploit this source of predictability, a model needs to correctly simulate the NH mean jet streams and be able to predict the sources of these waves. Given the GEOS-5 AGCM's deficiencies in simulating the NH jet streams and tropical convection during boreal summer, an approach has been developed to artificially remove much of the model mean bias, leading to considerable improvement in model simulation and prediction of stationary Rossby waves and drought development in North America. Our study points to the need to identify key model biases that limit model simulation and prediction of regional climate extremes, and to diagnose the origin of these biases so as to inform model improvement efforts.
Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano
2010-01-01
The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of log BB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (log P), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental log BB data had been determined in vivo. In particular, since molecules with log BB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with log BB < −1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which show a percentage of good classification of about 80%. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the log BB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. PMID:20427217
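As a sketch of the approach (not the authors' fitted model), an LDA classifier on the same four descriptors can be set up as follows; the tiny dataset below is hypothetical, standing in for the 307-compound training set, and the descriptor values are invented for illustration.

```python
# Minimal LDA sketch on the paper's descriptor set (logP, TPSA, counts
# of acidic and basic atoms). Data and resulting coefficients are
# hypothetical; the paper's fitted model is not reproduced here.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# columns: logP, TPSA, n_acidic, n_basic
X = np.array([[2.5, 40.0, 0, 1],
              [1.0, 90.0, 1, 0],
              [3.1, 30.0, 0, 0],
              [0.2, 120.0, 2, 0],
              [2.0, 55.0, 0, 1],
              [-0.5, 140.0, 1, 1]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1: logBB > 0.3 (permeant), 0: logBB < -1

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([[2.8, 35.0, 0, 1]]))   # classify a new candidate molecule
```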
Al-Chokhachy, Robert K.; Wegner, Seth J.; Isaak, Daniel J.; Kershner, Jeffrey L.
2013-01-01
Understanding a species’ thermal niche is becoming increasingly important for management and conservation within the context of global climate change, yet there have been surprisingly few efforts to compare assessments of a species’ thermal niche across methods. To address this uncertainty, we evaluated the differences in model performance and interpretations of a species’ thermal niche when using different measures of stream temperature and surrogates for stream temperature. Specifically, we used a logistic regression modeling framework with three different indicators of stream thermal conditions (elevation, air temperature, and stream temperature) referenced to a common set of Brook Trout Salvelinus fontinalis distribution data from the Boise River basin, Idaho. We hypothesized that stream temperature predictions that were contemporaneous with fish distribution data would have stronger predictive performance than composite measures of stream temperature or any surrogates for stream temperature. Across the different indicators of thermal conditions, the highest measure of accuracy was found for the model based on stream temperature predictions that were contemporaneous with fish distribution data (percent correctly classified = 71%). We found considerable differences in inferences across models, with up to 43% disagreement in the amount of stream habitat that was predicted to be suitable. The differences in performance between models support the growing efforts in many areas to develop accurate stream temperature models for investigations of species’ thermal niches.
NASA Astrophysics Data System (ADS)
Downs, Cooper; Mikic, Zoran; Linker, Jon A.; Caplan, Ronald M.; Lionello, Roberto; Torok, Tibor; Titov, Viacheslav; Riley, Pete; Mackay, Duncan; Upton, Lisa
2017-08-01
Over the past two decades, our group has used a magnetohydrodynamic (MHD) model of the corona to predict the appearance of total solar eclipses. In this presentation we detail recent innovations and new techniques applied to our prediction model for the August 21, 2017 total solar eclipse. First, we have developed a method for capturing the large-scale energized fields typical of the corona, namely the sheared/twisted fields built up through long-term processes of differential rotation and flux emergence/cancellation. Using inferences of the location and chirality of filament channels (deduced from a magnetofrictional model driven by the evolving photospheric field produced by the Advective Flux Transport model), we tailor a customized boundary electric field profile that emerges shear along the desired portions of polarity inversion lines (PILs) and cancels flux to create long twisted flux systems low in the corona. This method has the potential to improve the morphological shape of streamers in the low solar corona. Second, we apply, for the first time in our eclipse prediction simulations, a new wave-turbulence-dissipation (WTD) based model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the coronal field, a key property for modeling and predicting the thermal-magnetic structure of the solar corona. Overall, we will examine the effect of these considerations on white-light and EUV observables from the simulations, and present them in the context of our final 2017 eclipse prediction model. Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to numerous planning and execution challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the three models were 1.86, 2.02 and 1.52 cm, respectively. Comparison of these predicted values with the actual instrumentation data was used to quantify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
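For reference, the empirical method referred to above is typically a Peck-type Gaussian settlement trough. A minimal sketch follows, with the trough-width parameter and tunnel depth assumed for illustration rather than taken from the Qom case.

```python
# Sketch of the empirical (Peck-type) Gaussian settlement trough; the
# trough width i and tunnel depth are ASSUMED values, not the Qom data.
import numpy as np

def peck_trough(x, s_max, i):
    """Surface settlement at transverse offset x from the tunnel axis."""
    return s_max * np.exp(-x**2 / (2.0 * i**2))

i = 0.5 * 20.0                 # i ~ K * z0 with K ~ 0.5, assumed depth z0 = 20 m
x = np.linspace(-40, 40, 9)
print(peck_trough(x, s_max=1.86, i=i))   # cm, using the 1.86 cm empirical peak
```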
Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model
NASA Astrophysics Data System (ADS)
Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.
2014-12-01
Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival was within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; using the actual arrival times, an average absolute error of 10.0 hours (RMSE=11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrival is not within the predicted range, this still allows prediction errors caused by the tested CME input parameters to be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
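The verification arithmetic described above is straightforward to reproduce. A sketch with a synthetic 48-member ensemble and a hypothetical observed arrival time:

```python
# Sketch of the ensemble post-processing described above: given n
# predicted CME arrival times and the observed arrival, report whether
# the ensemble range covers the observation and the prediction errors.
# Times are synthetic hours relative to an arbitrary epoch.
import numpy as np

predicted = np.random.default_rng(0).normal(48.0, 6.0, size=48)  # 48 members
observed = 52.0

print("range covers observation:", predicted.min() <= observed <= predicted.max())
print(f"mean prediction error: {abs(predicted.mean() - observed):.1f} h")
print(f"RMSE of members:       {np.sqrt(np.mean((predicted - observed)**2)):.1f} h")
```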
Combining Thermal And Structural Analyses
NASA Technical Reports Server (NTRS)
Winegar, Steven R.
1990-01-01
Computer code makes programs compatible so that stresses and deformations can be calculated. Paper describes computer code combining thermal analysis with structural analysis. Called SNIP (for SINDA-NASTRAN Interfacing Program), code provides interface between finite-difference thermal model of system and finite-element structural model when there is no node-to-element correlation between models. Eliminates much manual work in converting temperature results of SINDA (Systems Improved Numerical Differencing Analyzer) program into thermal loads for NASTRAN (NASA Structural Analysis) program. Used to analyze concentrating reflectors for solar generation of electric power. Large thermal and structural models needed to predict distortion of surface shapes, and SNIP saves considerable time and effort in combining models.
Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop
NASA Technical Reports Server (NTRS)
Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.
2014-01-01
Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted by both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Lulin, E-mail: lulin.yuan@duke.edu; Wu, Q. Jackie; Yin, Fang-Fang
2014-02-15
Purpose: Sparing of a single-side parotid gland is a common practice in head-and-neck (HN) intensity modulated radiation therapy (IMRT) planning. It is a special case of the dose sparing tradeoff between different organs-at-risk. The authors describe an improved mathematical model for predicting achievable dose sparing in the parotid glands in HN IMRT planning that incorporates single-side sparing considerations based on patient anatomy and learning from prior plan data. Methods: Among 68 HN cases analyzed retrospectively, 35 cases had physician-prescribed single-side parotid sparing preferences. The single-side sparing model was trained with the cases that had single-side sparing preferences, while the standard model was trained with the remainder of the cases. A receiver operating characteristic (ROC) analysis was performed to determine the best criterion that separates the two case groups, using the physician's single-side sparing prescription as ground truth. The final predictive model (combined model) takes into account single-side sparing by switching between the standard and single-side sparing models according to the single-side sparing criterion. The models were tested with 20 additional cases. The significance of the improvement in prediction accuracy by the combined model over the standard model was evaluated using the Wilcoxon rank-sum test. Results: Using the ROC analysis, the best single-side sparing criterion is: (1) the predicted median dose of one parotid is higher than 24 Gy; and (2) that of the other is higher than 7 Gy. This criterion gives a true positive rate of 0.82 and a false positive rate of 0.19, respectively. For the bilateral sparing cases, the combined and standard models performed equally well, with the median of the prediction errors for parotid median dose being 0.34 Gy for both models (p = 0.81). For the single-side sparing cases, the standard model overestimates the median dose by 7.8 Gy on average, while the predictions of the combined model differ from actual values by only 2.2 Gy (p = 0.005). Similarly, the sum of residues between the modeled and actual plan DVHs is the same for the bilateral sparing cases for both models (p = 0.67), while the standard model predicts significantly higher DVHs than the combined model for the single-side sparing cases (p = 0.01). Conclusions: The combined model for predicting parotid sparing, which takes into account single-side sparing, improves the prediction accuracy over the previous model.
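As a concrete illustration, the ROC-derived switching rule described above reduces to a simple predicate. The sketch below is a hypothetical rendering; `standard_model` and `single_side_model` stand in for the two trained dose-prediction models, which are not reproduced here.

```python
# Sketch of the combined model's switching rule: route a case to the
# single-side sparing model when the ROC-derived criterion holds (one
# predicted parotid median dose > 24 Gy AND the other > 7 Gy).
def choose_model(pred_median_left_gy, pred_median_right_gy,
                 standard_model, single_side_model):
    hi = max(pred_median_left_gy, pred_median_right_gy)
    lo = min(pred_median_left_gy, pred_median_right_gy)
    if hi > 24.0 and lo > 7.0:          # criterion from the ROC analysis
        return single_side_model
    return standard_model

model = choose_model(28.0, 9.5, "standard", "single-side")
print(model)   # -> "single-side" for this example case
```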
Limit of Predictability in Mantle Convection
NASA Astrophysics Data System (ADS)
Bello, L.; Coltice, N.; Rolf, T.; Tackley, P. J.
2013-12-01
Linking mantle convection models with Earth's tectonic history has received considerable attention in recent years: modeling the evolution of supercontinent cycles, predicting present-day mantle structure or improving plate reconstructions. Predictions of future supercontinents are currently being made based on seismic tomography images, plate motion history and mantle convection models, and methods of data assimilation for mantle flow are developing. However, so far there are no studies of the limit of predictability these models are facing. Indeed, given the chaotic nature of mantle convection, we can expect forecasts and hindcasts to have a limited range of predictability. We propose here to use an approach similar to those used in dynamic meteorology, and more recently for the geodynamo, to evaluate the predictability limit of mantle dynamics forecasts. Following the pioneering work in weather forecasting (Lorenz 1965), we study the time evolution of twin experiments, started from two very close initial temperature fields, and monitor the error growth. We extract a characteristic time of the system, known as the e-folding timescale, which is used to estimate the predictability limit. The final predictability time depends on the imposed initial error and the error tolerance in our model. We compute 3D spherical convection solutions using StagYY (Tackley, 2008). We first evaluate the influence of the Rayleigh number on the limit of predictability of isoviscous convection. Then, we investigate the effects of various rheologies, from the simplest (isoviscous mantle) to more complex ones (plate-like behavior and floating continents). We show that the e-folding time increases with the wavelength of the flow and reaches 10 Myr with plate-like behavior and continents. Such an e-folding time, together with the uncertainties in the mantle temperature distribution, suggests that prediction of mantle structure from a given initial state is limited to <50 Myr. References: 1. Lorenz, E. N. A study of the predictability of a 28-variable atmospheric model. Tellus 17, 321-333 (1965). 2. Tackley, P. J. Modelling compressible mantle convection with large viscosity contrasts in a three-dimensional spherical shell using the yin-yang grid. Physics of the Earth and Planetary Interiors 171, 7-18 (2008).
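For readers unfamiliar with the twin-experiment diagnostic, the e-folding time is the time over which the inter-run error grows by a factor of e; it can be estimated by fitting the early, exponential phase of error growth. A minimal sketch with a synthetic error series:

```python
# Sketch of the twin-experiment diagnostic: fit the early exponential
# growth of the RMS difference between two runs to estimate the
# e-folding time. The error series below is synthetic.
import numpy as np

t = np.linspace(0, 200, 101)            # Myr
err = 1e-4 * np.exp(t / 10.0)           # synthetic error, e-folding = 10 Myr
err = np.minimum(err, 1.0)              # saturation at the climatological spread

growing = err < 0.1                     # fit only the pre-saturation phase
slope, _ = np.polyfit(t[growing], np.log(err[growing]), 1)
print(f"e-folding time ~ {1.0 / slope:.1f} Myr")
```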
Stabilizing l1-norm prediction models by supervised feature grouping.
Kamkar, Iman; Gupta, Sunil Kumar; Phung, Dinh; Venkatesh, Svetha
2016-02-01
Emerging Electronic Medical Records (EMRs) have reformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results. However, in the presence of correlated features, these methods select features that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, and so can assist clinicians toward accurate decision making. Copyright © 2015 Elsevier Inc. All rights reserved.
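The instability that motivates the method is easy to demonstrate: fitting the Lasso on bootstrap resamples of data with correlated features yields differing selected feature sets. A minimal sketch with synthetic data (the proposed supervised grouping itself is not reproduced):

```python
# Sketch of Lasso selection instability under bootstrap resampling
# when two features are highly correlated. Data are synthetic.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 10
z = rng.standard_normal(n)
X = rng.standard_normal((n, p))
X[:, 0] = z + 0.05 * rng.standard_normal(n)   # features 0 and 1 nearly duplicate
X[:, 1] = z + 0.05 * rng.standard_normal(n)
y = z + 0.5 * rng.standard_normal(n)

for seed in range(3):
    idx = np.random.default_rng(seed).integers(0, n, n)   # bootstrap resample
    coef = Lasso(alpha=0.1).fit(X[idx], y[idx]).coef_
    print("selected features:", np.flatnonzero(coef != 0))
```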
Keep it simple? Predicting primary health care costs with clinical morbidity measures
Brilleman, Samuel L.; Gravelle, Hugh; Hollinghurst, Sandra; Purdy, Sarah; Salisbury, Chris; Windmeijer, Frank
2014-01-01
Models of the determinants of individuals' primary care costs can be used to set capitation payments to providers and to test for horizontal equity. We compare the ability of eight measures of patient morbidity and multimorbidity to predict future primary care costs and examine capitation payments based on them. The measures were derived from four morbidity descriptive systems: 17 chronic diseases in the Quality and Outcomes Framework (QOF); 17 chronic diseases in the Charlson scheme; 114 Expanded Diagnosis Clusters (EDCs); and 68 Adjusted Clinical Groups (ACGs). These were applied to the patient records of 86,100 individuals in 174 English practices. For a given disease description system, counts of diseases and sets of disease dummy variables had similar explanatory power. The EDC measures performed best, followed by the QOF and ACG measures. The Charlson measures had the worst performance but still improved markedly on models containing only age, gender, deprivation and practice effects. Comparisons of predictive power for different morbidity measures were similar for linear and exponential models, but the relative predictive power of the models varied with the morbidity measure. Capitation payments for an individual patient vary considerably with the morbidity measure included in the cost model. Even for the best fitting model, large differences between expected cost and capitation payment for some types of patient suggest incentives for patient selection. Models with any of the morbidity measures show higher costs for more deprived patients, but the positive effect of deprivation on cost was smaller in better fitting models. PMID:24657375
Comparison of statistical models for analyzing wheat yield time series.
Michel, Lucie; Makowski, David
2013-01-01
The world's population is predicted to exceed nine billion by 2050 and there is increasing concern about the capability of agriculture to feed such a large population. Foresight studies on food security are frequently based on crop yield trends estimated from yield time series provided by national and regional statistical agencies. Various types of statistical models have been proposed for the analysis of yield time series, but the predictive performances of these models have not yet been evaluated in detail. In this study, we present eight statistical models for analyzing yield time series and compare their ability to predict wheat yield at the national and regional scales, using data provided by the Food and Agriculture Organization of the United Nations and by the French Ministry of Agriculture. The Holt-Winters and dynamic linear models performed equally well, giving the most accurate predictions of wheat yield. However, dynamic linear models have two advantages over Holt-Winters models: they can be used to reconstruct past yield trends retrospectively and to analyze uncertainty. The results obtained with dynamic linear models indicated a stagnation of wheat yields in many countries, but the estimated rate of increase of wheat yield remained above 0.06 t ha⁻¹ year⁻¹ in several countries in Europe, Asia, Africa and America, and the estimated values were highly uncertain for several major wheat producing countries. The rate of yield increase differed considerably between French regions, suggesting that efforts to identify the main causes of yield stagnation should focus on a subnational scale.
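As a hedged illustration of the simpler of the two best-performing approaches, the sketch below fits a Holt (additive-trend) exponential smoothing model, the non-seasonal Holt-Winters variant appropriate for annual data, to a synthetic yield series; a real FAO series would replace `yields`.

```python
# Sketch of a Holt (trend-only) exponential smoothing fit of the kind
# compared in the study, applied to a SYNTHETIC annual wheat-yield series.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
years = np.arange(1961, 2011)
yields = 2.0 + 0.06 * (years - 1961) + rng.normal(0, 0.25, years.size)  # t/ha

fit = ExponentialSmoothing(yields, trend="add").fit()
print(fit.forecast(5))   # predicted yield for the next five years
```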
A molecular thermodynamic model for the stability of hepatitis B capsids
NASA Astrophysics Data System (ADS)
Kim, Jehoon; Wu, Jianzhong
2014-06-01
Self-assembly of capsid proteins and genome encapsidation are two critical steps in the life cycle of most plant and animal viruses. A theoretical description of such processes from a physicochemical perspective may help better understand viral replication and morphogenesis and thus provide fresh insights into experimental studies of antiviral strategies. In this work, we propose a molecular thermodynamic model for predicting the stability of Hepatitis B virus (HBV) capsids either with or without loaded nucleic materials. With the key components represented by coarse-grained thermodynamic models, the theoretical predictions are in excellent agreement with experimental data for the formation free energies of empty T4 capsids over a broad range of temperatures and ion concentrations. The theoretical model predicts T3/T4 dimorphism also in good agreement with capsid formation at in vivo and in vitro conditions. In addition, we have studied the stability of the viral particles in response to physiological cellular conditions with explicit consideration of the hydrophobic association of capsid subunits, electrostatic interactions, molecular excluded volume effects, entropy of mixing, and conformational changes of the biomolecular species. The coarse-grained model captures the essential features of HBV nucleocapsid stability revealed by recent experiments.
Computational design of water-soluble α-helical barrels.
Thomson, Andrew R; Wood, Christopher W; Burton, Antony J; Bartlett, Gail J; Sessions, Richard B; Brady, R Leo; Woolfson, Derek N
2014-10-24
The design of protein sequences that fold into prescribed de novo structures is challenging. General solutions to this problem require geometric descriptions of protein folds and methods to fit sequences to these. The α-helical coiled coils present a promising class of protein for this and offer considerable scope for exploring hitherto unseen structures. For α-helical barrels, which have more than four helices and accessible central channels, many of the possible structures remain unobserved. Here, we combine geometrical considerations, knowledge-based scoring, and atomistic modeling to facilitate the design of new channel-containing α-helical barrels. X-ray crystal structures of the resulting designs match predicted in silico models. Furthermore, the observed channels are chemically defined and have diameters related to oligomer state, which present routes to design protein function. Copyright © 2014, American Association for the Advancement of Science.
Consideration of Reaction Intermediates in Structure-Activity Relationships: A Key to Understanding and Prediction
A structure-activity relationship (SAR) represents an empirical means for generalizing chemical information relative to biological activity, and is frequent...
Application of a baseflow filter for evaluating model structure suitability of the IHACRES CMD
NASA Astrophysics Data System (ADS)
Kim, H. S.
2015-02-01
The main objective of this study was to assess the predictive uncertainty arising from the structure of a rainfall-runoff model coupling a conceptual (non-linear) module with a metric transfer function (linear) module. The methodology was based on comparing the outputs of the rainfall-runoff model with those of an alternative model approach, which was used to minimise uncertainties arising from the data and the model structure. A baseflow filter was adopted to better understand deficiencies in the forms of the rainfall-runoff model while avoiding the uncertainties related to the data and the model structure. The predictive uncertainty from the model structure was investigated for representative groups of catchments having similar hydrological response characteristics in the upper Murrumbidgee Catchment. In assessing model structure suitability, the consistency (or variability) of catchment response over time and space in model performance and parameter values was investigated to detect problems related to the temporal and spatial variability of model accuracy. The predictive error caused by model uncertainty was evaluated through analysis of the variability of the model performance and parameters. A graphical comparison of model residuals, effective rainfall estimates and hydrographs was used to assess systematic deviation between simulated and observed behaviour and general differences in the timing and magnitude of peak flows. The model's predictive ability was very sensitive to catchment response characteristics. The linear module performs reasonably well in the wetter catchments but has considerable difficulties when applied to the drier catchments, where the hydrologic response is dominated by quick flow. The non-linear module has a potential limitation in its capacity to capture the non-linear processes that convert observed rainfall into effective rainfall in both the wetter and drier catchments. This comparative study, based on a better quantification of the accuracy and precision of hydrological modelling predictions, yields a better understanding of how these model deficiencies might be addressed.
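The abstract does not specify which baseflow filter was adopted; as a representative sketch, the widely used Lyne-Hollick one-parameter digital filter separates quickflow from baseflow as follows (single forward pass; alpha = 0.925 is a conventional value).

```python
# Representative sketch of the Lyne-Hollick one-parameter baseflow
# filter (an ASSUMED choice; the paper's exact filter is not named).
import numpy as np

def lyne_hollick(q, alpha=0.925):
    quick = np.zeros_like(q, dtype=float)
    for t in range(1, q.size):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # keep baseflow within [0, q]
    return q - quick                               # baseflow component

q = np.array([1.0, 1.2, 5.0, 9.0, 6.0, 3.0, 2.0, 1.5, 1.2, 1.1])  # streamflow
print(lyne_hollick(q))
```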
How predictable is the winter extremely cold days over temperate East Asia?
NASA Astrophysics Data System (ADS)
Luo, Xiao; Wang, Bin
2017-04-01
Skillful seasonal prediction of the number of extremely cold days (NECD) has considerable benefits for climate risk management and economic planning. Yet the predictability of NECD associated with the East Asian winter monsoon remains largely unexplored. The present work estimates the NECD predictability in temperate East Asia (TEA, 30°-50°N, 110°-140°E), where current dynamical models exhibit limited prediction skill. We show that about 50 % of the total variance of NECD in the TEA region is likely predictable, as estimated using a physics-based empirical (P-E) model with three consequential autumn predictors: developing El Niño/La Niña, Eurasian Arctic Ocean temperature anomalies, and geopotential height anomalies over northern and eastern Asia. We find that the barotropic geopotential height anomaly over Asia can persist from autumn to winter, thereby serving as a predictor for winter NECD. Further analysis reveals that the sources of NECD predictability and the physical basis for prediction of NECD are essentially the same as those for prediction of winter mean temperature over the same region. This finding implies that forecasting seasonal mean temperature can provide useful information for prediction of extreme cold events. Interpretation of the lead-lag linkages between the three predictors and the predictand is provided to stimulate further studies.
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
Analysis of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded species. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two TLSP variants (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. The four models are compared and analyzed from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol and ethanol-ethyl lactate solutions show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
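For context, the CLS baseline that the nonlinear models are compared against amounts to two linear solves: estimate the pure-component spectra from calibration mixtures, then invert them for a new sample. A minimal sketch with synthetic spectra (the polynomial LSP/TLSP extensions are not reproduced):

```python
# Minimal sketch of classical least squares (CLS): fit pure-component
# spectra K from calibration mixtures A = C K, then invert K to predict
# concentrations of a new sample. Spectra here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
K_true = rng.random((2, 50))            # 2 pure-component spectra, 50 wavenumbers
C_cal = rng.random((20, 2))             # 20 calibration mixtures
A_cal = C_cal @ K_true + 1e-3 * rng.standard_normal((20, 50))

K_hat, *_ = np.linalg.lstsq(C_cal, A_cal, rcond=None)   # fit pure spectra
a_new = np.array([0.3, 0.7]) @ K_true                   # unknown sample spectrum
c_hat, *_ = np.linalg.lstsq(K_hat.T, a_new, rcond=None) # predict concentrations
print(c_hat)                                            # ~ [0.3, 0.7]
```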
How hierarchical is language use?
Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.
2012-01-01
It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157
Physiologically Based Pharmacokinetic Model for Terbinafine in Rats and Humans
Hosseini-Yeganeh, Mahboubeh; McLachlan, Andrew J.
2002-01-01
The aim of this study was to develop a physiologically based pharmacokinetic (PB-PK) model capable of describing and predicting terbinafine concentrations in plasma and tissues in rats and humans. A PB-PK model consisting of 12 tissue and 2 blood compartments was developed using concentration-time data for tissues from rats (n = 33) after intravenous bolus administration of terbinafine (6 mg/kg of body weight). It was assumed that all tissues except skin and testis tissues were well-stirred compartments with perfusion rate limitations. The uptake of terbinafine into skin and testis tissues was described by a PB-PK model which incorporates a membrane permeability rate limitation. The concentration-time data for terbinafine in human plasma and tissues were predicted by use of a scaled-up PB-PK model, which took oral absorption into consideration. The predictions obtained from the global PB-PK model for the concentration-time profile of terbinafine in plasma and tissues were in close agreement with the observed concentration data for rats. The scaled-up PB-PK model provided an excellent prediction of published terbinafine concentration-time data obtained after the administration of single and multiple oral doses in humans. The estimated volume of distribution at steady state (Vss) obtained from the PB-PK model agreed with the reported value of 11 liters/kg. The apparent volume of distribution of terbinafine in skin and adipose tissues accounted for 41 and 52%, respectively, of the Vss for humans, indicating that uptake into and redistribution from these tissues dominate the pharmacokinetic profile of terbinafine. The PB-PK model developed in this study was capable of accurately predicting the plasma and tissue terbinafine concentrations in both rats and humans and provides insight into the physiological factors that determine terbinafine disposition. PMID:12069977
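A minimal sketch of the perfusion-rate-limited (well-stirred) tissue balance assumed for most compartments follows; the parameter values and the declining arterial input are illustrative assumptions, not the fitted terbinafine values.

```python
# Sketch of the well-stirred, perfusion-rate-limited tissue balance:
#   V_t dC_t/dt = Q_t * (C_arterial - C_t / Kp_t)
# All values below are ASSUMED for illustration.
import numpy as np
from scipy.integrate import solve_ivp

Q, V, Kp = 1.2, 0.5, 20.0                  # flow (L/h), volume (L), partition coeff.
c_art = lambda t: 1.0 * np.exp(-0.3 * t)   # assumed declining arterial conc. (mg/L)

rhs = lambda t, c: [Q / V * (c_art(t) - c[0] / Kp)]
sol = solve_ivp(rhs, (0.0, 24.0), [0.0], max_step=0.1)
print(f"tissue conc. at 24 h: {sol.y[0, -1]:.3f} mg/L")
```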
Compound analysis via graph kernels incorporating chirality.
Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya
2010-12-01
High accuracy is paramount when predicting biochemical characteristics using quantitative structure-property relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds, which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that support vector regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
Superheavy magnetic monopoles and the standard cosmology
NASA Astrophysics Data System (ADS)
Turner, M. S.
1984-10-01
The superheavy magnetic monopoles predicted to exist in grand unified theories (GUTs) are of great interest for particle physics, astrophysics and cosmology. Astrophysical and cosmological considerations are invaluable in the study of the properties of GUT monopoles. Because of the glut of monopoles predicted in the standard cosmology for the simplest GUTs, the simplest GUTs and the standard cosmology are not compatible. This is a very important piece of information about physics at unification energies and about the earliest moments of the Universe. The cosmological consequences of GUT monopoles within the context of the standard hot big bang model are reviewed.
Weak shock propagation through a turbulent atmosphere
NASA Technical Reports Server (NTRS)
Pierce, Allan D.; Sparrow, Victor W.
1990-01-01
Consideration is given to the propagation through turbulence of transient pressure waveforms whose initial onset at any given point is an abrupt shock. The work is motivated by the desire to eventually develop a mathematical model for predicting statistical features, such as peak overpressures and spike widths, of sonic booms generated by supersonic aircraft. It is argued that the transient waveform received at points where x > 0 will begin with a pressure jump, and a formulation is developed for predicting the magnitude of this jump and the time derivatives of the pressure waveform immediately following it.
Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Coroneos, Rula M.
2007-01-01
Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models, based on fundamental considerations, that can be used to predict under what conditions voids form, the size of the voids, and the subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network produced identical predictions.
A multidimensional stability model for predicting shallow landslide size and shape across landscapes
Milledge, David G; Bellugi, Dino; McKean, Jim A; Densmore, Alexander L; Dietrich, William E
2014-01-01
The size of a shallow landslide is a fundamental control on both its hazard and geomorphic importance. Existing models are either unable to predict landslide size or are computationally intensive such that they cannot practically be applied across landscapes. We derive a model appropriate for natural slopes that is capable of predicting shallow landslide size but simple enough to be applied over entire watersheds. It accounts for lateral resistance by representing the forces acting on each margin of potential landslides using earth pressure theory and by representing root reinforcement as an exponential function of soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are well constrained by field data. The model predicts failure for the observed scar geometry and finds that larger or smaller conformal shapes are more stable. Numerical experiments demonstrate that friction on the boundaries of a potential landslide increases considerably the magnitude of lateral reinforcement, relative to that due to root cohesion alone. We find that there is a critical depth in both cohesive and cohesionless soils, resulting in a minimum size for failure, which is consistent with observed size-frequency distributions. Furthermore, the differential resistance on the boundaries of a potential landslide is responsible for a critical landslide shape which is longer than it is wide, consistent with observed aspect ratios. Finally, our results show that minimum size increases as approximately the square of failure surface depth, consistent with observed landslide depth-area data. PMID:26213663
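The paper's multidimensional force balance is not reproduced here; the sketch below uses only a one-dimensional infinite-slope balance to illustrate one ingredient named above, exponentially decaying root reinforcement producing a critical (minimum) failure depth. All parameter values are assumed.

```python
# Simplified infinite-slope illustration (NOT the paper's full model):
# root cohesion decaying exponentially with depth yields a minimum
# depth at which the factor of safety drops below 1. Values assumed.
import numpy as np

gamma, phi, theta = 18e3, np.radians(33), np.radians(35)  # unit weight, friction, slope
c_soil, c_root0, j = 1e3, 6e3, 0.7                        # cohesions (Pa), decay length (m)

def factor_of_safety(z):
    cohesion = c_soil + c_root0 * np.exp(-z / j)          # roots weaken with depth
    resisting = cohesion + gamma * z * np.cos(theta)**2 * np.tan(phi)
    driving = gamma * z * np.sin(theta) * np.cos(theta)
    return resisting / driving

z = np.linspace(0.1, 3.0, 30)
fs = factor_of_safety(z)
print("critical depth ~", z[np.argmax(fs < 1.0)] if (fs < 1.0).any() else "stable")
```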
Modelling invasion for a habitat generalist and a specialist plant species
Evangelista, P.H.; Kumar, S.; Stohlgren, T.J.; Jarnevich, C.S.; Crall, A.W.; Norman, J. B.; Barnett, D.T.
2008-01-01
Predicting suitable habitat and the potential distribution of invasive species is a high priority for resource managers and systems ecologists. Most models are designed to identify habitat characteristics that define the ecological niche of a species, with little consideration of individual species' traits. We tested five commonly used modelling methods on two invasive plant species, the habitat generalist Bromus tectorum and the habitat specialist Tamarix chinensis, to compare model performances, evaluate predictability, and relate results to distribution traits associated with each species. Most of the tested models performed similarly for each species; however, the generalist species proved to be more difficult to predict than the specialist species. The highest area under the receiver-operating characteristic curve (AUC) values obtained with independent validation data sets were 0.503 for B. tectorum and 0.885 for T. chinensis. Similarly, a confusion matrix for B. tectorum had a highest overall accuracy of 55%, while the overall accuracy for T. chinensis was 85%. Models for the generalist species had varying performances, poor evaluations, and inconsistent results. This may be a result of a generalist's capability to persist in a wide range of environmental conditions that are not easily defined by the data, independent variables, or model design. Models for the specialist species had consistently strong performances, high evaluations, and similar results among different model applications. This is likely a consequence of the specialist's requirement for explicit environmental resources and ecological barriers that are easily defined by predictive models. Although defining new invaders as generalist or specialist species can be challenging, model performances and evaluations may provide valuable information on a species' potential invasiveness.
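To make the two evaluation metrics above concrete, here is a minimal Python sketch computing an AUC and an overall accuracy from a confusion matrix with scikit-learn; the presence/absence labels and suitability scores are random stand-ins, not the study's validation data.

```python
# Minimal sketch: evaluating a habitat-suitability model with AUC and
# confusion-matrix overall accuracy. Labels and scores are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                 # observed presence (1) / absence (0)
y_score = np.clip(y_true * 0.3 + rng.random(200) * 0.7, 0, 1)  # model suitability scores

auc = roc_auc_score(y_true, y_score)                  # threshold-independent discrimination
y_pred = (y_score >= 0.5).astype(int)                 # threshold at 0.5 for a confusion matrix
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
overall_accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"AUC = {auc:.3f}, overall accuracy = {overall_accuracy:.0%}")
```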
Copula based prediction models: an application to an aortic regurgitation study
Kumar, Pranesh; Shoukri, Mohamed M
2007-01-01
Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls, and hence the application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 × (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for the slope coefficient (0.4810, 1.3003). The predicted post-operative ejection fractions in the lower range of pre-operative ejection measurements differ considerably between the two models, and the prediction errors of the copula model are smaller. To validate the copula methodology we re-sampled with replacement fifty independent bootstrap samples and estimated concordance statistics of 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of the accuracy components are 0.9233 and 0.8654 for the copula and correlation models, respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling, since correlation-based prediction models cannot adequately capture the dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples.
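The simulation idea behind the Methods section can be sketched briefly: draw dependent uniforms from an Archimedean copula, then transform them through gamma marginals (the distributions found to fit the ejection fractions). The sketch below uses a Clayton copula via the Marshall-Olkin frailty method; the copula parameter and gamma marginals are illustrative assumptions, not the study's estimates.

```python
# Minimal sketch: simulate correlated data from a Clayton (Archimedean)
# copula with gamma marginals. All parameter values are illustrative.
import numpy as np
from scipy import stats

def clayton_copula_sample(n, theta, rng):
    """Sample n pairs (u, v) from a Clayton copula, theta > 0."""
    v = rng.gamma(shape=1.0 / theta, scale=1.0, size=n)   # latent frailty
    e = rng.exponential(size=(n, 2))
    return (1.0 + e / v[:, None]) ** (-1.0 / theta)       # inverse generator

rng = np.random.default_rng(42)
u = clayton_copula_sample(1000, theta=2.0, rng=rng)

# Hypothetical gamma marginals for pre- and post-operative ejection fractions
pre = stats.gamma.ppf(u[:, 0], a=30, scale=0.02)
post = stats.gamma.ppf(u[:, 1], a=28, scale=0.02)
slope, intercept = np.polyfit(pre, post, 1)
print(f"simulated regression: post = {intercept:.3f} + {slope:.3f} * pre")
```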
NASA Astrophysics Data System (ADS)
Anderson, Brian J.; Korth, Haje; Welling, Daniel T.; Merkin, Viacheslav G.; Wiltberger, Michael J.; Raeder, Joachim; Barnes, Robin J.; Waters, Colin L.; Pulkkinen, Antti A.; Rastaetter, Lutz
2017-02-01
Two of the geomagnetic storms for the Space Weather Prediction Center Geospace Environment Modeling challenge occurred after data were first acquired by the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE). We compare Birkeland currents from AMPERE with predictions from four models for the 4-5 April 2010 and 5-6 August 2011 storms. The four models are the Weimer (2005b) field-aligned current statistical model, the Lyon-Fedder-Mobarry magnetohydrodynamic (MHD) simulation, the Open Global Geospace Circulation Model MHD simulation, and the Space Weather Modeling Framework MHD simulation. The MHD simulations were run as described in Pulkkinen et al. (2013) and the results obtained from the Community Coordinated Modeling Center. The total radial Birkeland current, I_Total, and the distribution of radial current density, J_r, for all models are compared with AMPERE results. While the total currents are well correlated, the quantitative agreement varies considerably. The J_r distributions reveal discrepancies between the models and observations related to the latitude distribution, morphologies, and lack of nightside current systems in the models. The results motivate enhancing the simulations first by increasing the simulation resolution and then by examining the relative merits of implementing more sophisticated ionospheric conductance models, including ionospheric outflows or other omitted physical processes. Some aspects of the system, including substorm timing and location, may remain challenging to simulate, implying a continuing need for real-time specification.
Zeng, Qing; Zhang, Yamian; Sun, Gongqi; Duo, Hairui; Wen, Li; Lei, Guangchun
2015-01-01
Scaly-sided Merganser is a globally endangered species restricted to eastern Asia. Estimating its population is difficult, and a considerable gap exists between estimates from its breeding grounds and wintering sites. In this study, we built a species distribution model (SDM) using Maxent with presence-only data to predict the potential wintering habitat for Scaly-sided Merganser in China. The area under the receiver operating characteristic curve (AUC) suggests high predictive power of the model (training and testing AUC were 0.97 and 0.96, respectively). The most significant environmental variables included annual mean temperature, mean temperature of the coldest quarter, minimum temperature of the coldest month and precipitation of the driest quarter. Suitable conditions for Scaly-sided Merganser are predicted in the middle and lower reaches of the Yangtze River, especially in Jiangxi, Hunan and Hubei Provinces. The predicted suitable habitat encompasses 6,984 km of river. Based on survey results from three consecutive winters (2010–2012) and previous studies, we estimated the entire wintering population of Scaly-sided Merganser in China to be 3,561 ± 478 individuals, which is consistent with estimates from its breeding grounds.
Accurate and scalable social recommendation using mixed-membership stochastic block models.
Godoy-Lorite, Antonia; Guimerà, Roger; Moore, Cristopher; Sales-Pardo, Marta
2016-12-13
With increasing amounts of information available, modeling and predicting user preferences-for books or articles, for example-are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users' ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user's and item's groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets.
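The rating rule described above can be written down compactly: the probability that user u gives item i rating r mixes over group pairs, p(r | u, i) = Σ_{k,l} θ_{uk} η_{il} p_{kl}(r), with no linearity assumption on p. Below is a toy numpy sketch of that prediction step; the group counts and membership vectors are random placeholders, not parameters inferred by the EM algorithm of the paper.

```python
# Toy sketch of the mixed-membership rating prediction,
#   p(r | u, i) = sum_{k,l} theta[u,k] * eta[i,l] * p[k,l,r].
import numpy as np

rng = np.random.default_rng(1)
K, L, R = 3, 3, 5                            # user groups, item groups, rating levels
theta = rng.dirichlet(np.ones(K), size=10)   # mixed memberships for 10 users
eta = rng.dirichlet(np.ones(L), size=20)     # mixed memberships for 20 items
p = rng.dirichlet(np.ones(R), size=(K, L))   # free rating distribution per group pair

def rating_distribution(u, i):
    """p(r | u, i) as a length-R probability vector."""
    return np.einsum("k,l,klr->r", theta[u], eta[i], p)

dist = rating_distribution(u=0, i=5)
print("predicted rating distribution:", np.round(dist, 3))
print("expected rating:", float(np.dot(np.arange(1, R + 1), dist)))
```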
e-Bitter: Bitterant Prediction by the Consensus Voting From the Machine-Learning Methods
Zheng, Suqing; Jiang, Mengying; Zhao, Chengwei; Zhu, Rui; Hu, Zhicheng; Xu, Yong; Lin, Fu
2018-01-01
In-silico bitterant prediction has received considerable attention because experimental screening for bitterness is expensive and laborious. In this work, we compiled a fully experimental dataset containing 707 bitterants and 592 non-bitterants, which is distinct from the fully or partially hypothetical non-bitterant datasets used in previous works. Based on this experimental dataset, we harness consensus votes from multiple machine-learning methods (e.g., deep learning) combined with molecular fingerprints to build bitter/bitterless classification models with five-fold cross-validation, which are further inspected by a Y-randomization test and applicability domain analysis. One of the best consensus models affords an accuracy, precision, specificity, sensitivity, F1-score, and Matthews correlation coefficient (MCC) of 0.929, 0.918, 0.898, 0.954, 0.936, and 0.856, respectively, on our test set. For automatic bitterant prediction, a graphical program, "e-Bitter", was developed so that users can obtain predictions with a simple mouse click. To the best of our knowledge, this is the first time a consensus model has been adopted for bitterant prediction, and the first free stand-alone software of this kind for experimental food scientists.
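A minimal sketch of the consensus-voting idea, under stated assumptions: the fingerprints below are random binary stand-ins for molecular fingerprints, and the three member models are generic scikit-learn classifiers, not the specific models of e-Bitter.

```python
# Minimal sketch: hard-voting consensus of several classifiers with
# five-fold cross-validation. Fingerprints and labels are mock data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
X = rng.integers(0, 2, size=(300, 128)).astype(float)  # mock binary fingerprints
y = rng.integers(0, 2, size=300)                       # bitter (1) / bitterless (0)

consensus = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
    ],
    voting="hard",  # majority vote of the member models
)
scores = cross_val_score(consensus, X, y, cv=5)  # five-fold cross-validation
print("CV accuracy:", scores.round(3))
```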
NASA Astrophysics Data System (ADS)
Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei
2017-02-01
Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of whom were randomly selected as the "derivation cohort" to develop the dose-prediction algorithm, while the remaining 20% constituted the "validation cohort" used to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied, and their performances were compared. Among all the machine learning models, RT performed best in both the derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
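A minimal sketch of the derivation/validation design described above, comparing a multiple linear regression against a regression tree on an 80/20 split; the covariates and dose values are synthetic placeholders, not the pharmacogenetic data of the study.

```python
# Minimal sketch: MLR vs. regression tree on a derivation/validation split.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(1045, 8))                     # mock clinical/genetic covariates
y = 2.0 + X[:, 0] - 0.5 * np.abs(X[:, 1]) + rng.normal(scale=0.3, size=1045)

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("MLR", LinearRegression()),
                    ("RT", DecisionTreeRegressor(max_depth=4, random_state=0))]:
    model.fit(X_dev, y_dev)
    print(name, "R^2 derivation:", round(model.score(X_dev, y_dev), 2),
          "validation:", round(model.score(X_val, y_val), 2))
```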
Acoustic and Lexical Representations for Affect Prediction in Spontaneous Conversations.
Cao, Houwei; Savran, Arman; Verma, Ragini; Nenkova, Ani
2015-01-01
In this article, we investigate which representations of acoustics and word usage are most suitable for predicting dimensions of affect (arousal, valence, power, and expectancy) in spontaneous interactions. Our experiments are based on the AVEC 2012 challenge dataset. For lexical representations, we compare corpus-independent features based on psychological word norms of emotional dimensions, as well as corpus-dependent representations. We find that a corpus-dependent bag-of-words approach using mutual information between words and emotion dimensions is by far the best representation. For the analysis of acoustics, we zero in on the question of granularity. We confirm on our corpus that utterance-level features are more predictive than word-level features. Further, we study more detailed representations in which the utterance is divided into regions of interest (ROI), each with a separate representation. We introduce two ROI representations, which significantly outperform less informed approaches. In addition, we show that acoustic models of emotion can be improved considerably by taking into account annotator agreement and training the model on a smaller but more reliable dataset. Finally, we discuss the potential for improving prediction by combining the lexical and acoustic modalities. Simple fusion methods do not lead to consistent improvements over lexical classifiers alone, but do improve over acoustic models.
Flight-Test Evaluation of Flutter-Prediction Methods
NASA Technical Reports Server (NTRS)
Lind, Rick; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
Artificial neural network modelling of uncertainty in gamma-ray spectrometry
NASA Astrophysics Data System (ADS)
Dragović, S.; Onjia, A.; Stanković, S.; Aničin, I.; Bačić, G.
2005-03-01
An artificial neural network (ANN) model for the prediction of measuring uncertainties in gamma-ray spectrometry was developed and optimized. A three-layer feed-forward ANN with a back-propagation learning algorithm was used to model uncertainties of measurement of activity levels of eight radionuclides (226Ra, 238U, 235U, 40K, 232Th, 134Cs, 137Cs and 7Be) in soil samples as a function of measurement time. It was shown that the neural network provides useful data even from small experimental databases. The performance of the optimized neural network was found to be very good, with correlation coefficients (R2) between measured and predicted uncertainties ranging from 0.9050 to 0.9915. The correlation coefficients did not significantly deteriorate when the network was tested on samples with greatly different uranium-to-thorium (238U/232Th) ratios. The differences between measured and predicted uncertainties were not influenced by the absolute values of uncertainties of measured radionuclide activities. Once the ANN is trained, it could be employed in analyzing soil samples regardless of the 238U/232Th ratio. It was concluded that a considerable saving in time could be obtained by using the trained neural network model to predict the measurement times needed to attain the desired statistical accuracy.
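A minimal sketch of the mapping learned by such a network: a small feed-forward regressor trained to predict relative uncertainty from measurement time. The data are simulated from counting statistics (relative uncertainty falling roughly as 1/sqrt(time)); this stands in for, and is not, the study's spectrometry data.

```python
# Minimal sketch: feed-forward network mapping measurement time to a
# relative uncertainty, trained on simulated counting-statistics data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
t = rng.uniform(0.5, 24.0, size=200)[:, None]            # measurement time (h)
u = 1.0 / np.sqrt(t[:, 0]) * (1 + rng.normal(scale=0.05, size=200))  # rel. uncertainty

net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="adam", max_iter=5000, random_state=0)
net.fit(t, u)
print("predicted uncertainty at 2 h and 8 h:", net.predict([[2.0], [8.0]]).round(3))
```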
NASA Astrophysics Data System (ADS)
Lin, Xianke; Lu, Wei
2017-07-01
This paper proposes a model that enables consideration of the realistic anisotropic environment surrounding an active material particle by incorporating both diffusion and migration of lithium ions and electrons in the particle. This model makes it possible to quantitatively evaluate effects such as fracture on capacity degradation. In contrast, the conventional model assumes isotropic environment and only considers diffusion in the active particle, which cannot capture the effect of fracture since it would predict results contradictory to experimental observations. With the developed model we have investigated the effects of active material electronic conductivity, particle size, and State of Charge (SOC) swing window when fracture exists. The study shows that the low electronic conductivity of active material has a significant impact on the lithium ion pattern. Fracture increases the resistance for electron transport and therefore reduces lithium intercalation/deintercalation. Particle size plays an important role in lithium ion transport. Smaller particle size is preferable for mitigating capacity loss when fracture happens. The study also shows that operating at high SOC reduces the impact of fracture.
On the prediction of the Free Core Nutation
NASA Astrophysics Data System (ADS)
Belda Palazón, Santiago; Ferrándiz, José M.; Heinkelmann, Robert; Nilsson, Tobias; Schuh, Harald; Modiri, Sadegh
2017-04-01
Consideration of the Free Core Nutation (FCN) is required for improved modelling of the Celestial Pole Offsets (CPO), since the FCN is the major source of inaccuracy or unexplained time variability with respect to the current IAU2000 nutation theory. The FCN is excited by various geophysical sources and thus cannot be known until it is inferred from observations. However, given that the variations of the FCN signal are slow and seldom abrupt, we examine whether the availability of new FCN empirical models (i.e., Malkin 2007; Krásná et al. 2013; Belda et al. 2016) can be exploited to make reasonably accurate predictions of the FCN signal before observing it. In this work we study CPO predictions for the FCN model provided by Belda et al. 2016, in which the amplitude coefficients were estimated using a sliding window with a width of 400 days and a minimal displacement between subsequent fits (one-day steps). Our results exhibit two significant features: (1) the prediction of the FCN signal can be made on the basis of its prior amplitudes with a mean error of about 30 microarcseconds per year, with an apparent linear trend; and (2) the weighted root mean square (wrms) of the differences between the CPO produced by the IERS (International Earth Rotation and Reference Systems Service) and our predicted FCN exhibits a slowly growing exponential pattern, with a wrms close to 120 microarcseconds over several months. Therefore a substantial improvement with respect to the CPO operational predictions of the IERS Rapid Service/Prediction Centre can be achieved.
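The sliding-window amplitude estimation can be sketched as a least-squares fit of sine/cosine coefficients for a fixed-period oscillation, with the latest window's fit extrapolated forward as the prediction. The ~430-day period and the synthetic CPO series below are illustrative assumptions, not the values of the cited models.

```python
# Minimal sketch: fit FCN-like amplitudes in a 400-day window by least
# squares and extrapolate the oscillation forward. Synthetic data only.
import numpy as np

P = 430.2                                      # assumed FCN period (days)
t = np.arange(0, 2000.0)                       # daily epochs
rng = np.random.default_rng(11)
cpo = 0.1 * np.cos(2 * np.pi * t / P + 0.4) + rng.normal(scale=0.02, size=t.size)

def fit_window(t_win, x_win):
    """Least-squares (A, B) for x ~ A cos(2 pi t / P) + B sin(2 pi t / P)."""
    G = np.column_stack([np.cos(2 * np.pi * t_win / P),
                         np.sin(2 * np.pi * t_win / P)])
    coef, *_ = np.linalg.lstsq(G, x_win, rcond=None)
    return coef

A, B = fit_window(t[-400:], cpo[-400:])        # last 400-day window
t_future = t[-1] + np.arange(1, 181)           # predict 180 days ahead
prediction = A * np.cos(2 * np.pi * t_future / P) + B * np.sin(2 * np.pi * t_future / P)
print("predicted CPO at +30 d:", round(float(prediction[29]), 4))
```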
A system structure for predictive relations in penetration mechanics
NASA Astrophysics Data System (ADS)
Korjack, Thomas A.
1992-02-01
The availability of a software system yielding quick numerical models to predict ballistic behavior is a requisite for any research laboratory engaged in the study of material behavior. Rapid prototyping for terminal impact, in particular, is enhanced by a system structure that directs a specific material and impact situation toward a specific predictive model. This is of particular importance when the ranges of validity are at stake and the pertinent constraints associated with the impact are unknown. Hence, a compilation of semiempirical predictive penetration relations for various physical phenomena has been organized into a data structure for the purpose of developing a knowledge-based, decision-aided expert system to predict the terminal ballistic behavior of projectiles and targets. The ranges of validity and constraints of operation of each model were examined and cast into a decision tree structure including target type, target material, projectile types, projectile materials, attack configuration, and performance or damage measures. This decision system implements many penetration relations, identifies formulas that match user-given conditions, and displays the predictive relation coincident with the match in addition to a numerical solution. The physical regimes under consideration encompass the hydrodynamic, transitional, and solid regimes; the targets are either semi-infinite or plates, and the projectiles include kinetic- and chemical-energy types. A preliminary database has been constructed to allow further development of inductive and deductive reasoning techniques applied to ballistic situations involving terminal mechanics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolokotroni, Maria; Bhuiyan, Saiful; Davies, Michael
2010-12-15
This paper describes a method for predicting air temperatures within the Urban Heat Island at discrete locations, based on input data from one meteorological station for the time the prediction is required and historic measured air temperatures within the city. It uses London as a case study to describe the method and its applications. The prediction model is based on Artificial Neural Network (ANN) modelling and is termed the London Site Specific Air Temperature (LSSAT) predictor. The temporal and spatial validity of the model was tested using data measured 8 years after the original dataset; it was found that site-specific hourly air temperature prediction provides acceptable accuracy and improves considerably for average monthly values. It is thus a reliable tool for use as part of the process of predicting heating and cooling loads for urban buildings. This is illustrated by the computation of Heating Degree Days (HDD) and Cooling Degree Hours (CDH) for a West-East transect within London. The described method could be used for any city for which historic hourly air temperatures are available for a number of locations; for example, air pollution measuring sites, common in many cities, typically measure air temperature on an hourly basis.
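The degree-day reduction mentioned above is straightforward to sketch: hourly temperatures (here synthetic, standing in for LSSAT predictions) are aggregated against assumed base temperatures of 15.5 degC for HDD and 22 degC for CDH, which are common UK conventions rather than values taken from the paper.

```python
# Minimal sketch: Heating Degree Days and Cooling Degree Hours from an
# hourly air-temperature series. Base temperatures are assumptions.
import numpy as np

rng = np.random.default_rng(2)
hours = 24 * 365
temps = 11 + 8 * np.sin(2 * np.pi * np.arange(hours) / hours) \
        + rng.normal(scale=2.0, size=hours)          # mock hourly series (degC)

daily_mean = temps.reshape(365, 24).mean(axis=1)
hdd = np.clip(15.5 - daily_mean, 0, None).sum()      # heating degree days
cdh = np.clip(temps - 22.0, 0, None).sum()           # cooling degree hours
print(f"HDD = {hdd:.0f} degC-day, CDH = {cdh:.0f} degC-h")
```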
Bayesian modeling of flexible cognitive control
Jiang, Jiefeng; Heller, Katherine; Egner, Tobias
2014-01-01
“Cognitive control” describes endogenous guidance of behavior in situations where routine stimulus-response associations are suboptimal for achieving a desired goal. The computational and neural mechanisms underlying this capacity remain poorly understood. We examine recent advances stemming from the application of a Bayesian learner perspective that provides optimal prediction for control processes. In reviewing the application of Bayesian models to cognitive control, we note that an important limitation in current models is the lack of a plausible mechanism for the flexible adjustment of control over conflict levels changing at varying temporal scales. We then show that flexible cognitive control can be achieved by a Bayesian model with a volatility-driven learning mechanism that dynamically modulates the relative dependence on recent and remote experiences in its prediction of future control demand. We conclude that the emergent Bayesian perspective on computational mechanisms of cognitive control holds considerable promise, especially if future studies can identify neural substrates of the variables encoded by these models, and determine the nature (Bayesian or otherwise) of their neural implementation.
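The volatility-driven mechanism can be illustrated with a deliberately reduced sketch (not the authors' Bayesian model): a point estimate of control demand is updated with a learning rate that grows with an online volatility estimate, so recent trials dominate when the environment is changing and remote history dominates when it is stable.

```python
# Reduced illustration of volatility-modulated learning; all constants
# are arbitrary choices for the sketch, not fitted model parameters.
import numpy as np

rng = np.random.default_rng(10)
demand = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])  # abrupt shift
demand = np.clip(demand + rng.normal(scale=0.05, size=200), 0, 1)

pred, vol = 0.5, 0.1
for d in demand:
    err = d - pred
    vol += 0.05 * (abs(err) - vol)   # volatility tracks recent surprise
    alpha = vol / (vol + 0.1)        # higher volatility -> faster learning
    pred += alpha * err              # update predicted control demand
print(f"final prediction: {pred:.2f}, final volatility: {vol:.3f}")
```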
NASA Technical Reports Server (NTRS)
Graham, John B., Jr.
1958-01-01
Heat-transfer and pressure measurements were obtained from a flight test of a 1/18-scale model of the Titan intercontinental ballistic missile up to a Mach number of 3.86 and a Reynolds number per foot of 23.5 × 10^6, and are compared with the data of two previously tested 1/18-scale models. Boundary-layer transition was observed on the nose of the model. Van Driest's theory predicted heat-transfer coefficients reasonably well for fully laminar flow, but its predictions for turbulent flow were considerably higher than the measurements when the skin was being heated. Comparison with the flight tests of two similar models shows fair repeatability of the measurements for fully laminar or turbulent flow.
Regularization Paths for Conditional Logistic Regression: The clogitL1 Package
Reid, Stephen; Tibshirani, Rob
2014-01-01
We apply the cyclic coordinate descent algorithm of Friedman, Hastie, and Tibshirani (2010) to the fitting of a conditional logistic regression model with lasso (ℓ1) and elastic net penalties. The sequential strong rules of Tibshirani, Bien, Hastie, Friedman, Taylor, Simon, and Tibshirani (2012) are also used in the algorithm and it is shown that these offer a considerable speed up over the standard coordinate descent algorithm with warm starts. Once implemented, the algorithm is used in simulation studies to compare the variable selection and prediction performance of the conditional logistic regression model against that of its unconditional (standard) counterpart. We find that the conditional model performs admirably on datasets drawn from a suitable conditional distribution, outperforming its unconditional counterpart at variable selection. The conditional model is also fit to a small real world dataset, demonstrating how we obtain regularization paths for the parameters of the model and how we apply cross validation for this method where natural unconditional prediction rules are hard to come by.
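The building block of the cyclic coordinate descent referenced above is a soft-thresholding update applied to one coefficient at a time. The sketch below shows that update for a generic lasso-penalized least-squares problem; it is an illustration of the algorithmic idea, not the conditional-likelihood updates implemented in clogitL1.

```python
# Generic lasso via cyclic coordinate descent with soft-thresholding,
# for the objective (1/2n)||y - Xb||^2 + lam * ||b||_1.
import numpy as np

def soft_threshold(z, gamma):
    """S(z, gamma) = sign(z) * max(|z| - gamma, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, n_sweeps=100):
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual
            b[j] = soft_threshold(X[:, j] @ r / n, lam) / col_sq[j]
    return b

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 10))
y = X[:, 0] * 2.0 + rng.normal(size=100)
print(lasso_cd(X, y, lam=0.1).round(2))             # only the first coefficient large
```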
A Shock-Refracted Acoustic Wave Model for the Prediction of Screech Amplitude in Supersonic Jets
NASA Technical Reports Server (NTRS)
Kandula, Max
2007-01-01
A physical model is proposed for the estimation of the screech amplitude in underexpanded supersonic jets. The model is based on the hypothesis that the interaction of a plane acoustic wave with stationary shock waves provides amplification of the transmitted acoustic wave upon traversing the shock. Powell's discrete source model for screech, incorporating a stationary array of acoustic monopoles, is extended to accommodate variable source strength. The proposed model reveals that the acoustic sources are of increasing strength with downstream distance. It is shown that the screech amplitude increases with the fully expanded jet Mach number. Comparisons of predicted screech amplitude with available test data show satisfactory agreement. The effect of variable source strength on the directivity of the fundamental (first harmonic, lowest frequency mode) and the second harmonic (overtone) is found to be unimportant with regard to the principal lobe (main or major lobe) of considerable relative strength, and is appreciable only in the secondary or minor lobes (of relatively weaker strength).
Underwater noise modelling for environmental impact assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farcas, Adrian; Thompson, Paul M.; Merchant, Nathan D., E-mail: nathan.merchant@cefas.co.uk
Assessment of underwater noise is increasingly required by regulators of development projects in marine and freshwater habitats, and noise pollution can be a constraining factor in the consenting process. Noise levels arising from the proposed activity are modelled and the potential impact on species of interest within the affected area is then evaluated. Although there is considerable uncertainty in the relationship between noise levels and impacts on aquatic species, the science underlying noise modelling is well understood. Nevertheless, many environmental impact assessments (EIAs) do not reflect best practice, and stakeholders and decision makers in the EIA process are often unfamiliar with the concepts and terminology that are integral to interpreting noise exposure predictions. In this paper, we review the process of underwater noise modelling and explore the factors affecting predictions of noise exposure. Finally, we illustrate the consequences of errors and uncertainties in noise modelling, and discuss future research needs to reduce uncertainty in noise assessments.
Prediction of ground effects on aircraft noise
NASA Technical Reports Server (NTRS)
Pao, S. P.; Wenzel, A. R.; Oncley, P. B.
1978-01-01
A unified method is recommended for predicting ground effects on noise. This method may be used in flyover noise predictions and in correcting static test-stand data to free-field conditions. The recommendation is based on a review of recent progress in the theory of ground effects and of the experimental evidence which supports this theory. It is shown that a surface wave must sometimes be included in the prediction method. Prediction equations are collected conveniently in a single section of the paper. Methods of measuring ground impedance and the resulting ground-impedance data are also reviewed, because the recommended method is based on a locally reactive impedance boundary model. Current practice for estimating ground effects is reviewed, and consideration is given to practical problems in applying the recommended method. These problems include finite frequency-band filters, finite source dimensions, wind and temperature gradients, and signal incoherence.
Future distribution of tundra refugia in northern Alaska
Hope, Andrew G.; Waltari, Eric; Payer, David C.; Cook, Joseph A.; Talbot, Sandra L.
2013-01-01
Climate change in the Arctic is a growing concern for natural resource conservation and management as a result of accelerated warming and associated shifts in the distribution and abundance of northern species. We introduce a predictive framework for assessing the future extent of Arctic tundra and boreal biomes in northern Alaska. We use geo-referenced museum specimens to predict the velocity of distributional change into the next century and compare predicted tundra refugial areas with current land-use. The reliability of predicted distributions, including differences between fundamental and realized niches, for two groups of species is strengthened by fossils and genetic signatures of demographic shifts. Evolutionary responses to environmental change through the late Quaternary are generally consistent with past distribution models. Predicted future refugia overlap managed areas and indicate potential hotspots for tundra diversity. To effectively assess future refugia, variable responses among closely related species to climate change warrants careful consideration of both evolutionary and ecological histories.
Considerations for Reporting Finite Element Analysis Studies in Biomechanics
Erdemir, Ahmet; Guess, Trent M.; Halloran, Jason; Tadepalli, Srinivas C.; Morrison, Tina M.
2012-01-01
Simulation-based medicine and the development of complex computer models of biological structures is becoming ubiquitous for advancing biomedical engineering and clinical research. Finite element analysis (FEA) has been widely used in the last few decades to understand and predict biomechanical phenomena. Modeling and simulation approaches in biomechanics are highly interdisciplinary, involving novice and skilled developers in all areas of biomedical engineering and biology. While recent advances in model development and simulation platforms offer a wide range of tools to investigators, the decision making process during modeling and simulation has become more opaque. Hence, reliability of such models used for medical decision making and for driving multiscale analysis comes into question. Establishing guidelines for model development and dissemination is a daunting task, particularly with the complex and convoluted models used in FEA. Nonetheless, if better reporting can be established, researchers will have a better understanding of a model’s value and the potential for reusability through sharing will be bolstered. Thus, the goal of this document is to identify resources and considerate reporting parameters for FEA studies in biomechanics. These entail various levels of reporting parameters for model identification, model structure, simulation structure, verification, validation, and availability. While we recognize that it may not be possible to provide and detail all of the reporting considerations presented, it is possible to establish a level of confidence with selective use of these parameters. More detailed reporting, however, can establish an explicit outline of the decision-making process in simulation-based analysis for enhanced reproducibility, reusability, and sharing.
Numerical simulation of multi-directional random wave transformation in a yacht port
NASA Astrophysics Data System (ADS)
Ji, Qiaoling; Dong, Sheng; Zhao, Xizeng; Zhang, Guowei
2012-09-01
This paper extends a prediction model by Mase for multi-directional random wave transformation, based on an energy balance equation, with consideration of wave shoaling, refraction, diffraction, reflection and breaking. This numerical model is improved by 1) introducing Wen's frequency spectrum and Mitsuyasu's directional function, which are more suitable for the coastal area of China; 2) considering energy dissipation caused by bottom friction, which ensures more accurate results for large-scale and shallow water areas; and 3) taking into account a non-linear dispersion relation. Predictions using the extended wave model are carried out to study the feasibility of constructing the Ai Hua yacht port in Qingdao, China, with a comparison between two port layouts in design. Wave fields inside the port for different incident wave directions, water levels and return periods are simulated, and two kinds of parameters are then calculated to evaluate the wave conditions for the two layouts. Analyses show that Layout I is better than Layout II. Calculation results also show that the harbor will be calm for different wave directions under the design water level. On the contrary, the wave conditions do not wholly meet the requirements of a yacht port for ship berthing under the extreme water level. For safety, the elevation of the breakwater might need to be increased appropriately to prevent wave overtopping under such a water level. The extended numerical simulation model may provide an effective approach to computing wave heights in a harbor.
Bayesian analysis of input uncertainty in hydrological modeling: 2. Application
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Kuczera, George; Franks, Stewart W.
2006-03-01
The Bayesian total error analysis (BATEA) methodology directly addresses both input and output errors in hydrological modeling, requiring the modeler to make explicit, rather than implicit, assumptions about the likely extent of data uncertainty. This study considers a BATEA assessment of two North American catchments: (1) the French Broad River and (2) the Potomac basins. It assesses the performance of the conceptual Variable Infiltration Capacity (VIC) model with and without accounting for input (precipitation) uncertainty. The results show the considerable effects of precipitation errors on the predicted hydrographs (especially the prediction limits) and on the calibrated parameters. In addition, the performance of BATEA in the presence of severe model errors is analyzed. While BATEA allows a very direct treatment of input uncertainty and yields some limited insight into model errors, it requires the specification of valid error models, which are currently poorly understood and require further work. Moreover, it leads to computationally challenging, high-dimensional problems. For some types of models, including the VIC implemented using robust numerical methods, the computational cost of BATEA can be reduced using Newton-type methods.
The Impact of Truth Surrogate Variance on Quality Assessment/Assurance in Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2016-01-01
Minimum data volume requirements for wind tunnel testing are reviewed and shown to depend on error tolerance, response model complexity, random error variance in the measurement environment, and maximum acceptable levels of inference error risk. Distinctions are made between such related concepts as quality assurance and quality assessment in response surface modeling, as well as between precision and accuracy. Earlier research on the scaling of wind tunnel tests is extended to account for variance in the truth surrogates used at confirmation sites in the design space to validate proposed response models. A model adequacy metric is presented that represents the fraction of the design space within which model predictions can be expected to satisfy prescribed quality specifications. The impact of inference error on the assessment of response model residuals is reviewed. The number of sites where reasonably well-fitted response models actually predict inadequately is shown to be considerably less than the number of sites where residuals are out of tolerance. The significance of such inference error effects on common response model assessment strategies is examined.
NASA Astrophysics Data System (ADS)
Sadeghi, Javad; Khajehdezfuly, Amin; Esmaeili, Morteza; Poorveis, Davood
2016-07-01
Rail irregularity is one of the most significant load amplification factors in railway track systems. In this paper, the capability and effectiveness of the two main railway slab track modeling techniques in predicting the influence of rail irregularities on the Wheel/Rail Dynamic Force (WRDF) were investigated. For this purpose, 2D and 3D numerical models of vehicle/discontinuous slab track interaction were developed. The numerical models were validated by comparing their results with those obtained from comprehensive field tests carried out in this research. The effects of harmonic and non-harmonic rail irregularities on the WRDF obtained from the 3D and 2D models were investigated. The results indicate that the difference between the WRDF obtained from the 2D and 3D models is negligible when the irregularities on the right and left rails are the same. However, as the difference between the irregularities of the right and left rails increases, the results obtained from the 2D and 3D models become considerably different. The results indicate that 2D models have limitations in predicting WRDF; that is, a 3D modeling technique is required to predict WRDF when there is uneven or non-harmonic irregularity with large amplitudes. The size and extent of the influence of rail irregularities on wheel/rail forces are discussed, providing a better understanding of rail-wheel contact behavior and the techniques required for predicting WRDF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkvord, Sigurd; Flatmark, Kjersti; Department of Cancer and Surgery, Norwegian Radium Hospital, Oslo University Hospital
2010-10-01
Purpose: Tumor response of rectal cancer to preoperative chemoradiotherapy (CRT) varies considerably. In experimental tumor models and clinical radiotherapy, activity of particular subsets of kinase signaling pathways seems to predict radiation response. This study aimed to determine whether tumor kinase activity profiles might predict tumor response to preoperative CRT in locally advanced rectal cancer (LARC). Methods and Materials: Sixty-seven LARC patients were treated with a CRT regimen consisting of radiotherapy, fluorouracil, and, where possible, oxaliplatin. Pretreatment tumor biopsy specimens were analyzed using microarrays with kinase substrates, and the resulting substrate phosphorylation patterns were correlated with tumor response to preoperative treatment as assessed by histomorphologic tumor regression grade (TRG). A predictive model for TRG scores from phosphosubstrate signatures was obtained by partial-least-squares discriminant analysis. Prediction performance was evaluated by leave-one-out cross-validation and use of an independent test set. Results: In the patient population, 73% and 15% were scored as good responders (TRG 1-2) or intermediate responders (TRG 3), whereas 12% were assessed as poor responders (TRG 4-5). In a subset of 7 poor responders and 12 good responders, treatment outcome was correctly predicted for 95%. Application of the prediction model on the remaining patient samples resulted in correct prediction for 85%. Phosphosubstrate signatures generated by poor-responding tumors indicated high kinase activity, which was inhibited by the kinase inhibitor sunitinib, and several discriminating phosphosubstrates represented proteins derived from signaling pathways implicated in radioresistance. Conclusions: Multiplex kinase activity profiling may identify functional biomarkers predictive of tumor response to preoperative CRT in LARC.
Amirabadizadeh, Alireza; Nezami, Hossein; Vaughn, Michael G; Nakhaee, Samaneh; Mehrpour, Omid
2018-05-12
Substance abuse exacts considerable social and health care burdens throughout the world. The aim of this study was to create a prediction model to better identify risk factors for drug use. A prospective cross-sectional study was conducted in South Khorasan Province, Iran. Of the total of 678 eligible subjects, 70% (n = 474) were randomly selected to provide a training set for constructing decision tree and multiple logistic regression (MLR) models. The remaining 30% (n = 204) were employed as a holdout sample to test the performance of the decision tree and MLR models. The predictive performance of the different models was analyzed with the receiver operating characteristic (ROC) curve using the testing set. Independent variables were selected from demographic characteristics and history of drug use. For the decision tree model, the sensitivity and specificity for identifying people at risk for drug abuse were 66% and 75%, respectively, while the MLR model was somewhat less effective at 60% and 73%. Key independent variables in the analyses included first substance experience, age at first drug use, age, place of residence, history of cigarette use, and occupational and marital status. While the study findings are exploratory and lack generalizability, they do suggest that the decision tree model holds promise as an effective classification approach for identifying risk factors for drug use. Consistent with prior research in Western contexts, age at drug use initiation was a critical factor in predicting a substance use disorder.
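A minimal sketch of the 70/30 design described above: fit a decision tree on the training set, then report sensitivity and specificity on the holdout. The features below are synthetic stand-ins for the demographic and drug-history variables.

```python
# Minimal sketch: decision-tree classification with a 70/30 holdout and
# sensitivity/specificity from the confusion matrix. Mock data only.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(678, 6))                      # mock risk-factor features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=678) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
tn, fp, fn, tp = confusion_matrix(y_te, tree.predict(X_te)).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```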
Using Machine Learning in Adversarial Environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren Leon Davis
Intrusion/anomaly detection systems are among the first lines of cyber defense. Commonly, they either use signatures or machine learning (ML) to identify threats, but fail to account for sophisticated attackers trying to circumvent them. We propose to embed machine learning within a game theoretic framework that performs adversarial modeling, develops methods for optimizing operational response based on ML, and integrates the resulting optimization codebase into the existing ML infrastructure developed by the Hybrid LDRD. Our approach addresses three key shortcomings of ML in adversarial settings: 1) resulting classifiers are typically deterministic and, therefore, easy to reverse engineer; 2) ML approaches only address the prediction problem, but do not prescribe how one should operationalize predictions, nor account for operational costs and constraints; and 3) ML approaches do not model attackers' responses and can be circumvented by sophisticated adversaries. The principal novelty of our approach is to construct an optimization framework that blends ML, operational considerations, and a model predicting attackers' reactions, with the goal of computing optimal moving target defense. One important challenge is to construct a model of an adversary that is tractable, yet realistic. We aim to advance the science of attacker modeling by considering game-theoretic methods, and by engaging experimental subjects with red teaming experience in trying to actively circumvent an intrusion detection system, and learning a predictive model of such circumvention activities. In addition, we will generate metrics to test that a particular model of an adversary is consistent with available data.
Comparative Study of Different Methods for the Prediction of Drug-Polymer Solubility.
Knopp, Matthias Manne; Tajber, Lidia; Tian, Yiwei; Olesen, Niels Erik; Jones, David S; Kozyra, Agnieszka; Löbmann, Korbinian; Paluch, Krzysztof; Brennan, Claire Marie; Holm, René; Healy, Anne Marie; Andrews, Gavin P; Rades, Thomas
2015-09-08
In this study, a comparison of different methods to predict drug-polymer solubility was carried out on binary systems consisting of five model drugs (paracetamol, chloramphenicol, celecoxib, indomethacin, and felodipine) and polyvinylpyrrolidone/vinyl acetate copolymers (PVP/VA) of different monomer weight ratios. The drug-polymer solubility at 25 °C was predicted using the Flory-Huggins model, from data obtained at elevated temperature using thermal analysis methods based on the recrystallization of a supersaturated amorphous solid dispersion and two variations of the melting point depression method. These predictions were compared with the solubility in the low molecular weight liquid analogues of the PVP/VA copolymer (N-vinylpyrrolidone and vinyl acetate). The predicted solubilities at 25 °C varied considerably depending on the method used. However, the three thermal analysis methods ranked the predicted solubilities in the same order, except for the felodipine-PVP system. Furthermore, the magnitude of the predicted solubilities from the recrystallization method and the melting point depression method correlated well with the estimates based on the solubility in the liquid analogues, which suggests that this method can be used as an initial screening tool if a liquid analogue is available. The findings of this comparative study provide general guidance for selecting the most suitable method(s) for screening drug-polymer solubility.
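For reference, the melting point depression relation conventionally used with the Flory-Huggins model in such predictions takes the following form (standard textbook form with conventional symbols, not an equation quoted from the paper):

```latex
% Flory-Huggins melting-point depression, conventional form.
\[
\frac{1}{T_m^{\mathrm{mix}}}-\frac{1}{T_m^{\mathrm{pure}}}
= -\frac{R}{\Delta H_{\mathrm{fus}}}
\left[\ln \phi_d + \left(1-\frac{1}{m}\right)\phi_p + \chi\,\phi_p^2\right]
\]
% T_m^mix, T_m^pure: drug melting points with and without polymer;
% Delta H_fus: heat of fusion of the crystalline drug; phi_d, phi_p: volume
% fractions of drug and polymer; m: ratio of polymer to drug molar volumes;
% chi: Flory-Huggins interaction parameter fitted from the depression data.
```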
Tougas-Tellier, Marie-Andrée; Morin, Jean; Hatin, Daniel; Lavoie, Claude
2015-01-01
Climate change will likely affect flooding regimes, which have a large influence on the functioning of freshwater riparian wetlands. Low water levels predicted for several fluvial systems make wetlands especially vulnerable to the spread of invaders, such as the common reed (Phragmites australis), one of the most invasive species in North America. We developed a model to map the distribution of potential germination grounds of the common reed in freshwater wetlands of the St. Lawrence River (Québec, Canada) under current climate conditions and used this model to predict their future distribution under two climate change scenarios simulated for 2050. We gathered historical and recent (remote sensing) data on the distribution of common reed stands for model calibration and validation purposes, then determined the parameters controlling the species establishment by seed. A two-dimensional model and the identified parameters were used to simulate the current (2010) and future (2050) distribution of germination grounds. Common reed stands are not widespread along the St. Lawrence River (212 ha), but our model suggests that current climate conditions are already conducive to considerable further expansion (>16,000 ha). Climate change may also exacerbate the expansion, particularly if river water levels drop, which will expose large bare areas propitious to seed germination. This phenomenon may be particularly important in one sector of the river, where existing common reed stands could increase their areas by a factor of 100, potentially creating the most extensive reedbed complex in North America. After colonizing salt and brackish-water marshes, the common reed could considerably expand into the freshwater marshes of North America, which cover several million hectares. The effects of common reed expansion on biodiversity are difficult to predict, but likely to be highly deleterious given the competitiveness of the invader and the biological richness of freshwater wetlands.
Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.
Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J
2016-01-01
Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are not fully captured by remotely sensed data. As such, the use of remotely sensed data to model biodiversity represents a compromise between model performance and data availability.
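The BRT modelling above can be illustrated with a short sketch: a gradient-boosted tree ensemble fit to a richness metric, with per-predictor relative influence. The habitat predictors here are synthetic placeholders, and scikit-learn's GradientBoostingRegressor stands in for the BRT implementation used in the study.

```python
# Minimal sketch: boosted regression trees for a richness metric with
# relative predictor influence. Mock habitat predictors only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 5))                 # mock habitat predictors
richness = np.exp(0.5 * X[:, 0]) + X[:, 1] ** 2 + rng.normal(scale=0.5, size=400)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(X, richness)
print("relative influence:", brt.feature_importances_.round(2))
```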
Cloud Optical Depths and Liquid Water Paths at the NSA CART
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doran, J C.; Barnard, James C.; Zhong, Shiyuan
2000-03-14
Cloud optical depths have been measured using multifilter rotating shadowband radiometers (MFRSRs) at Barrow and Atqasuk, and liquid water paths have been measured at Barrow using a microwave radiometer (MWR) during the warm season (June-September) in 1999. Comparisons have been made between these quantities and the corresponding ones determined from the ECMWF GCM. Hour-by-hour comparisons of cloud optical depths show considerable scatter. The scatter is reduced, but is still substantial, when the averaging period is increased to "daily" averages, i.e., the time period each day over which the MFRSR can make measurements. This period varied between 18 hours in June and 6 hours in September. Preliminary results indicate that, for measured cloud optical depths less than approximately 25, the ECMWF has a low bias in its predictions, consistent with a low bias in predicted liquid water path. Based on a more limited set of data, the optical depths at Atqasuk were found to be generally lower than those at Barrow, a trend at least qualitatively captured by the ECMWF model. Analyses to identify the cause of the biases and the considerable scatter in the predictions are continuing.
Brik, Mikhail G; Suchocki, Andrzej; Kamińska, Agata
2014-05-19
A thorough consideration of the relation between the lattice parameters of 185 binary and ternary spinel compounds, on one side, and ionic radii and electronegativities of the constituting ions, on the other side, allowed for establishing a simple empirical model and finding its linear equation, which links together the above-mentioned quantities. The derived equation gives good agreement between the experimental and modeled values of the lattice parameters in the considered group of spinels, with an average relative error of about 1% only. The proposed model was improved further by separate consideration of several groups of spinels, depending on the nature of the anion (oxygen, sulfur, selenium/tellurium, nitrogen). The developed approach can be efficiently used for prediction of lattice constants for new isostructural materials. In particular, the lattice constants of new hypothetic spinels ZnRE2O4, CdRE2S4, CdRE2Se4 (RE = rare earth elements) are predicted in the present Article. In addition, the upper and lower limits for the variation of the ionic radii, electronegativities, and their certain combinations were established, which can be considered as stability criteria for the spinel compounds. The findings of the present Article offer a systematic overview of the structural properties of spinels and can serve as helpful guides for synthesis of new spinel compounds.
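The abstract does not reproduce the derived linear equation, but a hedged sketch of the fitting step it describes (ordinary least squares of the lattice constant against ionic radii and electronegativities) looks like this. All numerical values are illustrative placeholders, not the paper's 185-compound data set.

```python
import numpy as np

# columns: ionic radius of A and B cations (Angstrom), electronegativity of A and B
# (illustrative placeholder rows, not the paper's data)
X = np.array([[0.74, 0.535, 1.65, 1.61],
              [0.72, 0.645, 1.31, 1.83],
              [0.78, 0.620, 1.55, 1.88],
              [0.97, 0.580, 1.69, 1.61],
              [0.69, 0.600, 1.90, 1.66]])
a_exp = np.array([8.09, 8.08, 8.15, 8.33, 8.04])   # lattice constants, Angstrom

A = np.column_stack([np.ones(len(X)), X])          # intercept + predictors
coef, *_ = np.linalg.lstsq(A, a_exp, rcond=None)   # linear empirical model
a_fit = A @ coef
print(f"mean relative error: {100 * np.abs((a_fit - a_exp) / a_exp).mean():.2f}%")
```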
DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Rice, C Keith; Abdelaziz, Omar
2015-01-01
This paper uses a well-regarded, hardware based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high stage compressor was modelled using a compressor map, and the low stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.
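As a rough illustration of what "modelled using a compressor map" means here, the sketch below evaluates a bivariate polynomial map of compressor power against evaporating and condensing temperatures. The 6-term form and all coefficients are assumptions for illustration; hardware-based models commonly use a 10-term AHRI-style fit.

```python
import numpy as np

def map_power(Te, Tc, c):
    """Bivariate polynomial compressor map; temperatures in degrees C."""
    return c[0] + c[1]*Te + c[2]*Tc + c[3]*Te**2 + c[4]*Te*Tc + c[5]*Tc**2

c = np.array([1.2e3, 8.0, 25.0, 0.15, 0.4, 0.3])   # illustrative coefficients
for Te in (-25.0, -10.0, 5.0):                     # cold-climate evaporating temps
    print(f"Te = {Te:6.1f} C, Tc = 45 C -> power ~ {map_power(Te, 45.0, c):6.0f} W")
```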
Neurocognitive Predictions of Performance.
1987-09-25
feedback (see Section IV.A). Many of these results, along with pilot analyses of intracerebral recordings using a primate model, cannot be explained by... are subject to considerable distortion by the intervening media of cerebrum, cerebrospinal fluid, skull, and scalp. The result is a smearing or... parietal component is consistent with evidence from primates and humans for neuronal firing in motor and somatosensory cortices prior to motor responses
HESS Opinions "Should we apply bias correction to global and regional climate model data?"
NASA Astrophysics Data System (ADS)
Ehret, U.; Zehe, E.; Wulfmeyer, V.; Warrach-Sagi, K.; Liebert, J.
2012-04-01
Despite considerable progress in recent years, output of both Global and Regional Circulation Models is still afflicted with biases to a degree that precludes its direct use, especially in climate change impact studies. This is well known, and to overcome this problem bias correction (BC), i.e. the correction of model output towards observations in a post-processing step for its subsequent application in climate change impact studies, has now become a standard procedure. In this paper we argue that bias correction, which has a considerable influence on the results of impact studies, is not a valid procedure in the way it is currently used: it impairs the advantages of Circulation Models, which are based on established physical laws, by altering spatiotemporal field consistency and relations among variables, and by violating conservation principles. Bias correction largely neglects feedback mechanisms, and it is unclear whether bias correction methods are time-invariant under climate change conditions. Applying bias correction increases agreement of Climate Model output with observations in hindcasts and hence narrows the uncertainty range of simulations and predictions without, however, providing a satisfactory physical justification. This is in most cases not transparent to the end user. We argue that this masks rather than reduces uncertainty, which may lead end users and decision makers to avoidable misjudgements. We present here a brief overview of state-of-the-art bias correction methods, discuss the related assumptions and implications, draw conclusions on the validity of bias correction and propose ways to cope with biased output of Circulation Models in the short term and to reduce the bias in the long term. The most promising strategy for improved future Global and Regional Circulation Model simulations is an increase in model resolution to the convection-permitting scale in combination with ensemble predictions based on sophisticated approaches for ensemble perturbation. With this article, we advocate communicating the entire uncertainty range associated with climate change predictions openly and hope to stimulate a lively discussion on bias correction among the atmospheric and hydrological community and end users of climate change impact studies.
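For context, the sketch below shows the kind of empirical quantile-mapping step the authors critique: model output is mapped through the model's own distribution onto observed quantiles. The gamma-distributed series are synthetic stand-ins for precipitation, not output of any particular model.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, 3000)          # observed daily precipitation (synthetic)
model_hist = rng.gamma(2.0, 2.5, 3000)   # biased model, historical run
model_fut = rng.gamma(2.2, 2.5, 3000)    # same model, scenario run

def quantile_map(x, ref_model, ref_obs):
    """Map values through the model's empirical CDF onto observed quantiles."""
    q = np.searchsorted(np.sort(ref_model), x) / len(ref_model)
    return np.quantile(ref_obs, np.clip(q, 0.0, 1.0))

corrected = quantile_map(model_fut, model_hist, obs)
print(f"raw {model_fut.mean():.2f}  corrected {corrected.mean():.2f}  "
      f"obs {obs.mean():.2f}")
```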
Terrestrial population models for ecological risk assessment: A state-of-the-art review
Emlen, J.M.
1989-01-01
Few attempts have been made to formulate models for predicting impacts of xenobiotic chemicals on wildlife populations. However, considerable effort has been invested in wildlife optimal exploitation models. Because death from intoxication has a similar effect on population dynamics as death by harvesting, these management models are applicable to ecological risk assessment. An underlying Leslie-matrix bookkeeping formulation is widely applicable to vertebrate wildlife populations. Unfortunately, however, the various submodels that track birth, death, and dispersal rates as functions of the physical, chemical, and biotic environment are by their nature almost inevitably highly species- and locale-specific. Short-term prediction of one-time chemical applications requires only information on mortality before and after contamination. In such cases a simple matrix formulation may be adequate for risk assessment. But generally, risk must be projected over periods of a generation or more. This precludes generic protocols for risk assessment and also the ready and inexpensive predictions of a chemical's influence on a given population. When designing and applying models for ecological risk assessment at the population level, the endpoints (output) of concern must be carefully and rigorously defined. The most easily accessible and appropriate endpoints are (1) pseudoextinction (the frequency or probability of a population falling below a prespecified density), and (2) temporal mean population density. Spatial and temporal extent of predicted changes must be clearly specified a priori to avoid apparent contradictions and confusion.
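A minimal sketch of the Leslie-matrix bookkeeping described above, with toxicant exposure crudely represented as an extra mortality factor on the projection and pseudoextinction frequency as the endpoint. All vital rates, the threshold, and the noise model are illustrative assumptions.

```python
import numpy as np

F = np.array([0.0, 1.2, 1.8])   # age-specific fecundities (illustrative)
S = np.array([0.5, 0.7])        # age-specific survival probabilities
L = np.zeros((3, 3))
L[0, :] = F                     # births into the first age class
L[1, 0], L[2, 1] = S            # survivors advance one age class

def pseudoextinction(n0, years, extra_mortality=0.0, n_reps=1000,
                     threshold=20.0, seed=0):
    """Frequency of the population falling below a prespecified density."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        n = n0.copy()
        for _ in range(years):
            # crude toxicant effect: uniform extra mortality on all rates,
            # plus lognormal environmental stochasticity
            n = (1.0 - extra_mortality) * (L @ n) * rng.lognormal(0.0, 0.1)
            if n.sum() < threshold:
                hits += 1
                break
    return hits / n_reps

n0 = np.array([50.0, 30.0, 20.0])
print(pseudoextinction(n0, years=20))                        # baseline
print(pseudoextinction(n0, years=20, extra_mortality=0.15))  # contaminated
```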
New higher-order Godunov code for modelling performance of two-stage light gas guns
NASA Technical Reports Server (NTRS)
Bogdanoff, D. W.; Miller, R. J.
1995-01-01
A new quasi-one-dimensional Godunov code for modeling two-stage light gas guns is described. The code is third-order accurate in space and second-order accurate in time. A very accurate Riemann solver is used. Friction and heat transfer to the tube wall for gases and dense media are modeled and a simple nonequilibrium turbulence model is used for gas flows. The code also models gunpowder burn in the first-stage breech. Realistic equations of state (EOS) are used for all media. The code was validated against exact solutions of Riemann's shock-tube problem, impact of dense media slabs at velocities up to 20 km/sec, flow through a supersonic convergent-divergent nozzle and burning of gunpowder in a closed bomb. Excellent validation results were obtained. The code was then used to predict the performance of two light gas guns (1.5 in. and 0.28 in.) in service at the Ames Research Center. The code predictions were compared with measured pressure histories in the powder chamber and pump tube and with measured piston and projectile velocities. Very good agreement between computational fluid dynamics (CFD) predictions and measurements was obtained. Actual powder-burn rates in the gun were found to be considerably higher (60-90 percent) than predicted by the manufacturer and the behavior of the piston upon yielding appears to differ greatly from that suggested by low-strain rate tests.
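The production code is quasi-one-dimensional, third-order, and uses realistic equations of state; the sketch below is only the first-order scalar skeleton of the same idea, a Godunov finite-volume update built on an exact Riemann solver, here for the inviscid Burgers equation.

```python
import numpy as np

def godunov_flux(uL, uR):
    """Exact Riemann-solution flux for u_t + (u^2/2)_x = 0."""
    f = lambda u: 0.5 * u * u
    return np.where(uL > uR,                                   # shock
                    np.where((uL + uR) / 2 > 0, f(uL), f(uR)),
                    np.where(uL > 0, f(uL),                    # rarefaction
                             np.where(uR < 0, f(uR), 0.0)))

nx, cfl = 200, 0.4
x = np.linspace(0.0, 1.0, nx)
u = np.where(x < 0.5, 1.0, 0.0)        # right-moving shock initial data
dx = x[1] - x[0]
t = 0.0
while t < 0.25:
    dt = cfl * dx / max(np.abs(u).max(), 1e-12)
    F = godunov_flux(u[:-1], u[1:])    # interface fluxes from Riemann problems
    u[1:-1] -= dt / dx * (F[1:] - F[:-1])
    t += dt
print(u[::20])                          # shock has moved to x ~ 0.625
```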
Williams, R D; Measures, R; Hicks, D M; Brasington, J
2016-08-01
Numerical morphological modeling of braided rivers, using a physics-based approach, is increasingly used as a technique to explore controls on river pattern and, from an applied perspective, to simulate the impact of channel modifications. This paper assesses a depth-averaged nonuniform sediment model (Delft3D) to predict the morphodynamics of a 2.5 km long reach of the braided Rees River, New Zealand, during a single high-flow event. Evaluation of model performance primarily focused upon using high-resolution Digital Elevation Models (DEMs) of Difference, derived from a fusion of terrestrial laser scanning and optical empirical bathymetric mapping, to compare observed and predicted patterns of erosion and deposition and reach-scale sediment budgets. For the calibrated model, this was supplemented with planform metrics (e.g., braiding intensity). Extensive sensitivity analysis of model functions and parameters was executed, including consideration of numerical scheme for bed load component calculations, hydraulics, bed composition, bed load transport and bed slope effects, bank erosion, and frequency of calculations. Total predicted volumes of erosion and deposition corresponded well to those observed. The difference between predicted and observed volumes of erosion was less than the factor of two that characterizes the accuracy of the Gaeuman et al. bed load transport formula. Grain size distributions were best represented using two φ intervals. For unsteady flows, results were sensitive to the morphological time scale factor. The approach of comparing observed and predicted morphological sediment budgets shows the value of using natural experiment data sets for model testing. Sensitivity results are transferable to guide Delft3D applications to other rivers.
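A minimal sketch of the morphological-budget comparison used for evaluation: erosion and deposition volumes summed from a DEM of Difference after masking changes below a level of detection. The grid, cell size, and threshold below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
cell_area = 0.5 * 0.5                           # m^2 per grid cell (illustrative)
dod = rng.normal(0.0, 0.05, (400, 600))         # post- minus pre-event elevation, m
lod = 0.05                                      # level of detection, m

dod = np.where(np.abs(dod) < lod, 0.0, dod)     # mask change below detection
deposition = dod[dod > 0].sum() * cell_area     # m^3
erosion = -dod[dod < 0].sum() * cell_area       # m^3
print(f"deposition {deposition:.0f} m^3, erosion {erosion:.0f} m^3, "
      f"net {deposition - erosion:+.0f} m^3")
```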
Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi
2016-01-01
Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362
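As a hedged illustration of the univariate genomic-prediction baseline, the sketch below fits ridge regression on marker genotypes (an RR-BLUP-style model) and scores accuracy on held-out lines; the paper's multivariate models additionally carry the aerial secondary traits. All data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
n_lines, n_markers = 300, 1000
M = rng.integers(0, 2, (n_lines, n_markers)).astype(float)   # marker matrix
u = rng.normal(0.0, 0.05, n_markers)                         # true marker effects
y = M @ u + rng.normal(0.0, 1.0, n_lines)                    # simulated grain yield

train, test = np.arange(200), np.arange(200, 300)
lam = 50.0                                                   # ridge penalty (assumed)
G = M[train]
beta = np.linalg.solve(G.T @ G + lam * np.eye(n_markers), G.T @ y[train])
acc = np.corrcoef(M[test] @ beta, y[test])[0, 1]
print("prediction accuracy (r):", round(acc, 2))
```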
Correlation between length and tilt of lipid tails
NASA Astrophysics Data System (ADS)
Kopelevich, Dmitry I.; Nagle, John F.
2015-10-01
It is becoming recognized from simulations, and to a lesser extent from experiment, that the classical Helfrich-Canham membrane continuum mechanics model can be fruitfully enriched by the inclusion of molecular tilt, even in the fluid, chain disordered, biologically relevant phase of lipid bilayers. Enriched continuum theories then add a tilt modulus κθ to accompany the well recognized bending modulus κ. Different enrichment theories largely agree for many properties, but it has been noticed that there is considerable disagreement in one prediction; one theory postulates that the average length of the hydrocarbon chain tails increases strongly with increasing tilt and another predicts no increase. Our analysis of an all-atom simulation favors the latter theory, but it also shows that the overall tail length decreases slightly with increasing tilt. We show that this deviation from continuum theory can be reconciled by consideration of the average shape of the tails, which is a descriptor not obviously includable in continuum theory.
PARTS: Probabilistic Alignment for RNA joinT Secondary structure prediction
Harmanci, Arif Ozgun; Sharma, Gaurav; Mathews, David H.
2008-01-01
A novel method is presented for joint prediction of alignment and common secondary structures of two RNA sequences. The joint consideration of common secondary structures and alignment is accomplished by structural alignment over a search space defined by the newly introduced motif called matched helical regions. The matched helical region formulation generalizes previously employed constraints for structural alignment and thereby better accommodates the structural variability within RNA families. A probabilistic model based on pseudo free energies obtained from precomputed base pairing and alignment probabilities is utilized for scoring structural alignments. Maximum a posteriori (MAP) common secondary structures, sequence alignment and joint posterior probabilities of base pairing are obtained from the model via a dynamic programming algorithm called PARTS. The advantage of the more general structural alignment of PARTS is seen in secondary structure predictions for the RNase P family. For this family, the PARTS MAP predictions of secondary structures and alignment perform significantly better than prior methods that utilize a more restrictive structural alignment model. For the tRNA and 5S rRNA families, the richer structural alignment model of PARTS does not offer a benefit and the method therefore performs comparably with existing alternatives. For all RNA families studied, the posterior probability estimates obtained from PARTS offer an improvement over posterior probability estimates from a single sequence prediction. When considering the base pairings predicted over a threshold value of confidence, the combination of sensitivity and positive predictive value is superior for PARTS than for the single sequence prediction. PARTS source code is available for download under the GNU public license at http://rna.urmc.rochester.edu. PMID:18304945
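The evaluation criterion mentioned at the end, sensitivity and positive predictive value (PPV) of base pairs retained above a posterior-probability threshold, reduces to simple set arithmetic, sketched below on toy pair sets.

```python
ref_pairs = {(1, 20), (2, 19), (3, 18), (5, 15)}   # reference structure (toy)
pred = {(1, 20): 0.95, (2, 19): 0.90, (4, 16): 0.60, (5, 15): 0.45}

threshold = 0.5
kept = {pair for pair, prob in pred.items() if prob >= threshold}
tp = len(kept & ref_pairs)                          # correctly predicted pairs
print(f"sensitivity = {tp / len(ref_pairs):.2f}, PPV = {tp / len(kept):.2f}")
```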
Seasonal prediction of East Asian summer rainfall using a multi-model ensemble system
NASA Astrophysics Data System (ADS)
Ahn, Joong-Bae; Lee, Doo-Young; Yoo, Jin-Ho
2015-04-01
Using the retrospective forecasts of seven state-of-the-art coupled models and their multi-model ensemble (MME) for boreal summers, the prediction skills of climate models in the western tropical Pacific (WTP) and East Asian region are assessed. The prediction of summer rainfall anomalies in East Asia is difficult, while the WTP has a strong correlation between model prediction and observation. We focus on developing a new approach to further enhance the seasonal prediction skill for summer rainfall in East Asia and investigate the influence of convective activity in the WTP on East Asian summer rainfall. By analyzing the characteristics of the WTP convection, two distinct patterns associated with El Niño-Southern Oscillation developing and decaying modes are identified. Based on the multiple linear regression method, the East Asia Rainfall Index (EARI) is developed by using the interannual variability of the normalized Maritime continent-WTP Indices (MPIs), as potentially useful predictors for rainfall prediction over East Asia, obtained from the above two main patterns. For East Asian summer rainfall, the EARI has superior performance to the East Asia summer monsoon index or each MPI. Therefore, the regressed rainfall from EARI also shows a strong relationship with the observed East Asian summer rainfall pattern. In addition, we evaluate the prediction skill of the East Asia reconstructed rainfall obtained by a hybrid dynamical-statistical approach using the cross-validated EARI from the individual models and their MME. The results show that the rainfalls reconstructed from simulations capture the general features of observed precipitation in East Asia quite well. This study convincingly demonstrates that rainfall prediction skill is considerably improved by using a hybrid dynamical-statistical approach compared to the dynamical forecast alone.
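A hedged sketch of the index-building step: regress observed rainfall on the two predictor indices and take the fitted linear combination as the index. The synthetic series and coefficients are assumptions; the paper's MPIs come from WTP convection patterns, not random data.

```python
import numpy as np

rng = np.random.default_rng(3)
years = 30
mpi1 = rng.normal(size=years)        # index of the ENSO-developing pattern (synthetic)
mpi2 = rng.normal(size=years)        # index of the ENSO-decaying pattern (synthetic)
rain = 0.8 * mpi1 - 0.5 * mpi2 + rng.normal(0.0, 0.5, years)   # synthetic target

X = np.column_stack([np.ones(years), mpi1, mpi2])
beta, *_ = np.linalg.lstsq(X, rain, rcond=None)   # multiple linear regression
eari = X @ beta                                   # the regressed rainfall index
print("correlation with observed rainfall:",
      round(np.corrcoef(eari, rain)[0, 1], 2))
```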
Predictability of the 1997 and 1998 South Asian Summer Monsoons
NASA Technical Reports Server (NTRS)
Schubert, Siegfred D.; Wu, Man Li
2000-01-01
The predictability of the 1997 and 1998 south Asian summer monsoon winds is examined from an ensemble of 10 Atmospheric General Circulation Model (AGCM) simulations with prescribed sea surface temperatures (SSTs) and soil moisture. The simulations are started in September 1996 so that they have lost all memory of the atmospheric initial conditions for the periods of interest. The model simulations show that the 1998 monsoon is considerably more predictable than the 1997 monsoon. During May and June of 1998 the predictability of the low-level wind anomalies is largely associated with a local response to anomalously warm Indian Ocean SSTs. Predictability increases late in the season (July and August) as a result of the strengthening of the anomalous Walker circulation and the associated development of easterly low-level wind anomalies that extend westward across India and the Arabian Sea. During these months the model is also the most skillful, with the observations showing a similar late-season westward extension of the easterly wind anomalies. The model shows little predictability or skill in the low-level winds over southeast Asia during 1997. Predictable wind anomalies do occur over the western Indian Ocean and Indonesia; however, over the Indian Ocean they are a response to SST anomalies that were wind driven, and they show no skill. The reduced predictability in the low-level winds during 1997 appears to be the result of a weaker (compared with 1998) simulated anomalous Walker circulation, while the reduced skill is associated with pronounced intraseasonal activity that is not well captured by the model. Remarkably, the model does produce an ensemble mean Madden-Julian Oscillation (MJO) response that is approximately in phase with (though weaker than) the observed MJO anomalies. This is consistent with the idea that SST coupling may play an important role in the MJO.
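Predictability in such prescribed-SST ensembles is commonly summarized by the ratio of ensemble-mean (forced) variance to the spread of members about it; a minimal sketch on synthetic wind anomalies is given below. This metric is an assumption of convenience, not necessarily the exact diagnostic used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
members, months = 10, 4                        # ensemble size, season length
forced = rng.normal(0.0, 1.0, months)          # SST-forced wind anomaly (synthetic)
winds = forced + rng.normal(0.0, 0.7, (members, months))

ens_mean = winds.mean(axis=0)                  # estimate of the forced signal
spread = winds - ens_mean                      # internal (unpredictable) noise
print("signal-to-noise ratio:", ens_mean.var() / spread.var())
```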
NASA Astrophysics Data System (ADS)
Edrisi, Siroos; Bidhendi, Norollah Kasiri; Haghighi, Maryam
2017-01-01
The effective thermal conductivity of porous media was modeled based on a self-consistent method. This model accurately estimates the heat transfer between the insulator surface and air cavities. In this method, the pore size and shape, the temperature gradient and other thermodynamic properties of the fluid were taken into consideration. The results are validated against experimental data for firebricks used in cracking furnaces at the olefin plant of the Maroon petrochemical complex, as well as against published data for polyurethane foam (synthetic polymers) IPTM and IPM. The model predictions show good agreement with the experimental data, with thermal conductivity deviating by less than 1%.
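As a stand-in for the paper's self-consistent formulation (which additionally resolves pore size and shape and temperature-dependent gas properties), the sketch below solves the classic two-phase Bruggeman self-consistent relation for the effective conductivity; the firebrick numbers are illustrative.

```python
from scipy.optimize import brentq

def bruggeman(k_solid, k_air, porosity):
    """Solve f(k_a-k_e)/(k_a+2k_e) + (1-f)(k_s-k_e)/(k_s+2k_e) = 0 for k_e."""
    g = lambda ke: (porosity * (k_air - ke) / (k_air + 2.0 * ke)
                    + (1.0 - porosity) * (k_solid - ke) / (k_solid + 2.0 * ke))
    return brentq(g, 1e-3 * min(k_air, k_solid), max(k_air, k_solid))

# illustrative firebrick: solid phase ~1.5 W/(m K), air ~0.03 W/(m K)
print(f"k_eff ~ {bruggeman(1.5, 0.03, porosity=0.7):.3f} W/(m K)")
```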
Ocean modelling on the CYBER 205 at GFDL
NASA Technical Reports Server (NTRS)
Cox, M.
1984-01-01
At the Geophysical Fluid Dynamics Laboratory, research is carried out for the purpose of understanding various aspects of climate, such as its variability, predictability, stability and sensitivity. The atmosphere and oceans are modelled mathematically and their phenomenology studied by computer simulation methods. The present state-of-the-art in the computer simulation of large scale oceans on the CYBER 205 is discussed. While atmospheric modelling differs in some aspects, the basic approach used is similar. The equations of the ocean model are presented along with a short description of the numerical techniques used to find their solution. Computational considerations and a typical solution are presented in section 4.
NASA Astrophysics Data System (ADS)
Robinson, S.; Julyan, P. J.; Hastings, D. L.; Zweit, J.
2004-12-01
The key performance measures of resolution, count rate, sensitivity and scatter fraction are predicted for a dedicated BGO block detector patient PET scanner (GE Advance) in 2D mode for imaging with the non-pure positron-emitting radionuclides 124I, 55Co, 61Cu, 62Cu, 64Cu and 76Br. Model calculations including parameters of the scanner, decay characteristics of the radionuclides and measured parameters in imaging the pure positron-emitter 18F are used to predict performance according to the National Electrical Manufacturers Association (NEMA) NU 2-1994 criteria. Predictions are tested with measurements made using 124I and show that, in comparison with 18F, resolution degrades by 1.2 mm radially and tangentially throughout the field-of-view (prediction: 1.2 mm), count-rate performance reduces considerably and in close accordance with calculations, sensitivity decreases to 23.4% of that with 18F (prediction: 22.9%) and measured scatter fraction increases from 10.0% to 14.5% (prediction: 14.7%). Model predictions are expected to be equally accurate for other radionuclides and may be extended to similar scanners. Although performance is worse with 124I than 18F, imaging is not precluded in 2D mode. The viability of 124I imaging and performance in a clinical context compared with 18F is illustrated with images of a patient with recurrent thyroid cancer acquired using both [124I]-sodium iodide and [18F]-2-fluoro-2-deoxyglucose.
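One of the model's predictions can be reproduced on the back of an envelope: in 2D mode, relative sensitivity scales roughly with the positron branching fraction (neglecting cascade-gamma coincidence losses). With nominal branching values this yields about 23% for 124I relative to 18F, close to the quoted prediction of 22.9%.

```python
# nominal positron branching fractions (fraction of decays yielding a positron)
beta_frac = {"18F": 0.967, "124I": 0.226, "64Cu": 0.178, "76Br": 0.55}
for nuc in ("124I", "64Cu", "76Br"):
    rel = beta_frac[nuc] / beta_frac["18F"]
    print(f"{nuc}: predicted sensitivity ~ {100 * rel:.1f}% of 18F")
```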
Thermal impact of magmatism in subduction zones
NASA Astrophysics Data System (ADS)
Rees Jones, David W.; Katz, Richard F.; Tian, Meng; Rudge, John F.
2018-01-01
Magmatism in subduction zones builds continental crust and causes most of Earth's subaerial volcanism. The production rate and composition of magmas are controlled by the thermal structure of subduction zones. A range of geochemical and heat flow evidence has recently converged to indicate that subduction zones are hotter at lithospheric depths beneath the arc than predicted by canonical thermomechanical models, which neglect magmatism. We show that this discrepancy can be resolved by consideration of the heat transported by magma. In our one- and two-dimensional numerical models and scaling analysis, magmatic transport of sensible and latent heat locally alters the thermal structure of canonical models by ∼300 K, increasing predicted surface heat flow and mid-lithospheric temperatures to observed values. We find the advection of sensible heat to be larger than the deposition of latent heat. Based on these results we conclude that thermal transport by magma migration affects the chemistry and the location of arc volcanoes.
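The scaling argument can be made concrete: the heat flux carried by melt is roughly rho * q * (cp * dT + L), sensible plus latent. The round numbers below are illustrative assumptions, not the paper's model parameters.

```python
rho = 2800.0     # melt density, kg/m^3
q = 3.0e-4       # time-averaged melt flux, m/yr (illustrative arc value)
cp = 1200.0      # specific heat of melt, J/(kg K)
dT = 400.0       # excess temperature of ascending melt, K (assumed)
Lh = 4.0e5       # latent heat of crystallization, J/kg
yr = 3.15e7      # seconds per year

flux = rho * (q / yr) * (cp * dT + Lh)        # advected heat flux, W/m^2
sensible = rho * (q / yr) * cp * dT
print(f"total ~{flux * 1e3:.0f} mW/m^2, sensible share {sensible / flux:.0%}")
```

With these numbers the sensible contribution exceeds the latent one, consistent with the paper's conclusion.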
NASA Astrophysics Data System (ADS)
Whittaker, Peter; Wilson, Catherine A. M. E.; Aberle, Jochen
2015-09-01
An improved model to describe the drag and reconfiguration of flexible riparian vegetation is proposed. The key improvement over previous models is the use of a refined 'vegetative' Cauchy number to explicitly determine the magnitude and rate of the vegetation's reconfiguration. After being derived from dimensional consideration, the model is applied to two experimental data sets. The first contains high-resolution drag force and physical property measurements for twenty-one foliated and defoliated full-scale trees, including specimens of Alnus glutinosa, Populus nigra and Salix alba. The second data set is independent and of a different scale, consisting of drag force and physical property measurements for natural and artificial branches of willow and poplar, under partially and fully submerged flow conditions. Good agreement between the measured and predicted drag forces is observed for both data sets, especially when compared to a more typical 'rigid' approximation, where the effects of reconfiguration are neglected.
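A generic sketch of the idea (not Whittaker et al.'s exact equation): quadratic rigid-body drag, scaled by the Cauchy number raised to a Vogel-type exponent once reconfiguration sets in. The rigidity, area, and exponent are illustrative assumptions.

```python
import numpy as np

rho, Cd, A = 1000.0, 1.0, 0.5    # water; rigid drag coefficient; frontal area, m^2
EI, H = 20.0, 1.5                # stem flexural rigidity, N m^2; plant height, m
psi = -0.9                       # empirical Vogel-type exponent (assumed)

U = np.linspace(0.1, 2.0, 5)
Ca = rho * U**2 * A * H**2 / EI                      # one 'vegetative' Cauchy number
rigid = 0.5 * rho * Cd * A * U**2
drag = rigid * np.where(Ca > 1.0, Ca**(psi / 2.0), 1.0)   # F ~ U^(2+psi) when Ca > 1
for u, fr, f in zip(U, rigid, drag):
    print(f"U = {u:.2f} m/s  rigid = {fr:7.1f} N  reconfigured = {f:6.1f} N")
```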
Bubble Formation and Detachment in Reduced Gravity Under the Influence of Electric Fields
NASA Technical Reports Server (NTRS)
Herman, Cila; Iacona, Estelle; Chang, Shinan
2002-01-01
The objective of the study is to investigate the behavior of individual air bubbles injected through an orifice into an electrically insulating liquid under the influence of a static electric field. Both uniform and nonuniform electric field configurations were considered. Bubble formation and detachment were recorded and visualized in reduced gravity (corresponding to gravity levels on Mars, on the Moon as well as microgravity) using a high-speed video camera. Bubble volume, dimensions and contact angle at detachment were measured. In addition to the experimental studies, a simple model, predicting bubble characteristics at detachment was developed. The model, based on thermodynamic considerations, accounts for the level of gravity as well as the magnitude of the uniform electric field. Measured data and model predictions show good agreement and indicate that the level of gravity and the electric field magnitude significantly affect bubble shape, volume and dimensions.
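The flavor of such a detachment model can be sketched as a force balance at the orifice: buoyancy (plus an electric stress term) against surface tension. Both the specific electric term and all property values below are illustrative assumptions, not the authors' thermodynamic formulation.

```python
import numpy as np

sigma = 0.02            # surface tension, N/m (illustrative insulating liquid)
rho = 900.0             # liquid density, kg/m^3 (gas density neglected)
d_o = 1.0e-3            # orifice diameter, m
eps = 2.5 * 8.854e-12   # liquid permittivity, F/m (assumed)

for g in (9.81, 3.71, 1.62, 0.01):       # Earth, Mars, Moon, micro-g
    for E in (0.0, 1.0e6):               # applied field strength, V/m
        # buoyancy plus an illustrative electric 'body force' vs surface tension
        V = np.pi * d_o * sigma / (rho * g + eps * E**2 / (2.0 * d_o))
        print(f"g = {g:5.2f} m/s^2  E = {E:9.2e} V/m  V = {V * 1e9:8.1f} mm^3")
```

The sketch reproduces the qualitative trend reported: detachment volume grows as gravity drops, and an applied field reduces it, most dramatically in microgravity.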
NASA Astrophysics Data System (ADS)
Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann
2016-05-01
The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.
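A hedged sketch of the learning task: regress the CV measurement error on the 6-D pose vector. A random forest on synthetic data is used here purely for illustration; the paper does not specify this model family.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
pose = rng.uniform(-1.0, 1.0, (2000, 6))      # x, y, z, roll, pitch, yaw (scaled)
err = (0.02 + 0.05 * np.linalg.norm(pose[:, :3], axis=1)
       + 0.03 * np.abs(pose[:, 4])            # synthetic truth: worse when tilted
       + rng.normal(0.0, 0.005, 2000))

Xtr, Xte, ytr, yte = train_test_split(pose, err, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("R^2 on held-out poses:", round(rf.score(Xte, yte), 3))
```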
A time domain frequency-selective multivariate Granger causality approach.
Leistritz, Lutz; Witte, Herbert
2016-08-01
The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
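The core of the approach, Granger's predictability principle, compares residual variances of full and restricted autoregressive models. The broadband sketch below omits the paper's frequency-selective signal-cancellation step and uses a synthetic bivariate system.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 2000, 2
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t-1] + 0.4 * x[t-1] + 0.1 * rng.normal()   # x drives y

def residual_var(target, sources, p):
    """Residual variance of an order-p AR model of `target` on the given histories."""
    X = np.array([np.concatenate([s[t-p:t] for s in sources])
                  for t in range(p, len(target))])
    beta, *_ = np.linalg.lstsq(X, target[p:], rcond=None)
    return np.var(target[p:] - X @ beta)

full = residual_var(y, [y, x], p)         # predict y from y- and x-history
restricted = residual_var(y, [y], p)      # predict y from its own history only
print("Granger causality index x -> y:", np.log(restricted / full))
```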
[Mathematic modeling for prediction of waning immunity and timing of booster doses].
Matuo, Fujio; Okada, Kenji
2008-10-01
In settings where vaccination coverage is low and an infectious disease remains prevalent, most vaccinees are thought to acquire additional immunity through natural infection. Conversely, where high vaccination coverage has reduced disease incidence, opportunities for natural boosting also decline, so waning immunity enlarges the pool of susceptible individuals. In highly vaccinated communities it may therefore be necessary to consider booster vaccination for adolescents and adults even after a completed primary series, and it is also important to explore the optimal timing of the booster dose. In this paper, we attempt a comprehensive explanation of mathematical models for predicting the duration of antibody persistence, and we discuss the role of such models in assessing the need for and timing of booster doses after the primary series.
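The simplest such model is exponential waning of antibody titre, with the booster due when the titre reaches a protective threshold; a minimal sketch with purely illustrative parameters:

```python
import numpy as np

A0 = 100.0         # post-primary-series titre (arbitrary units, illustrative)
half_life = 4.0    # titre half-life, years (illustrative)
threshold = 10.0   # assumed protective threshold

lam = np.log(2.0) / half_life                 # waning rate, 1/yr
t_booster = np.log(A0 / threshold) / lam      # time for A(t) to reach threshold
print(f"booster indicated after ~{t_booster:.1f} years")
```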
Development of theoretical models of integrated millimeter wave antennas
NASA Technical Reports Server (NTRS)
Yngvesson, K. Sigfrid; Schaubert, Daniel H.
1991-01-01
Extensive radiation patterns for Linear Tapered Slot Antenna (LTSA) Single Elements are presented. The directivity of LTSA elements is predicted correctly by taking the cross polarized pattern into account. A moment method program predicts radiation patterns for air LTSAs with excellent agreement with experimental data. A moment method program was also developed for the task LTSA Array Modeling. Computations performed with this program are in excellent agreement with published results for dipole and monopole arrays, and with waveguide simulator experiments, for more complicated structures. Empirical modeling of LTSA arrays demonstrated that the maximum theoretical element gain can be obtained. Formulations were also developed for calculating the aperture efficiency of LTSA arrays used in reflector systems. It was shown that LTSA arrays used in multibeam systems have a considerable advantage in terms of higher packing density, compared with waveguide feeds. Conversion loss of 10 dB was demonstrated at 35 GHz.
Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony
2018-01-01
This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. The choice of modeling tool had low statistical significance, while weed species alone accounted for 69.1% and 48.5% of the variance in prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones when predicting the potential distribution of a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and the testing of both new and experienced users under blind conditions that approximate operational conditions.
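A hedged sketch of the simplest technique in the comparison, a Proto3-style climate match: score each target cell by its distance in (scaled) climate space to the nearest known occurrence. The variables, threshold, and scoring rule below are illustrative, not Proto3's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)
known = rng.normal(0.0, 1.0, (50, 3))      # scaled climate at known occurrences
cells = rng.normal(0.5, 1.2, (1000, 3))    # scaled climate of target grid cells

d = np.linalg.norm(cells[:, None, :] - known[None, :, :], axis=2)
suitable = d.min(axis=1) < 1.0             # illustrative match threshold
print("predicted prevalence:", suitable.mean())
```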
Workaholism and relationship quality: a spillover-crossover perspective.
Bakker, Arnold B; Demerouti, Evangelia; Burke, Ronald
2009-01-01
This study of 168 dual-earner couples examined the relationship between workaholism and relationship satisfaction. More specifically, on the basis of the literature, it was hypothesized that workaholism is positively related to work-family conflict. In addition, the authors predicted that workaholism is related to reduced support provided to the partner, through work-family conflict, and that individuals who receive considerable support from their partners are more satisfied with their relationship. Finally, the authors hypothesized direct crossover of relationship satisfaction between partners. The results of structural equation modeling analyses using the matched responses of both partners supported these hypotheses. Moreover, in line with predictions, the authors found that gender did not affect the strength of the relationships in the proposed model. The authors discuss workplace interventions as possible ways to help workaholics and their partners.
Dynamic Characteristics of Simple Cylindrical Hydraulic Engine Mount Utilizing Air Compressibility
NASA Astrophysics Data System (ADS)
Nakahara, Kazunari; Nakagawa, Noritoshi; Ohta, Katsutoshi
A cylindrical hydraulic engine mount with a simple construction has been developed. This engine mount has a sub-chamber formed by utilizing air compressibility, without a diaphragm. A mathematical model of the mount is presented to predict its non-linear dynamic characteristics, taking into consideration the effect of the excitation amplitude on the storage stiffness and loss factor. The mathematical model predicts the experimental results well for the frequency responses of the storage stiffness and loss factor over the frequency range of 5 Hz to 60 Hz. The effect of air volume and internal pressure on the dynamic characteristics is clarified by the analysis and by dynamic characterization testing. The effectiveness of the cylindrical hydraulic engine mount in reducing engine shake is demonstrated for riding comfort through on-vehicle testing with a chassis dynamometer.
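The underlying physics is that of a pneumatic spring: for adiabatic compression, an air chamber of volume V driven through an effective pumping area A contributes stiffness k = gamma * p * A^2 / V, so stiffness rises with internal pressure and falls with air volume. A sketch with illustrative numbers:

```python
gamma = 1.4                  # adiabatic index of air
p = 1.5e5                    # internal absolute pressure, Pa (illustrative)
A = 2.0e-3                   # effective pumping area, m^2 (illustrative)
for V in (1.0e-4, 0.5e-4):   # air-chamber volume, m^3
    k = gamma * p * A**2 / V
    print(f"V = {V * 1e6:4.0f} cm^3 -> k = {k / 1e3:4.1f} kN/m")
```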
Nonequilibrium radiation and chemistry models for aerocapture vehicle flowfields, volume 2
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1991-01-01
A technique was developed for predicting the character and magnitude of the shock wave precursor ahead of an entry vehicle and the effect of this precursor on the vehicle flow field was ascertained. A computational method and program were developed to properly model this precursor. Expressions were developed for the mass production rates of each species due to photodissociation and photoionization reactions. Also, consideration was given to the absorption and emission of radiation and how it affects the energy in each of the energy modes of both the atomic and diatomic species. A series of parametric studies were conducted covering a range of entry conditions in order to predict the effects of the precursor on the shock layer and the radiative heat transfer to the body.
Bartels, Susanne; Márki, Ferenc; Müller, Uwe
2015-12-15
Air traffic has increased over the past decades and is forecast to continue to grow. Noise from current airport operations can impair the physical and psychological well-being of airport residents. This field study investigated aircraft-noise-induced short-term (i.e., within hourly intervals) annoyance in residents near a busy airport. We aimed to examine the contribution of acoustical and non-acoustical factors to the annoyance rating. Across four days, from getting up until going to bed, 55 residents near Cologne/Bonn Airport (M = 46 years, SD = 14 years; 34 female) rated their annoyance due to aircraft noise at hourly intervals. For each participant and each hour, 26 noise metrics from outdoor measurements and a further 6 individualized metrics that took into account the sound attenuation due to each person's whereabouts in and around their home were obtained. Non-acoustical variables were differentiated into situational factors (time of day, activity performed during the past hour, day of the week) and personal factors (e.g., sensitivity to noise, attitudes, domestic noise insulation). Generalized Estimating Equations were applied for the development of a prediction model for annoyance. Acoustical factors explained only a small proportion (13.7%) of the variance in the annoyance ratings. The number of fly-overs predicted annoyance better than did equivalent and maximum sound pressure levels. The proportion of explained variance in annoyance rose considerably (to 27.6%) when individualized noise metrics as well as situational and personal variables were included in the prediction model. Consideration of noise metrics related to the number of fly-overs and individual adjustment of noise metrics can improve the prediction of short-term annoyance compared with models using equivalent outdoor levels only. Non-acoustical factors have a remarkable impact not only on long-term annoyance, as shown before, but also on short-term annoyance judged in the home environment.
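A hedged sketch of the modelling step: Generalized Estimating Equations for repeated hourly ratings clustered within participants, via statsmodels. The data below are synthetic stand-ins, with one acoustical predictor (number of fly-overs) and one personal factor.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_subj, n_obs = 55, 40
subj = np.repeat(np.arange(n_subj), n_obs)              # participant IDs
sens = np.repeat(rng.normal(0, 1, n_subj), n_obs)       # noise sensitivity (personal)
flyovers = rng.poisson(5, n_subj * n_obs)               # hourly fly-over count
annoy = (1.0 + 0.15 * flyovers + 0.8 * sens
         + rng.normal(0, 1, n_subj * n_obs))            # synthetic hourly rating

X = sm.add_constant(pd.DataFrame({"flyovers": flyovers, "sens": sens}))
gee = sm.GEE(annoy, X, groups=subj, family=sm.families.Gaussian(),
             cov_struct=sm.cov_struct.Exchangeable())
print(gee.fit().summary())
```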
Bevelhimer, Mark S.; DeRolph, Christopher R.; Schramm, Michael P.
2016-06-06
Uncertainty about environmental mitigation needs at existing and proposed hydropower projects makes it difficult for stakeholders to minimize environmental impacts. Hydropower developers and operators desire tools to better anticipate mitigation requirements, while natural resource managers and regulators need tools to evaluate different mitigation scenarios and order effective mitigation. Here we sought to examine the feasibility of using a suite of multidisciplinary explanatory variables within a spatially explicit modeling framework to fit predictive models for future environmental mitigation requirements at hydropower projects across the conterminous U.S. Using a database comprised of mitigation requirements from more than 300 hydropower project licenses, we were able to successfully fit models for nearly 50 types of environmental mitigation and to apply the predictive models to a set of more than 500 non-powered dams identified as having hydropower potential. The results demonstrate that mitigation requirements have been a result of a range of factors, from biological and hydrological to political and cultural. Furthermore, project developers can use these models to inform cost projections and design considerations, while regulators can use the models to more quickly identify likely environmental issues and potential solutions, hopefully resulting in more timely and more effective decisions on environmental mitigation.
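A hedged sketch of what fitting a model for one type of environmental mitigation could look like: a classifier predicting whether a licence carries the requirement from project attributes. Logistic regression and all predictors below are illustrative assumptions; the study does not specify its model family here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 300
X = np.column_stack([
    rng.normal(size=n),            # e.g. scaled dam height (assumed predictor)
    rng.normal(size=n),            # e.g. scaled watershed area (assumed predictor)
    rng.integers(0, 2, size=n),    # e.g. migratory fish present, 0/1 (assumed)
]).astype(float)
logit = 0.5 * X[:, 0] + 1.5 * X[:, 2] - 0.5
requires = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # synthetic labels

clf = LogisticRegression().fit(X, requires)
print("P(requirement) for first 5 projects:",
      clf.predict_proba(X[:5])[:, 1].round(2))
```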
Murphy, Ryan J.; Liu, Hao; Iordachita, Iulian I.; Armand, Mehran
2017-01-01
Dexterous continuum manipulators (DCMs) have been widely adopted for minimally- and less-invasive surgery. During an operation, these DCMs interact with the surrounding anatomy actively or passively. The interaction force inevitably affects the tip position and shape of the DCM, leading to potentially inaccurate control near critical anatomy. In this paper, we demonstrate a 2D mechanical model for a tendon-actuated, notched DCM with compliant joints. The model accurately predicted deformation of the DCM in the presence of tendon force, friction force, and external force. A partition approach was proposed to describe the DCM as a series of interconnected rigid and flexible links. Beam mechanics, taking into consideration tendon interaction and external force on the tip and the body, was applied to obtain the deformation of each flexible link of the DCM. The model results were compared with experiments for free bending as well as bending in the presence of external forces acting at either the tip or body of the DCM. The overall mean error of tip position between model predictions and all of the experimental results was 0.62 ± 0.41 mm. The results suggest that the proposed model can effectively predict the shape of the DCM. PMID:28989273
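The flexible-link building block of the partition approach can be illustrated with small-deflection cantilever mechanics: a transverse tip load F on a link of length L and flexural rigidity EI deflects it by delta = F L^3 / (3 E I). The dimensions below are illustrative, not the DCM's.

```python
E = 75.0e9               # elastic modulus, Pa (nitinol-like, illustrative)
w, t = 4.0e-3, 0.3e-3    # link cross-section width and thickness, m (assumed)
L = 5.0e-3               # flexible-link length, m (assumed)
F = 0.5                  # transverse tip load, N

I = w * t**3 / 12.0      # second moment of area of a rectangular section, m^4
delta = F * L**3 / (3.0 * E * I)
print(f"tip deflection of one flexible link: {delta * 1e3:.3f} mm")
```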