Benchmarking test of empirical root water uptake models
NASA Astrophysics Data System (ADS)
dos Santos, Marcos Alex; de Jong van Lier, Quirijn; van Dam, Jos C.; Freire Bezerra, Andre Herman
2017-01-01
Detailed physical models describing root water uptake (RWU) are an important tool for the prediction of RWU and crop transpiration, but the hydraulic parameters involved are hardly ever available, making them less attractive for many studies. Empirical models are more readily used because of their simplicity and the associated lower data requirements. The purpose of this study is to evaluate the capability of some empirical models to mimic the RWU distribution under varying environmental conditions predicted from numerical simulations with a detailed physical model. A review of some empirical models used as sub-models in ecohydrological models is presented, and alternative empirical RWU models are proposed. All these empirical models are analogous to the standard Feddes model, but differ in how RWU is partitioned over depth or how the transpiration reduction function is defined. The parameters of the empirical models are determined by inverse modelling of simulated depth-dependent RWU. The performance of the empirical models and their optimized empirical parameters depends on the scenario. The standard empirical Feddes model only performs well in scenarios with low root length density R, i.e. for scenarios with low RWU compensation. For medium and high R, the Feddes RWU model cannot properly mimic the root uptake dynamics predicted by the physical model. The Jarvis RWU model in combination with the Feddes reduction function (JMf) only provides good predictions for low and medium R scenarios; for high R, it cannot mimic the uptake patterns predicted by the physical model. Incorporating a newly proposed reduction function into the Jarvis model improved RWU predictions. Regarding the ability of the models to predict plant transpiration, all models accounting for compensation show good performance. The Akaike information criterion (AIC) indicates that the Jarvis (2010) model (JMII), with no empirical parameters to be estimated, is the best model. The proposed models are better at reproducing the RWU patterns of the physical model, and the statistical indices point to them as the best alternatives for mimicking its RWU predictions.
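The Feddes-type reduction function at the core of these empirical models is a piecewise-linear stress factor alpha(h) of soil water pressure head, applied layer by layer to partition potential transpiration. A minimal numpy sketch of this standard form follows; the threshold heads h1-h4 and the layer data are illustrative values, not the calibrated parameters of the study.

```python
import numpy as np

def feddes_alpha(h, h1=-0.1, h2=-0.25, h3=-5.0, h4=-80.0):
    """Piecewise-linear Feddes stress factor alpha(h).

    h      : soil water pressure head (m), negative when unsaturated.
    h1..h4 : threshold heads (m); illustrative, not the study's values.
    Uptake is zero when wetter than h1 or drier than h4, optimal
    between h2 and h3, and ramps linearly in between.
    """
    h = np.asarray(h, dtype=float)
    alpha = np.zeros_like(h)
    rising = (h < h1) & (h >= h2)                 # too-wet ramp
    alpha[rising] = (h1 - h[rising]) / (h1 - h2)
    alpha[(h < h2) & (h >= h3)] = 1.0             # optimal plateau
    falling = (h < h3) & (h >= h4)                # drying ramp
    alpha[falling] = (h[falling] - h4) / (h3 - h4)
    return alpha

# sink term per layer: S_i = alpha(h_i) * root_fraction_i * Tp
h_layers = np.array([-0.3, -2.0, -20.0, -90.0])   # pressure heads (m)
root_frac = np.array([0.4, 0.3, 0.2, 0.1])        # normalized root density
Tp = 5e-3                                         # potential transpiration (m/day)
S = feddes_alpha(h_layers) * root_frac * Tp
print(S, "total uptake =", S.sum())
```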
Climate Prediction for Brazil's Nordeste: Performance of Empirical and Numerical Modeling Methods.
NASA Astrophysics Data System (ADS)
Moura, Antonio Divino; Hastenrath, Stefan
2004-07-01
Comparisons of performance of climate forecast methods require consistency in the predictand and a long common reference period. For Brazil's Nordeste, empirical methods developed at the University of Wisconsin use preseason (October–January) rainfall and January indices of the fields of meridional wind component and sea surface temperature (SST) in the tropical Atlantic and the equatorial Pacific as input to stepwise multiple regression and neural networking. These are used to predict the March–June rainfall at a network of 27 stations. An experiment at the International Research Institute for Climate Prediction, Columbia University, with a numerical model (ECHAM4.5) used global SST information through February to predict the March–June rainfall at three grid points in the Nordeste. The predictands for the empirical and numerical model forecasts are correlated at +0.96, and the period common to the independent portion of record of the empirical prediction and the numerical modeling is 1968–99. Over this period, predicted versus observed rainfall are evaluated in terms of correlation, root-mean-square error, absolute error, and bias. Performance is high for both approaches. Numerical modeling produces a correlation of +0.68, moderate errors, and strong negative bias. For the empirical methods, errors and bias are small, and correlations of +0.73 and +0.82 are reached between predicted and observed rainfall.
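The verification statistics quoted here (correlation, root-mean-square error, absolute error, bias) are simple to compute once predicted and observed series share a common period; a short sketch with synthetic stand-in data:

```python
import numpy as np

def verify(pred, obs):
    """Basic forecast verification: correlation, RMSE, MAE, bias."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    err = pred - obs
    return {"corr": np.corrcoef(pred, obs)[0, 1],
            "rmse": np.sqrt(np.mean(err ** 2)),
            "mae": np.mean(np.abs(err)),
            "bias": np.mean(err)}  # negative bias = systematic under-prediction

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 300.0, size=32)         # synthetic seasonal rainfall (mm)
pred = 0.8 * obs + rng.normal(0, 150, 32)    # synthetic forecast of the same years
print(verify(pred, obs))
```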
Do We Know the Actual Magnetopause Position for Typical Solar Wind Conditions?
NASA Technical Reports Server (NTRS)
Samsonov, A. A.; Gordeev, E.; Tsyganenko, N. A.; Safrankova, J.; Nemecek, Z.; Simunek, J.; Sibeck, D. G.; Toth, G.; Merkin, V. G.; Raeder, J.
2016-01-01
We compare predicted magnetopause positions at the subsolar point and four reference points in the terminator plane obtained from several empirical and numerical MHD (magnetohydrodynamics) models. Empirical models using various sets of magnetopause crossings and making different assumptions about the magnetopause shape predict significantly different magnetopause positions (with a scatter greater than 1 Earth radius (R_E)) even at the subsolar point. Axisymmetric magnetopause models cannot reproduce the cusp indentations or the changes related to the dipole tilt effect, and most of them predict the magnetopause closer to the Earth than non-axisymmetric models for typical solar wind conditions and zero tilt angle. Predictions of two global non-axisymmetric models do not match each other, and the models need additional verification. MHD models often predict the magnetopause closer to the Earth than the non-axisymmetric empirical models, but the predictions of MHD simulations may need corrections for the ring current effect and decreases of the solar wind pressure that occur in the foreshock. Comparing MHD models in which the ring current magnetic field is taken into account with the empirical Lin et al. model, we find that the differences in the reference point positions predicted by these models are relatively small for B_z = 0 (B_z being the north–south component of the interplanetary magnetic field). Therefore, we assume that these predictions indicate the actual magnetopause position, but future investigations are still needed.
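For orientation, one frequently used empirical magnetopause shape of the kind compared here is the Shue et al. (1998) model, in which the standoff distance and tail flaring depend on solar wind dynamic pressure and IMF B_z. A sketch follows; the coefficients are the commonly quoted published values, and the model is axisymmetric, so it shares the limitations noted above.

```python
import numpy as np

def shue_magnetopause(theta, Bz=0.0, Dp=2.0):
    """Shue et al. (1998) empirical magnetopause radius in Earth radii.

    theta : angle from the Sun-Earth line (rad)
    Bz    : IMF north-south component (nT)
    Dp    : solar wind dynamic pressure (nPa)
    """
    r0 = (10.22 + 1.29 * np.tanh(0.184 * (Bz + 8.14))) * Dp ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * Bz) * (1.0 + 0.024 * np.log(Dp))
    return r0 * (2.0 / (1.0 + np.cos(theta))) ** alpha

print("subsolar distance:", shue_magnetopause(0.0))         # ~10 R_E, typical wind
print("terminator distance:", shue_magnetopause(np.pi / 2))
```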
Selection of fire spread model for Russian fire behavior prediction system
Alexandra V. Volokitina; Kevin C. Ryan; Tatiana M. Sofronova; Mark A. Sofronov
2010-01-01
Mathematical modeling of fire behavior prediction is only possible if the models are supplied with an information database that provides spatially explicit input parameters for the modeled area. Mathematical models can be of three kinds: 1) physical; 2) empirical; and 3) quasi-empirical (Sullivan, 2009). Physical models (Grishin, 1992) are of academic interest only because...
Component-based model to predict aerodynamic noise from high-speed train pantographs
NASA Astrophysics Data System (ADS)
Latorre Iglesias, E.; Thompson, D. J.; Smith, M. G.
2017-04-01
At typical speeds of modern high-speed trains the aerodynamic noise produced by the airflow over the pantograph is a significant source of noise. Although numerical models can be used to predict this, they are still very computationally intensive. A semi-empirical component-based prediction model is proposed to predict the aerodynamic noise from train pantographs. The pantograph is approximated as an assembly of cylinders and bars with particular cross-sections. An empirical database is used to obtain the coefficients of the model to account for various factors: incident flow speed, diameter, cross-sectional shape, yaw angle, rounded edges, length-to-width ratio, incoming turbulence and directivity. The overall noise from the pantograph is obtained as the incoherent sum of the predicted noise from the different pantograph struts. The model is validated using available wind tunnel noise measurements of two full-size pantographs. The results show the potential of the semi-empirical model to be used as a rapid tool to predict aerodynamic noise from train pantographs.
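The "incoherent sum" of strut contributions mentioned above is an energy sum of the component sound pressure levels; a minimal sketch with hypothetical strut spectra:

```python
import numpy as np

def incoherent_sum(spl_components):
    """Energy (incoherent) sum of sound pressure levels in dB."""
    spl = np.asarray(spl_components, float)
    return 10.0 * np.log10(np.sum(10.0 ** (spl / 10.0), axis=0))

# hypothetical one-third-octave band SPLs (dB) for three pantograph struts
strut_spl = [[78.0, 82.0, 79.0],
             [75.0, 80.0, 77.0],
             [70.0, 74.0, 72.0]]
print(incoherent_sum(strut_spl))  # total spectrum, one level per band
```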
Interest is increasing in using biological community data to provide information on the specific types of anthropogenic influences impacting streams. We built empirical models that predict the level of six different types of stress with fish and benthic macroinvertebrate data as...
MERGANSER - An Empirical Model to Predict Fish and Loon Mercury in New England Lakes
MERGANSER (MERcury Geo-spatial AssessmeNtS for the New England Region) is an empirical least-squares multiple regression model using mercury (Hg) deposition and readily obtainable lake and watershed features to predict fish (fillet) and common loon (blood) Hg in New England lakes...
On the predictability of land surface fluxes from meteorological variables
NASA Astrophysics Data System (ADS)
Haughton, Ned; Abramowitz, Gab; Pitman, Andy J.
2018-01-01
Previous research has shown that land surface models (LSMs) are performing poorly when compared with relatively simple empirical models over a wide range of metrics and environments. Atmospheric driving data appear to provide information about land surface fluxes that LSMs are not fully utilising. Here, we further quantify the information available in the meteorological forcing data that are used by LSMs for predicting land surface fluxes, by interrogating FLUXNET data, and extending the benchmarking methodology used in previous experiments. We show that substantial performance improvement is possible for empirical models using meteorological data alone, with no explicit vegetation or soil properties, thus setting lower bounds on a priori expectations on LSM performance. The process also identifies key meteorological variables that provide predictive power. We provide an ensemble of empirical benchmarks that are simple to reproduce and provide a range of behaviours and predictive performance, acting as a baseline benchmark set for future studies. We reanalyse previously published LSM simulations and show that there is more diversity between LSMs than previously indicated, although it remains unclear why LSMs are broadly performing so much worse than simple empirical models.
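An empirical benchmark of the kind described, predicting a surface flux from meteorological forcing alone, can be set up in a few lines; the variables, the linear form, and the synthetic data below are illustrative, whereas the paper's ensemble spans a range of empirical model complexities:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([rng.uniform(0, 1000, n),   # shortwave down (W m-2)
                     rng.normal(288, 8, n),      # air temperature (K)
                     rng.uniform(20, 100, n),    # relative humidity (%)
                     rng.gamma(2, 1.5, n)])      # wind speed (m s-1)
# synthetic latent heat flux loosely tied to radiation and humidity
y = 0.3 * X[:, 0] + 0.5 * (100 - X[:, 2]) + rng.normal(0, 20, n)

scores = cross_val_score(LinearRegression(), X, y, cv=5, scoring="r2")
print("out-of-sample R2 of the met-only benchmark:", scores.mean().round(3))
```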
NASA Astrophysics Data System (ADS)
Chen, Dar-Hsin; Chou, Heng-Chih; Wang, David; Zaabar, Rim
2011-06-01
Most empirical research of the path-dependent, exotic-option credit risk model focuses on developed markets. Taking Taiwan as an example, this study investigates the bankruptcy prediction performance of the path-dependent, barrier option model in the emerging market. We adopt Duan's (1994) [11], (2000) [12] transformed-data maximum likelihood estimation (MLE) method to directly estimate the unobserved model parameters, and compare the predictive ability of the barrier option model to the commonly adopted credit risk model, Merton's model. Our empirical findings show that the barrier option model is more powerful than Merton's model in predicting bankruptcy in the emerging market. Moreover, we find that the barrier option model predicts bankruptcy much better for highly-leveraged firms. Finally, our findings indicate that the prediction accuracy of the credit risk model can be improved by higher asset liquidity and greater financial transparency.
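For reference, the Merton benchmark used in this comparison treats equity as a call option on firm assets, so default probability follows from the distance to default; a minimal sketch of that baseline (the barrier option variant adds a knock-out barrier for the default point and is not shown):

```python
import numpy as np
from scipy.stats import norm

def merton_pd(V, D, mu, sigma, T=1.0):
    """Merton-model default probability over horizon T.

    V: asset value, D: face value of debt, mu: asset drift,
    sigma: asset volatility (all illustrative inputs).
    """
    dd = (np.log(V / D) + (mu - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(-dd)  # probability that assets end below debt at T

print(merton_pd(V=120.0, D=100.0, mu=0.05, sigma=0.15))
```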
A methodology for reduced order modeling and calibration of the upper atmosphere
NASA Astrophysics Data System (ADS)
Mehta, Piyush M.; Linares, Richard
2017-10-01
Atmospheric drag is the largest source of uncertainty in accurately predicting the orbit of satellites in low Earth orbit (LEO). Accurately predicting drag for objects that traverse LEO is critical to space situational awareness. Atmospheric models used for orbital drag calculations can be characterized either as empirical or physics-based (first-principles based). Empirical models are fast to evaluate but offer limited real-time predictive/forecasting ability, while physics-based models offer greater predictive/forecasting ability but require dedicated parallel computational resources. In addition, calibration with accurate data is required for either type of model. This paper presents a new methodology based on proper orthogonal decomposition toward development of a quasi-physical, predictive, reduced order model that combines the speed of empirical models with the predictive/forecasting capabilities of physics-based models. The methodology is developed to reduce the high dimensionality of physics-based models while maintaining their capabilities. We develop the methodology using the Naval Research Lab's Mass Spectrometer Incoherent Scatter model and show that the diurnal and seasonal variations can be captured using a small number of modes and parameters. We also present calibration of the reduced order model using the CHAMP and GRACE accelerometer-derived densities. Results show that the method performs well for modeling and calibration of the upper atmosphere.
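The proper orthogonal decomposition step underlying this methodology is, in its simplest form, a singular value decomposition of mean-removed snapshots of the modeled density field; a sketch under that assumption, with random data standing in for model output (real density snapshots compress far better than noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_snap = 5000, 200
X = rng.normal(size=(n_grid, n_snap))   # columns: flattened density snapshots

mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)

r = 10                                        # retain a small number of modes
energy = (s[:r] ** 2).sum() / (s ** 2).sum()  # variance captured by r modes
coeffs = U[:, :r].T @ (X - mean)              # reduced-order state trajectory
X_rom = mean + U[:, :r] @ coeffs              # rank-r reconstruction
print(f"{r} modes capture {energy:.1%} of the variance")
```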
Plant water potential improves prediction of empirical stomatal models.
Anderegg, William R L; Wolf, Adam; Arango-Velez, Adriana; Choat, Brendan; Chmura, Daniel J; Jansen, Steven; Kolb, Thomas; Li, Shan; Meinzer, Frederick; Pita, Pilar; Resco de Dios, Víctor; Sperry, John S; Wolfe, Brett T; Pacala, Stephen
2017-01-01
Climate change is expected to lead to increases in drought frequency and severity, with deleterious effects on many ecosystems. Stomatal responses to changing environmental conditions form the backbone of all ecosystem models, but are based on empirical relationships and are not well-tested during drought conditions. Here, we use a dataset of 34 woody plant species spanning global forest biomes to examine the effect of leaf water potential on stomatal conductance and test the predictive accuracy of three major stomatal models and a recently proposed model. We find that current leaf-level empirical models have consistent biases of over-prediction of stomatal conductance during dry conditions, particularly at low soil water potentials. Furthermore, the recently proposed stomatal conductance model yields increased predictive capability compared with current models, particularly during drought conditions. Our results reveal that including stomatal sensitivity to declining water potential and consequent impairment of plant water transport will improve predictions during drought conditions, and show that many biomes contain a diversity of plant stomatal strategies that range from risky to conservative stomatal regulation during water stress. Such improvements in stomatal simulation are greatly needed to help unravel and predict the response of ecosystems to future climate extremes.
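One widely used empirical formulation of the kind evaluated here is the unified optimization model of Medlyn et al. (2011), which ties stomatal conductance to assimilation and vapour pressure deficit but carries no explicit water-potential term, consistent with the over-prediction under drought reported above. A sketch with illustrative parameter values:

```python
import numpy as np

def medlyn_gs(A, D, Ca, g0=0.01, g1=4.0):
    """Medlyn et al. (2011) stomatal conductance model.

    A  : net assimilation (umol m-2 s-1)
    D  : vapour pressure deficit (kPa)
    Ca : ambient CO2 (umol mol-1)
    g0, g1 : empirical parameters (illustrative values).
    Note the absence of any leaf water potential dependence.
    """
    return g0 + 1.6 * (1.0 + g1 / np.sqrt(D)) * A / Ca

print(medlyn_gs(A=15.0, D=1.5, Ca=400.0))  # gs in mol m-2 s-1
```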
Wavelet modeling and prediction of the stability of states: the Roman Empire and the European Union
NASA Astrophysics Data System (ADS)
Yaroshenko, Tatyana Y.; Krysko, Dmitri V.; Dobriyan, Vitalii; Zhigalov, Maksim V.; Vos, Hendrik; Vandenabeele, Peter; Krysko, Vadim A.
2015-09-01
How can the stability of a state be quantitatively determined and its future stability predicted? The rise and collapse of empires and states is very complex, and it is exceedingly difficult to understand and predict. Existing theories are usually formulated as verbal models and, consequently, do not yield sharply defined, quantitative predictions that can be unambiguously validated with data. Here we describe a model that determines whether a state is in a stable or chaotic condition and predicts its future condition. The central hypothesis, which we test, is that the growth and collapse of states is reflected in the changes of their territories, populations and budgets. The model was applied to the historical societies of the Roman Empire (400 BC to 400 AD) and the European Union (1957-2007) by using wavelets and analysis of the sign change of the spectrum of Lyapunov exponents. The model matches well with the historical events. During wars and crises, the state becomes unstable; this is reflected in the wavelet analysis by a significant increase in the frequency ω(t) and wavelet coefficients W(ω, t), and the sign of the largest Lyapunov exponent becomes positive, indicating chaos. We successfully reconstructed and forecasted time series in the Roman Empire and the European Union by applying an artificial neural network. The proposed model helps to quantitatively determine and forecast the stability of a state.
Mental workload prediction based on attentional resource allocation and information processing.
Xiao, Xu; Wanyan, Xiaoru; Zhuang, Damin
2015-01-01
Mental workload is an important component in complex human-machine systems. The limited applicability of empirical workload measures produces the need for workload modeling and prediction methods. In the present study, a mental workload prediction model is built on the basis of attentional resource allocation and information processing to ensure pilots' accuracy and speed in understanding large amounts of flight information on the cockpit display interface. Validation with an empirical study of an abnormal attitude recovery task showed that this model's prediction of mental workload highly correlated with experimental results. This mental workload prediction model provides a new tool for optimizing human factors interface design and reducing human errors.
Ecological Forecasting in Chesapeake Bay: Using a Mechanistic-Empirical Modelling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, C. W.; Hood, Raleigh R.; Long, Wen
The Chesapeake Bay Ecological Prediction System (CBEPS) automatically generates daily nowcasts and three-day forecasts of several environmental variables, such as sea-surface temperature and salinity, the concentrations of chlorophyll, nitrate, and dissolved oxygen, and the likelihood of encountering several noxious species, including harmful algal blooms and water-borne pathogens, for the purpose of monitoring the Bay's ecosystem. While the physical and biogeochemical variables are forecast mechanistically using the Regional Ocean Modeling System configured for the Chesapeake Bay, the species predictions are generated using a novel mechanistic-empirical approach, whereby real-time output from the coupled physical-biogeochemical model drives multivariate empirical habitat models of the target species. The predictions, in the form of digital images, are available via the World Wide Web to interested groups to guide recreational, management, and research activities. Though full validation of the integrated forecasts for all species is still a work in progress, we argue that the mechanistic-empirical approach can be used to generate a wide variety of short-term ecological forecasts, and that it can be applied in any marine system where sufficient data exist to develop empirical habitat models. This paper provides an overview of this system, its predictions, and the approach taken.
A Formal Approach to Empirical Dynamic Model Optimization and Validation
NASA Technical Reports Server (NTRS)
Crespo, Luis G; Morelli, Eugene A.; Kenny, Sean P.; Giesy, Daniel P.
2014-01-01
A framework was developed for the optimization and validation of empirical dynamic models subject to an arbitrary set of validation criteria. The validation requirements imposed upon the model, which may involve several sets of input-output data and arbitrary specifications in time and frequency domains, are used to determine if model predictions are within admissible error limits. The parameters of the empirical model are estimated by finding the parameter realization for which the smallest of the margins of requirement compliance is as large as possible. The uncertainty in the value of this estimate is characterized by studying the set of model parameters yielding predictions that comply with all the requirements. Strategies are presented for bounding this set, studying its dependence on the admissible prediction error set by the analyst, and evaluating the sensitivity of the model predictions to parameter variations. This information is instrumental in characterizing uncertainty models used for evaluating the dynamic model at operating conditions differing from those used for its identification and validation. A practical example based on the short-period dynamics of the F-16 is used for illustration.
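The estimation strategy described, picking the parameter realization that maximizes the smallest requirement-compliance margin, is a max-min problem; a generic toy sketch follows (an exponential model with two made-up error requirements, not the paper's F-16 requirement set):

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 5.0, 50)
y_data = 2.0 * np.exp(-0.8 * t) + np.random.default_rng(2).normal(0, 0.02, 50)

def margins(p):
    """Requirement margins: admissible error minus achieved error."""
    resid = np.abs(p[0] * np.exp(-p[1] * t) - y_data)
    return np.array([0.10 - resid.mean(),   # time-domain tolerance
                     0.15 - resid.max()])   # worst-case tolerance

# maximize the smallest margin  <=>  minimize -min(margins)
res = minimize(lambda p: -margins(p).min(), x0=[1.0, 1.0], method="Nelder-Mead")
print("parameters:", res.x, "worst-case margin:", margins(res.x).min())
```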
Food web complexity and stability across habitat connectivity gradients.
LeCraw, Robin M; Kratina, Pavel; Srivastava, Diane S
2014-12-01
The effects of habitat connectivity on food webs have been studied both empirically and theoretically, yet the question of whether empirical results support theoretical predictions for any food web metric other than species richness has received little attention. Our synthesis brings together theory and empirical evidence for how habitat connectivity affects both food web stability and complexity. Food web stability is often predicted to be greatest at intermediate levels of connectivity, representing a compromise between the stabilizing effects of dispersal via rescue effects and prey switching, and the destabilizing effects of dispersal via regional synchronization of population dynamics. Empirical studies of food web stability generally support both this pattern and underlying mechanisms. Food chain length has been predicted to have both increasing and unimodal relationships with connectivity as a result of predators being constrained by the patch occupancy of their prey. Although both patterns have been documented empirically, the underlying mechanisms may differ from those predicted by models. In terms of other measures of food web complexity, habitat connectivity has been empirically found to generally increase link density but either reduce or have no effect on connectance, whereas a unimodal relationship is expected. In general, there is growing concordance between empirical patterns and theoretical predictions for some effects of habitat connectivity on food webs, but many predictions remain to be tested over a full connectivity gradient, and empirical metrics of complexity are rarely modeled. Closing these gaps will allow a deeper understanding of how natural and anthropogenic changes in connectivity can affect real food webs.
Modeling the risk of water pollution by pesticides from imbalanced data.
Trajanov, Aneta; Kuzmanovski, Vladimir; Real, Benoit; Perreau, Jonathan Marks; Džeroski, Sašo; Debeljak, Marko
2018-04-30
The pollution of ground and surface waters with pesticides is a serious ecological issue that requires adequate treatment. Most of the existing water pollution models are mechanistic mathematical models. While they have made a significant contribution to understanding the transfer processes, they face the problem of validation because of their complexity, the user subjectivity in their parameterization, and the lack of empirical data for validation. In addition, the data describing water pollution with pesticides are, in most cases, very imbalanced. This is due to strict regulations for pesticide applications, which lead to only a few pollution events. In this study, we propose the use of data mining to build models for assessing the risk of water pollution by pesticides in field-drained outflow water. Unlike the mechanistic models, the models generated by data mining are based on easily obtainable empirical data, while the parameterization of the models is not influenced by the subjectivity of ecological modelers. We used empirical data from field trials at the La Jaillière experimental site in France and applied the random forests algorithm to build predictive models that predict "risky" and "not-risky" pesticide application events. To address the problems of the imbalanced classes in the data, cost-sensitive learning and different measures of predictive performance were used. Despite the high imbalance between risky and not-risky application events, we managed to build predictive models that make reliable predictions. The proposed modeling approach can be easily applied to other ecological modeling problems where we encounter empirical data with highly imbalanced classes.
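The cost-sensitive learning described can be approximated in scikit-learn with class weights on a random forest; a sketch on synthetic imbalanced data (the features and the roughly 5% positive rate are illustrative, not the La Jaillière variables):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2000
X = rng.normal(size=(n, 5))  # stand-ins for dose, rainfall, soil properties...
y = (X[:, 0] + 2 * X[:, 1] + rng.normal(0, 1, n) > 4.0).astype(int)  # ~5% "risky"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# class_weight="balanced" penalizes errors on the rare "risky" class more
rf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                            random_state=0).fit(X_tr, y_tr)
y_hat = rf.predict(X_te)
print(confusion_matrix(y_te, y_hat))
print("balanced accuracy:", round(balanced_accuracy_score(y_te, y_hat), 3))
```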
Prediction of Very High Reynolds Number Compressible Skin Friction
NASA Technical Reports Server (NTRS)
Carlson, John R.
1998-01-01
Flat plate skin friction calculations over a range of Mach numbers from 0.4 to 3.5 at Reynolds numbers from 16 million to 492 million using a Navier-Stokes method with advanced turbulence modeling are compared with incompressible skin friction coefficient correlations. The semi-empirical correlation theories of van Driest; Cope; Winkler and Cha; and Sommer and Short T' are used to transform the predicted skin friction coefficients of solutions using two algebraic Reynolds stress turbulence models in the Navier-Stokes method PAB3D. In general, the predicted skin friction coefficients scaled well with each reference temperature theory, though overall the theory by Sommer and Short appeared to best collapse the predicted coefficients. At the lower Reynolds numbers of 3 to 30 million, both the Girimaji and the Shih, Zhu and Lumley turbulence models predicted skin-friction coefficients within 2% of the semi-empirical correlation skin friction coefficients. At the higher Reynolds numbers of 100 to 500 million, the turbulence models by Shih, Zhu and Lumley and Girimaji predicted coefficients that were 6% less and 10% greater, respectively, than the semi-empirical coefficients.
Kabore, Achille; Biritwum, Nana-Kwadwo; Downs, Philip W.; Soares Magalhaes, Ricardo J.; Zhang, Yaobi; Ottesen, Eric A.
2013-01-01
Background: Mapping the distribution of schistosomiasis is essential to determine where control programs should operate, but because it is impractical to assess infection prevalence in every potentially endemic community, model-based geostatistics (MBG) is increasingly being used to predict prevalence and determine intervention strategies. Methodology/Principal Findings: To assess the accuracy of MBG predictions for Schistosoma haematobium infection in Ghana, school surveys were evaluated at 79 sites to yield empiric prevalence values that could be compared with values derived from recently published MBG predictions. Based on these findings schools were categorized according to WHO guidelines so that practical implications of any differences could be determined. Using the mean predicted values alone, 21 of the 25 empirically determined 'high-risk' schools requiring yearly praziquantel would have been undertreated and almost 20% of the remaining schools would have been treated despite empirically-determined absence of infection, translating into 28% of the children in the 79 schools being undertreated and 12% receiving treatment in the absence of any demonstrated need. Conclusions/Significance: Using the current predictive map for Ghana as a spatial decision support tool by aggregating prevalence estimates to the district level was clearly not adequate for guiding the national program, but the alternative of assessing each school in potentially endemic areas of Ghana or elsewhere is not at all feasible; modelling must be a tool complementary to empiric assessments. Thus for practical usefulness, predictive risk mapping should not be thought of as a one-time exercise but must, as in the current study, be an iterative process that incorporates empiric testing and model refining to create updated versions that meet the needs of disease control operational managers.
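The school categorization step can be made concrete with the commonly cited WHO prevalence bands for schistosomiasis (>=50% high risk with yearly praziquantel, 10-50% moderate, <10% low); the thresholds and treatment wording below should be checked against current WHO guidance before any operational use:

```python
def who_category(prevalence):
    """Illustrative WHO-style risk class from infection prevalence (0-1)."""
    if prevalence >= 0.50:
        return "high risk: praziquantel yearly"
    if prevalence >= 0.10:
        return "moderate risk: praziquantel every two years"
    return "low risk: less frequent treatment"

# hypothetical schools: empiric survey prevalence vs model-based prediction
empiric = {"school_A": 0.62, "school_B": 0.18, "school_C": 0.02}
predicted = {"school_A": 0.35, "school_B": 0.12, "school_C": 0.08}
for school in empiric:
    print(school,
          "| empiric:", who_category(empiric[school]),
          "| predicted:", who_category(predicted[school]))
# school_A illustrates the undertreatment risk: predicted moderate, empirically high
```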
Volatility in financial markets: stochastic models and empirical results
NASA Astrophysics Data System (ADS)
Miccichè, Salvatore; Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.
2002-11-01
We investigate the historical volatility of the 100 most capitalized stocks traded in US equity markets. An empirical probability density function (pdf) of volatility is obtained and compared with the theoretical predictions of a lognormal model and of the Hull and White model. The lognormal model well describes the pdf in the region of low values of volatility whereas the Hull and White model better approximates the empirical pdf for large values of volatility. Both models fail in describing the empirical pdf over a moderately large volatility range.
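The comparison described, an empirical volatility pdf against a fitted lognormal, can be reproduced in outline with scipy; synthetic data stand in for the historical volatility series of the 100 stocks:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
vol = rng.lognormal(mean=np.log(0.15), sigma=0.35, size=5000)  # stand-in sample

shape, loc, scale = stats.lognorm.fit(vol, floc=0.0)
hist, edges = np.histogram(vol, bins=50, density=True)

# crude tail check: empirical vs fitted mass above the 90th percentile
q90 = np.quantile(vol, 0.9)
centers = 0.5 * (edges[1:] + edges[:-1])
emp_tail = hist[centers > q90].sum() * np.diff(edges)[0]
fit_tail = stats.lognorm.sf(q90, shape, loc=loc, scale=scale)
print("empirical tail mass:", round(emp_tail, 4),
      "| lognormal tail mass:", round(fit_tail, 4))
```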
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
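The overall structure of such a semi-empirical cycle model can be illustrated with a common Arrhenius power-law fade term multiplied by a sigmoidal depletion factor that produces the sudden end-of-life drop; the functional forms and every constant below are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

R_GAS = 8.314  # J mol-1 K-1

def capacity_retention(ah, T=298.15, B=3.16e4, Ea=3.17e4, z=0.55,
                       ah_knee=8.0e3, width=500.0):
    """Illustrative semi-empirical capacity model vs charge throughput ah (Ah).

    Percent loss = B * exp(-Ea/(R*T)) * ah**z  (Arrhenius power law;
    constants illustrative), multiplied by a logistic electrolyte-
    depletion factor centered at ah_knee to force the sudden drop.
    """
    loss_pct = B * np.exp(-Ea / (R_GAS * T)) * ah ** z
    depletion = 1.0 / (1.0 + np.exp((ah - ah_knee) / width))
    return np.clip((1.0 - loss_pct / 100.0) * depletion, 0.0, 1.0)

print(capacity_retention(np.linspace(0, 10000, 6)).round(3))
```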
Forecasting stochastic neural network based on financial empirical mode decomposition.
Wang, Jie; Wang, Jun
2017-06-01
In an attempt to improve the forecasting accuracy of stock price fluctuations, a new one-step-ahead model is developed in this paper which combines empirical mode decomposition (EMD) with a stochastic time strength neural network (STNN). EMD is a processing technique introduced to extract all the oscillatory modes embedded in a series, and the STNN model is established to account for the weight of the occurrence time of the historical data. Linear regression is used to assess the predictive ability of the proposed model, and the effectiveness of EMD-STNN is revealed clearly by comparing the predicted results with those of traditional models. Moreover, a new evaluation method (q-order multiscale complexity invariant distance) is applied to measure the predicted results of real stock index series, and the empirical results show that the proposed model indeed displays a good performance in forecasting stock market fluctuations.
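The decomposition stage can be reproduced with an off-the-shelf EMD implementation; the sketch below assumes the PyEMD package (distributed on PyPI as EMD-signal), and a naive linear extrapolation of each IMF stands in for the paper's STNN forecaster:

```python
import numpy as np
from PyEMD import EMD  # pip install EMD-signal

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 500)
price = 100 + 5 * np.sin(2 * t) + np.cumsum(rng.normal(0, 0.3, t.size))

imfs = EMD()(price)        # rows are IMFs; the last row is the residual trend
print("number of IMFs:", imfs.shape[0])

# one-step-ahead forecast: predict each IMF separately, then recombine
next_vals = [2.0 * imf[-1] - imf[-2] for imf in imfs]  # per-IMF extrapolation
print("forecast:", round(float(np.sum(next_vals)), 2),
      "| last price:", round(float(price[-1]), 2))
```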
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Empirical Models for the Shielding and Reflection of Jet Mixing Noise by a Surface
NASA Technical Reports Server (NTRS)
Brown, Clifford A.
2016-01-01
Empirical models for the shielding and reflection of jet mixing noise by a nearby surface are described and the resulting models evaluated. The flow variables are used to non-dimensionalize the surface position variables, reducing the variable space and producing models that are linear functions of non-dimensional surface position and logarithmic in Strouhal frequency. A separate set of coefficients is determined at each observer angle in the dataset and linear interpolation is used for the intermediate observer angles. The shielding and reflection models are then combined with existing empirical models for the jet mixing and jet-surface interaction noise sources to produce predicted spectra for a jet operating near a surface. These predictions are then evaluated against experimental data.
Tracking Expected Improvements of Decadal Prediction in Climate Services
NASA Astrophysics Data System (ADS)
Suckling, E.; Thompson, E.; Smith, L. A.
2013-12-01
Physics-based simulation models are ultimately expected to provide the best available (decision-relevant) probabilistic climate predictions, as they can capture the dynamics of the Earth system across a range of situations, situations for which observations for the construction of empirical models are scant if not nonexistent. This fact in itself provides neither evidence that predictions from today's Earth system models will outperform today's empirical models, nor a guide to the space and time scales on which today's model predictions are adequate for a given purpose. Empirical (data-based) models are employed to make probability forecasts on decadal timescales. The skill of these forecasts is contrasted with that of state-of-the-art climate models, and the challenges faced by each approach are discussed. The focus is on providing decision-relevant probability forecasts for decision support. An empirical model, known as Dynamic Climatology, is shown to be competitive with CMIP5 climate models on decadal-scale probability forecasts. Contrasting the skill of simulation models not only with each other but also with empirical models can reveal the space and time scales on which a generation of simulation models exploits its physical basis effectively. It can also quantify their ability to add information in the formation of operational forecasts. Difficulties (i) of information contamination, (ii) of the interpretation of probabilistic skill, and (iii) of artificial skill complicate each modelling approach, and are discussed. "Physics-free" empirical models provide fixed, quantitative benchmarks for the evaluation of ever more complex climate models that are not available from (inter)comparisons restricted to only complex models. At present, empirical models can also provide a background term for blending in the formation of probability forecasts from ensembles of simulation models. In weather forecasting this role is filled by the climatological distribution, and can significantly enhance the value of longer lead-time weather forecasts to those who use them. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast intercomparison and evaluation. This would clarify the extent to which a given generation of state-of-the-art simulation models provides information beyond that available from simpler empirical models. It would also clarify current limitations in using simulation forecasting for decision support. No model-based probability forecast is complete without a quantitative estimate of its own irrelevance; this estimate is likely to increase as a function of lead time. A lack of decision-relevant quantitative skill would not bring the science-based foundation of anthropogenic warming into doubt. Similar levels of skill with empirical models do suggest a clear quantification of limits, as a function of lead time, for spatial and temporal scales on which decisions based on such model output are expected to prove maladaptive. Failing to clearly state such weaknesses of a given generation of simulation models, while clearly stating their strengths and their foundation, risks the credibility of science in support of policy in the long term.
Decision-support models for empiric antibiotic selection in Gram-negative bloodstream infections.
MacFadden, D R; Coburn, B; Shah, N; Robicsek, A; Savage, R; Elligsen, M; Daneman, N
2018-04-25
Early empiric antibiotic therapy in patients can improve clinical outcomes in Gram-negative bacteraemia. However, the widespread prevalence of antibiotic-resistant pathogens compromises our ability to provide adequate therapy while minimizing use of broad antibiotics. We sought to determine whether readily available electronic medical record data could be used to develop predictive models for decision support in Gram-negative bacteraemia. We performed a multi-centre cohort study, in Canada and the USA, of hospitalized patients with Gram-negative bloodstream infection from April 2010 to March 2015. We analysed multivariable models for prediction of antibiotic susceptibility at two empiric windows: Gram-stain-guided and pathogen-guided treatment. Decision-support models for empiric antibiotic selection were developed based on three clinical decision thresholds of acceptable adequate coverage (80%, 90% and 95%). A total of 1832 patients with Gram-negative bacteraemia were evaluated. Multivariable models showed good discrimination across countries and at both Gram-stain-guided (12 models, areas under the curve (AUCs) 0.68-0.89, optimism-corrected AUCs 0.63-0.85) and pathogen-guided (12 models, AUCs 0.75-0.98, optimism-corrected AUCs 0.64-0.95) windows. Compared to antibiogram-guided therapy, decision-support models of antibiotic selection incorporating individual patient characteristics and prior culture results have the potential to increase use of narrower-spectrum antibiotics (in up to 78% of patients) while reducing inadequate therapy. Multivariable models using readily available epidemiologic factors can be used to predict antimicrobial susceptibility in infecting pathogens with reasonable discriminatory ability. Implementation of sequential predictive models for real-time individualized empiric antibiotic decision-making has the potential to both optimize adequate coverage for patients while minimizing overuse of broad-spectrum antibiotics, and therefore requires further prospective evaluation. Readily available epidemiologic risk factors can be used to predict susceptibility of Gram-negative organisms among patients with bacteraemia, using automated decision-making models.
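The decision rule evaluated here, escalating only until a predicted-coverage threshold is met, is easy to state in code; the antibiotic ladder and per-patient probabilities below are purely hypothetical:

```python
def select_empiric(pred_susceptibility, threshold=0.90):
    """Pick the narrowest drug whose predicted coverage meets the threshold.

    pred_susceptibility: dict of drug -> predicted P(susceptible),
    ordered from narrowest to broadest spectrum (hypothetical ladder).
    """
    for drug, p in pred_susceptibility.items():
        if p >= threshold:
            return drug
    return "broadest available agent"

patient = {"ceftriaxone": 0.84,              # hypothetical model outputs
           "piperacillin-tazobactam": 0.93,
           "meropenem": 0.99}
for thr in (0.80, 0.90, 0.95):               # the three thresholds studied
    print(thr, "->", select_empiric(patient, thr))
```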
Schädler, Marc R; Warzybok, Anna; Kollmeier, Birger
2018-01-01
The simulation framework for auditory discrimination experiments (FADE) was adopted and validated to predict the individual speech-in-noise recognition performance of listeners with normal and impaired hearing with and without a given hearing-aid algorithm. FADE uses a simple automatic speech recognizer (ASR) to estimate the lowest achievable speech reception thresholds (SRTs) from simulated speech recognition experiments in an objective way, independent from any empirical reference data. Empirical data from the literature were used to evaluate the model in terms of predicted SRTs and benefits in SRT with the German matrix sentence recognition test when using eight single- and multichannel binaural noise-reduction algorithms. To allow individual predictions of SRTs in binaural conditions, the model was extended with a simple better ear approach and individualized by taking audiograms into account. In a realistic binaural cafeteria condition, FADE explained about 90% of the variance of the empirical SRTs for a group of normal-hearing listeners and predicted the corresponding benefits with a root-mean-square prediction error of 0.6 dB. This highlights the potential of the approach for the objective assessment of benefits in SRT without prior knowledge about the empirical data. The predictions for the group of listeners with impaired hearing explained 75% of the empirical variance, while the individual predictions explained less than 25%. Possibly, additional individual factors should be considered for more accurate predictions with impaired hearing. A competing talker condition clearly showed one limitation of current ASR technology, as the empirical performance with SRTs lower than -20 dB could not be predicted.
Empirical and semi-analytical models for predicting peak outflows caused by embankment dam failures
NASA Astrophysics Data System (ADS)
Wang, Bo; Chen, Yunliang; Wu, Chao; Peng, Yong; Song, Jiajun; Liu, Wenjun; Liu, Xin
2018-07-01
Prediction of the peak discharge of floods has attracted great attention from researchers and engineers. In the present study, nine typical nonlinear mathematical models are established based on a database of 40 historical dam failures. The first eight models, developed with a series of regression analyses, are purely empirical, while the last one is a semi-analytical approach derived from an analytical solution of dam-break floods in a trapezoidal channel. Water depth above breach invert (Hw), volume of water stored above breach invert (Vw), embankment length (El), and average embankment width (Ew) are used as independent variables to develop empirical formulas for estimating the peak outflow from breached embankment dams. The multiple regression analysis indicates that a function using the former two variables (i.e., Hw and Vw) produces considerably more accurate results than one using the latter two variables (i.e., El and Ew). The semi-analytical approach works best in terms of both prediction accuracy and uncertainty, and the established empirical models produce reasonable results except for the model using only El. Moreover, the present models have been compared with other models available in the literature for estimating peak discharge.
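A regression of the best-performing form, peak discharge as a power function of Hw and Vw, reduces to linear least squares in log space; a sketch with synthetic records standing in for the 40-case database:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
Hw = rng.uniform(3, 60, n)           # water depth above breach invert (m)
Vw = rng.uniform(1e5, 5e8, n)        # storage above breach invert (m3)
Qp = 0.2 * Hw**1.2 * Vw**0.45 * rng.lognormal(0, 0.3, n)  # synthetic peaks (m3/s)

# fit Qp = a * Hw^b * Vw^c by ordinary least squares on logs
A = np.column_stack([np.ones(n), np.log(Hw), np.log(Vw)])
coef, *_ = np.linalg.lstsq(A, np.log(Qp), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
print(f"Qp ~ {a:.3f} * Hw^{b:.2f} * Vw^{c:.2f}")
```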
Using Empirical Models for Communication Prediction of Spacecraft
NASA Technical Reports Server (NTRS)
Quasny, Todd
2015-01-01
A viable communication path to a spacecraft is vital for its successful operation. For human spaceflight, a reliable and predictable communication link between the spacecraft and the ground is essential not only for the safety of the vehicle and the success of the mission, but for the safety of the humans on board as well. However, analytical models of these communication links are challenged by unique characteristics of space and the vehicle itself. For example, effects on radio frequency during high-energy solar events while traveling through a solar array of a spacecraft can be difficult to model, and thus to predict. This presentation covers the use of empirical methods for communication link prediction, using the International Space Station (ISS) and its associated historical data as the verification platform and test bed. These empirical methods can then be incorporated into communication prediction and automation tools for the ISS in order to better understand the quality of the communication path given a myriad of variables, including solar array positions, line of sight to satellites, position of the sun, and other dynamic structures on the outside of the ISS. The image on the left below shows the current analytical model of one of the communication systems on the ISS. The image on the right shows a rudimentary empirical model of the same system based on historical archived data from the ISS.
Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter
NASA Technical Reports Server (NTRS)
Mahajan, Aparajit J.; Kaza, Krishna Rao V.
1992-01-01
A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
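The aerodynamic sub-model described, a second-order ordinary differential equation for lift driven by airfoil motion, can be sketched with a standard integrator; the coefficients and the sinusoidal pitch input below are illustrative placeholders for the empirically identified ones:

```python
import numpy as np
from scipy.integrate import solve_ivp

def alpha(t):
    """Prescribed pitching motion (rad): the model input."""
    return np.deg2rad(4.0) * np.sin(2.0 * np.pi * 1.5 * t)

def lift_ode(t, y, wn=20.0, zeta=0.4, gain=2.0 * np.pi):
    """cl'' + 2*zeta*wn*cl' + wn^2*cl = wn^2 * gain * alpha(t).

    wn, zeta, gain are illustrative; in the paper they are identified
    empirically for each airfoil and flow condition."""
    cl, cl_dot = y
    cl_ddot = wn**2 * (gain * alpha(t) - cl) - 2.0 * zeta * wn * cl_dot
    return [cl_dot, cl_ddot]

sol = solve_ivp(lift_ode, (0.0, 2.0), [0.0, 0.0], max_step=1e-3)
print("lift coefficient range:", sol.y[0].min().round(3), sol.y[0].max().round(3))
```

In the flutter application this aerodynamic ODE is integrated simultaneously with the two-degree-of-freedom structural equations, with the lift coefficient feeding back into the structural forcing.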
Semi-empirical model for prediction of unsteady forces on an airfoil with application to flutter
NASA Technical Reports Server (NTRS)
Mahajan, A. J.; Kaza, K. R. V.; Dowell, E. H.
1993-01-01
A semi-empirical model is described for predicting unsteady aerodynamic forces on arbitrary airfoils under mildly stalled and unstalled conditions. Aerodynamic forces are modeled using second order ordinary differential equations for lift and moment with airfoil motion as the input. This model is simultaneously integrated with structural dynamics equations to determine flutter characteristics for a two degrees-of-freedom system. Results for a number of cases are presented to demonstrate the suitability of this model to predict flutter. Comparison is made to the flutter characteristics determined by a Navier-Stokes solver and also the classical incompressible potential flow theory.
NASA Technical Reports Server (NTRS)
Sebok, Angelia; Wickens, Christopher; Sargent, Robert
2015-01-01
One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.
Prediction of early summer rainfall over South China by a physical-empirical model
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen
2014-10-01
In early summer (May-June, MJ) the strongest rainfall belt of the northern hemisphere occurs over the East Asian (EA) subtropical front. During this period the South China (SC) rainfall reaches its annual peak and represents the maximum rainfall variability over EA. Hence we establish an SC rainfall index, which is the MJ mean precipitation averaged over 72 stations over SC (south of 28°N and east of 110°E) and superbly represents the leading empirical orthogonal function mode of MJ precipitation variability over EA. In order to predict SC rainfall, we established a physical-empirical model. Analysis of 34 years of observations (1979-2012) reveals three physically consequential predictors. A plentiful SC rainfall is preceded in the previous winter by (a) a dipole sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (b) a tripolar SST tendency in the North Atlantic Ocean, and (c) a warming tendency in northern Asia. These precursors foreshadow an enhanced Philippine Sea subtropical high and Okhotsk high in early summer, which are controlling factors for enhanced subtropical frontal rainfall. The physical-empirical model built on these predictors achieves a cross-validated forecast correlation skill of 0.75 for 1979-2012. Surprisingly, this skill is substantially higher than the four dynamical models' ensemble prediction for the 1979-2010 period (0.15). The results here suggest that the low prediction skill of current dynamical models is largely due to model deficiencies, and that dynamical prediction has large room for improvement.
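A cross-validated correlation skill of the kind quoted (0.75) is obtained by leaving each year out in turn, refitting the predictor regression, and correlating the out-of-sample predictions with observations; a sketch with synthetic stand-ins for the three winter precursors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(6)
n_years = 34                                # 1979-2012 as in the study
X = rng.normal(size=(n_years, 3))           # three precursor indices (synthetic)
y = X @ np.array([0.8, 0.5, 0.4]) + rng.normal(0, 0.8, n_years)  # rainfall index

y_hat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print("cross-validated correlation skill:", np.corrcoef(y_hat, y)[0, 1].round(2))
```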
Bobovská, Adela; Tvaroška, Igor; Kóňa, Juraj
2016-05-01
Human Golgi α-mannosidase II (GMII), a zinc ion co-factor dependent glycoside hydrolase (E.C.3.2.1.114), is a pharmaceutical target for the design of inhibitors with anti-cancer activity. The discovery of an effective inhibitor is complicated by the fact that all known potent inhibitors of GMII are involved in unwanted co-inhibition of lysosomal α-mannosidase (LMan, E.C.3.2.1.24), a relative of GMII. Routine empirical QSAR models for both GMII and LMan did not work with the required accuracy. Therefore, we have developed a fast computational protocol to build predictive models combining interaction energy descriptors from an empirical docking scoring function (Glide-Schrödinger), the Linear Interaction Energy (LIE) method, and quantum mechanical density functional theory (QM-DFT) calculations. The QSAR models were built and validated with a library of structurally diverse GMII and LMan inhibitors and non-active compounds. A critical role of QM-DFT descriptors in the more accurate prediction abilities of the models is demonstrated. The predictive ability of the models was significantly improved when going from the empirical docking scoring function to mixed empirical-QM-DFT QSAR models (Q(2)=0.78-0.86 when cross-validation procedures were carried out, and R(2)=0.81-0.83 for a testing set). The average error for the predicted ΔGbind decreased to 0.8-1.1 kcal mol(-1). Also, 76-80% of non-active compounds were successfully filtered out from GMII and LMan inhibitors. The QSAR models with the fragmented QM-DFT descriptors may find useful application in structure-based drug design where pure empirical and force field methods reach their limits and where quantum mechanics effects are critical for ligand-receptor interactions. The optimized models will be applied in lead optimization for GMII drug development.
NASA Astrophysics Data System (ADS)
Jaber, Abobaker M.
2014-12-01
Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behaviour and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select significant decomposed signals to be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing price of the Kuala Lumpur stock market index.
Predicting Operator Execution Times Using CogTool
NASA Technical Reports Server (NTRS)
Santiago-Espada, Yamira; Latorella, Kara A.
2013-01-01
Researchers and developers of NextGen systems can use predictive human performance modeling tools as an initial approach to obtain skilled user performance times analytically, before system testing with users. This paper describes the CogTool models for a two-pilot crew executing two different types of datalink clearance acceptance tasks, on two different simulation platforms. The CogTool time estimates for accepting and executing Required Time of Arrival and Interval Management clearances were compared to empirical data observed in video tapes and registered in simulation files. Results indicate no statistically significant difference between the empirical data and the CogTool predictions. A population comparison test found no significant differences between the CogTool estimates and the empirical execution times for any of the four test conditions. We discuss modeling caveats and considerations for applying CogTool to crew performance modeling in advanced cockpit environments.
Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2016-12-01
Many implementations of a model-based approach for toroidal plasmas have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, which is to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.
Iowa calibration of MEPDG performance prediction models.
DOT National Transportation Integrated Search
2013-06-01
This study aims to improve the accuracy of AASHTO Mechanistic-Empirical Pavement Design Guide (MEPDG) pavement performance predictions for Iowa pavement systems through local calibration of MEPDG prediction models. A total of 130 representative p...
Hybrid BEM/empirical approach for scattering of correlated sources in rocket noise prediction
NASA Astrophysics Data System (ADS)
Barbarino, Mattia; Adamo, Francesco P.; Bianco, Davide; Bartoccini, Daniele
2017-09-01
Empirical models such as the Eldred standard model are commonly used for rocket noise prediction. Such models directly provide a definition of the sound pressure level through the quadratic pressure term from uncorrelated sources. In this paper, an improvement of the Eldred standard model has been formulated. This new formulation contains an explicit expression for the acoustic pressure of each noise source, in terms of amplitude and phase, in order to investigate source correlation effects and to propagate them through a wave equation. In particular, the correlation effects between adjacent and non-adjacent sources have been modeled and analyzed. The noise prediction obtained with the revised Eldred-based model has then been used to formulate an empirical/BEM (Boundary Element Method) hybrid approach that allows an evaluation of the scattering effects. In the framework of the European Space Agency-funded programme VECEP (VEga Consolidation and Evolution Programme), these models have been applied to the prediction of the aeroacoustic loads of the VEGA (Vettore Europeo di Generazione Avanzata - Advanced Generation European Carrier Rocket) launch vehicle at lift-off, and the results have been compared with experimental data.
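The difference between the uncorrelated formulation and the revised model with explicit amplitudes and phases shows up in how the source contributions are combined at a receiver; a single-frequency, free-field sketch with illustrative amplitudes:

```python
import numpy as np

P_REF = 20e-6  # reference pressure (Pa)

def spl_uncorrelated(amps):
    """Energy sum: phases ignored, as in the standard Eldred-type model."""
    p2 = np.sum(0.5 * np.asarray(amps) ** 2)   # mean-square pressure
    return 10.0 * np.log10(p2 / P_REF**2)

def spl_correlated(amps, phases):
    """Coherent sum: complex pressures added before squaring."""
    p = np.sum(np.asarray(amps) * np.exp(1j * np.asarray(phases)))
    return 10.0 * np.log10(0.5 * np.abs(p) ** 2 / P_REF**2)

amps = [2.0, 1.5, 1.0]                           # illustrative amplitudes (Pa)
print(spl_uncorrelated(amps))                    # energy sum of the three sources
print(spl_correlated(amps, [0.0, 0.3, 0.6]))     # nearly in phase: louder
print(spl_correlated(amps, [0.0, np.pi, 0.0]))   # partial cancellation: quieter
```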
An empirically-based model for the lift coefficients of twisted airfoils with leading-edge tubercles
NASA Astrophysics Data System (ADS)
Ni, Zao; Su, Tsung-chow; Dhanak, Manhar
2018-04-01
Experimental data for untwisted airfoils are utilized to propose a model for predicting the lift coefficients of twisted airfoils with leading-edge tubercles. The effectiveness of the empirical model is verified through comparison with results of a corresponding computational fluid-dynamic (CFD) study. The CFD study is carried out for both twisted and untwisted airfoils with tubercles, the latter shown to compare well with available experimental data. Lift coefficients of twisted airfoils predicted from the proposed empirically-based model match well with the corresponding coefficients determined using the verified CFD study. Flow details obtained from the latter provide better insight into the underlying mechanism and behavior at stall of twisted airfoils with leading edge tubercles.
Størset, Elisabet; Holford, Nick; Hennig, Stefanie; Bergmann, Troels K; Bergan, Stein; Bremer, Sara; Åsberg, Anders; Midtvedt, Karsten; Staatz, Christine E
2014-09-01
The aim was to develop a theory-based population pharmacokinetic model of tacrolimus in adult kidney transplant recipients and to externally evaluate this model and two previous empirical models. Data were obtained from 242 patients with 3100 tacrolimus whole blood concentrations. External evaluation was performed by examining model predictive performance using Bayesian forecasting. Pharmacokinetic disposition parameters were estimated based on tacrolimus plasma concentrations, predicted from whole blood concentrations, haematocrit and literature values for tacrolimus binding to red blood cells. Disposition parameters were allometrically scaled to fat free mass. Tacrolimus whole blood clearance/bioavailability standardized to a haematocrit of 45% and a fat free mass of 60 kg was estimated to be 16.1 l h⁻¹ [95% CI 12.6, 18.0 l h⁻¹]. Tacrolimus clearance was 30% higher (95% CI 13, 46%) and bioavailability 18% lower (95% CI 2, 29%) in CYP3A5 expressers compared with non-expressers. An Emax model described decreasing tacrolimus bioavailability with increasing prednisolone dose. The theory-based model was superior to the empirical models during external evaluation, with a median prediction error of −1.2% (95% CI −3.0, 0.1%). Based on simulation, Bayesian forecasting led to 65% (95% CI 62, 68%) of patients achieving a tacrolimus average steady-state concentration within a suggested acceptable range. A theory-based population pharmacokinetic model was superior to two empirical models for prediction of tacrolimus concentrations and seemed suitable for Bayesian prediction of tacrolimus doses early after kidney transplantation.
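To illustrate the structure of such a model, the sketch below scales a standardized clearance to a patient's haematocrit, fat free mass, and CYP3A5 status. The allometric exponent and the haematocrit correction form are assumptions for illustration, not the published model equations; only the 30% CYP3A5 effect and the 16.1 l h⁻¹ reference value are taken from the abstract.

```python
# Hedged sketch of clearance standardisation in a theory-based PK model.
def whole_blood_clearance(cl_std=16.1, hct=0.45, ffm=60.0,
                          cyp3a5_expresser=False):
    """Scale a standardised clearance (l/h at HCT 45%, FFM 60 kg)."""
    cl = cl_std * (ffm / 60.0) ** 0.75  # allometric size scaling (assumed exponent)
    cl *= 0.45 / hct                    # whole-blood/plasma partition (assumed form)
    if cyp3a5_expresser:
        cl *= 1.30                      # +30% clearance reported for expressers
    return cl

print(whole_blood_clearance(hct=0.35, ffm=50.0, cyp3a5_expresser=True))
```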
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves prediction accuracy and the hit rates of directional prediction. The model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; predicting each IMF individually with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the final ensemble prediction with another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in prediction accuracy and hit rates of directional prediction.
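A sketch of the decomposition-ensemble workflow in Python, with two stand-ins flagged as assumptions: PyEMD's CEEMDAN is used in place of CEEMD, and a plain grid search replaces the GWO hyper-parameter optimization.

```python
# Decompose -> per-IMF SVR -> SVR-based ensemble (in-sample illustration).
import numpy as np
from PyEMD import CEEMDAN            # CEEMDAN stands in for CEEMD here
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def lagged(series, n_lags=4):
    X = np.column_stack([series[i:len(series) - n_lags + i]
                         for i in range(n_lags)])
    return X, series[n_lags:]

def fit_svr(X, y):
    grid = {"C": [1, 10, 100], "gamma": ["scale", 0.1]}  # grid search, not GWO
    return GridSearchCV(SVR(), grid, cv=3).fit(X, y)

def decompose_ensemble_forecast(series, n_lags=4):
    imfs = CEEMDAN().ceemdan(series)          # decompose into IMFs
    imf_preds = []
    for imf in imfs:                          # predict each IMF separately
        X, y = lagged(imf, n_lags)
        imf_preds.append(fit_svr(X, y).predict(X))
    stacked = np.column_stack(imf_preds)      # ensemble the IMF predictions
    _, target = lagged(series, n_lags)
    return fit_svr(stacked, target).predict(stacked)
```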
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
This paper investigates the forecasting ability of Mallows Model Averaging (MMA) through an empirical analysis of the GDP growth rates of five Asian countries: Malaysia, Thailand, the Philippines, Indonesia, and China. Results reveal that MMA shows no noticeable difference in predictive ability compared with the autoregressive fractionally integrated moving average (ARFIMA) model, and its predictive ability is sensitive to the effect of financial crises. MMA could be an alternative forecasting method for samples without recent outliers such as financial crises.
Galindo-Romero, Marta; Lippert, Tristan; Gavrilov, Alexander
2015-12-01
This paper presents an empirical linear equation to predict the peak pressure level of anthropogenic impulsive signals based on its correlation with the sound exposure level. The regression coefficients are shown to be weakly dependent on the environmental characteristics but governed by the source type and parameters. The equation can be applied to values of the sound exposure level predicted with a numerical model, which provides a significant improvement in the prediction of the peak pressure level. Part I presents the analysis for airgun array signals, and Part II considers the application of the empirical equation to offshore impact piling noise.
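The workflow implied here is a one-variable linear fit followed by application to modelled SEL values. A minimal sketch, with illustrative function names:

```python
# Fit SPL_peak = a * SEL + b on measurements, then apply to SEL values
# produced by a numerical propagation model.
import numpy as np

def fit_peak_from_sel(sel_db, peak_db):
    a, b = np.polyfit(sel_db, peak_db, 1)
    return a, b

def predict_peak(sel_db_modelled, a, b):
    return a * np.asarray(sel_db_modelled) + b
```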
Predicting the particle size distribution of eroded sediment using artificial neural networks.
Lagos-Avid, María Paz; Bonilla, Carlos A
2017-03-01
Water erosion causes soil degradation and nonpoint pollution. Pollutants are primarily transported on the surfaces of fine soil and sediment particles. Several soil loss models and empirical equations have been developed to estimate the size distribution of the sediment leaving the field. Physically-based models usually require a large amount of data, sometimes exceeding the amount available in the modeled area. Conversely, empirical equations do not always predict the sediment composition associated with individual events and may require data that are not always available. Therefore, the objective of this study was to develop a model to predict the particle size distribution (PSD) of eroded soil. A total of 41 erosion events from 21 soils were used. These data were compiled from previous studies. Correlation and multiple regression analyses were used to identify the main variables controlling sediment PSD: the particle size distribution of the soil matrix, the antecedent soil moisture condition, soil erodibility, and hillslope geometry. With these variables, an artificial neural network was calibrated using data from 29 events (r² = 0.98, 0.97, and 0.86 for sand, silt, and clay in the sediment, respectively) and then validated and tested on 12 events (r² = 0.74, 0.85, and 0.75, respectively). The artificial neural network was compared with three empirical models and showed better performance in predicting sediment PSD and in differentiating rain-runoff events in the same soil. In addition to the quality of the particle distribution estimates, the model requires a small number of easily obtained variables, providing a convenient routine for predicting PSD of eroded sediment in other pollutant transport models. Copyright © 2017 Elsevier B.V. All rights reserved.
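A minimal sketch of this kind of network, assuming the four variable groups above are assembled into a feature matrix; the hidden-layer size and scaling are illustrative choices, not the calibrated architecture.

```python
# Map event features (soil matrix PSD, antecedent moisture, erodibility,
# hillslope geometry) to sand/silt/clay fractions of the eroded sediment.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_psd_model(X, y):
    """X: (n_events, n_features); y: (n_events, 3) sand/silt/clay fractions."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
    )
    return model.fit(X, y)

def predict_psd(model, X_new):
    frac = np.clip(model.predict(X_new), 0, None)
    return frac / frac.sum(axis=1, keepdims=True)  # renormalise to sum to 1
```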
A robust empirical seasonal prediction of winter NAO and surface climate.
Wang, L; Ting, M; Kushner, P J
2017-03-21
A key determinant of winter weather and climate in Europe and North America is the North Atlantic Oscillation (NAO), the dominant mode of atmospheric variability in the Atlantic domain. Skilful seasonal forecasting of the surface climate in both Europe and North America is reflected largely in how accurately models can predict the NAO. Most dynamical models, however, have limited skill in seasonal forecasts of the winter NAO. A new empirical model is proposed for the seasonal forecast of the winter NAO that exhibits higher skill than current dynamical models. The empirical model provides robust and skilful prediction of the December-January-February (DJF) mean NAO index using a multiple linear regression (MLR) technique with autumn conditions of sea-ice concentration, stratospheric circulation, and sea-surface temperature. The predictability is, for the most part, derived from the relatively long persistence of sea ice in the autumn. The lower stratospheric circulation and sea-surface temperature appear to play more indirect roles through a series of feedbacks among systems driving NAO evolution. This MLR model also provides skilful seasonal outlooks of winter surface temperature and precipitation over many regions of Eurasia and eastern North America.
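A minimal sketch of such an MLR hindcast, assuming the user supplies yearly autumn predictor indices and the observed DJF NAO index; the leave-one-out validation is an illustrative choice, not necessarily the paper's exact design.

```python
# DJF NAO index regressed on autumn predictors (sea-ice concentration,
# stratospheric circulation, SST indices), with leave-one-out skill.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def hindcast_nao(predictors, nao_index):
    """predictors: (n_years, 3) autumn indices; nao_index: (n_years,)."""
    model = LinearRegression()
    hindcast = cross_val_predict(model, predictors, nao_index, cv=LeaveOneOut())
    skill = np.corrcoef(hindcast, nao_index)[0, 1]
    return model.fit(predictors, nao_index), hindcast, skill
```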
Accuracy Analysis of a Box-wing Theoretical SRP Model
NASA Astrophysics Data System (ADS)
Wang, Xiaoya; Hu, Xiaogong; Zhao, Qunhe; Guo, Rui
2016-07-01
For the Beidou satellite navigation system (BDS), a high-accuracy SRP model is necessary for high-precision applications, especially with the global BDS to be established in the future, and the accuracy of the broadcast ephemeris needs to be improved. We therefore established a box-wing theoretical SRP model with fine structure, including the conical shadow factors of the Earth and Moon. We verified this SRP model with the GPS Block IIF satellites, using data from the PRN 1, 24, 25, and 27 satellites. The results show that the physical SRP model achieves higher accuracy for POD and orbit prediction of GPS IIF satellites than the Bern empirical model; the 3D RMS of the orbit is about 20 centimeters. POD accuracy is similar for both models, but prediction accuracy with the physical SRP model is more than doubled. We tested 1-day, 3-day, and 7-day orbit predictions: the longer the prediction arc, the more significant the improvement. Orbit prediction accuracy with the physical SRP model for 1-day, 3-day, and 7-day arcs is 0.4 m, 2.0 m, and 10.0 m respectively, versus 0.9 m, 5.5 m, and 30 m with the Bern empirical model. We applied this approach to BDS and derived an SRP model for the Beidou satellites, which we then tested with one month of Beidou data. Initial results show the model is promising but needs more data for verification and improvement. The orbit residual RMS is similar to that of our empirical force model, which estimates forces only in the along-track and across-track directions plus a y-bias, but the orbit overlap and SLR observation evaluations show some improvement, and the remaining empirical force is reduced significantly for the present Beidou constellation.
Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.
2016-01-01
Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785
Gordon, J.A.; Freedman, B.R.; Zuskov, A.; Iozzo, R.V.; Birk, D.E.; Soslowsky, L.J.
2015-01-01
Achilles tendons are a common source of pain and injury, and their pathology may originate from aberrant structure function relationships. Small leucine rich proteoglycans (SLRPs) influence mechanical and structural properties in a tendon-specific manner. However, their roles in the Achilles tendon have not been defined. The objective of this study was to evaluate the mechanical and structural differences observed in mouse Achilles tendons lacking class I SLRPs; either decorin or biglycan. In addition, empirical modeling techniques based on mechanical and image-based measures were employed. Achilles tendons from decorin-null (Dcn−/−) and biglycan-null (Bgn−/−) C57BL/6 female mice (N=102) were used. Each tendon underwent a dynamic mechanical testing protocol including simultaneous polarized light image capture to evaluate both structural and mechanical properties of each Achilles tendon. An empirical damage model was adapted for application to genetic variation and for use with image based structural properties to predict tendon dynamic mechanical properties. We found that Achilles tendons lacking decorin and biglycan had inferior mechanical and structural properties that were age dependent; and that simple empirical models, based on previously described damage models, were predictive of Achilles tendon dynamic modulus in both decorin- and biglycan-null mice. PMID:25888014
Quantitative model of the growth of floodplains by vertical accretion
Moody, J.A.; Troutman, B.M.
2000-01-01
A simple one-dimensional model is developed to quantitatively predict the change in elevation, over a period of decades, for vertically accreting floodplains. This unsteady model approximates the monotonic growth of a floodplain as an incremental but constant increase of net sediment deposition per flood for those floods of a partial duration series that exceed a threshold discharge corresponding to the elevation of the floodplain. Sediment deposition from each flood increases the elevation of the floodplain and consequently the magnitude of the threshold discharge, resulting in a decrease in the number of floods and in the growth rate of the floodplain. Floodplain growth curves predicted by this model are compared to empirical growth curves based on dendrochronology and to direct field measurements at five floodplain sites. The model was used to predict the value of net sediment deposition per flood that best fits (in a least squares sense) the empirical and field measurements; these values fall within the range of independent estimates of net sediment deposition per flood based on empirical equations. These empirical equations permit application of the model to the estimation of floodplain growth for other floodplains throughout the world that lack detailed data on sediment deposition during individual floods. Copyright (C) 2000 John Wiley and Sons, Ltd.
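The model's feedback loop (deposition raises the floodplain, which raises the threshold discharge and thins out the partial duration series) can be sketched in a few lines; the rating relation linking elevation to threshold discharge below is an assumed power law for illustration.

```python
# Incremental accretion: each flood exceeding the current threshold
# discharge adds a constant increment of net sediment deposition.
import numpy as np

def simulate_floodplain(annual_peaks, dz_per_flood=0.02, z0=0.0,
                        q_threshold0=100.0, rating_exp=1.5):
    """annual_peaks: iterable of peak discharges (m3/s); returns elevations (m)."""
    z, elevations = z0, []
    for q in annual_peaks:
        # threshold discharge rises with elevation (assumed rating relation)
        q_threshold = q_threshold0 * (1.0 + z) ** rating_exp
        if q > q_threshold:
            z += dz_per_flood  # constant net deposition per flood
        elevations.append(z)
    return np.array(elevations)
```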
Testing Feedback Models with Nearby Star Forming Regions
NASA Astrophysics Data System (ADS)
Doran, E.; Crowther, P.
2012-12-01
The feedback from massive stars plays a crucial role in the evolution of galaxies. Accurate modelling of this feedback is essential in understanding distant star forming regions. Young, nearby, high-mass (> 10⁴ M⊙) clusters such as R136 (in the 30 Doradus region) are ideal test beds for population synthesis since they host large numbers of spatially resolved massive stars at a pre-supernova stage. We present a quantitative comparison of empirical calibrations of radiative and mechanical feedback from individual stars in R136 with instantaneous burst predictions from the popular Starburst99 evolution synthesis code. We find that empirical results exceed predictions by factors of ~3-9, as a result of limiting simulations to an upper mass of 100 M⊙. Stars of 100-300 M⊙ should be incorporated in population synthesis models for high-mass clusters to bring predictions into close agreement with empirical results.
Models for predicting fuel consumption in sagebrush-dominated ecosystems
Clinton S. Wright
2013-01-01
Fuel consumption predictions are necessary to accurately estimate or model fire effects, including pollutant emissions during wildland fires. Fuel and environmental measurements on a series of operational prescribed fires were used to develop empirical models for predicting fuel consumption in big sagebrush (Artemisia tridentata Nutt.) ecosystems....
Hristov, A N; Kebreab, E; Niu, M; Oh, J; Bannink, A; Bayat, A R; Boland, T B; Brito, A F; Casper, D P; Crompton, L A; Dijkstra, J; Eugène, M; Garnsworthy, P C; Haque, N; Hellwing, A L F; Huhtanen, P; Kreuzer, M; Kuhla, B; Lund, P; Madsen, J; Martin, C; Moate, P J; Muetzel, S; Muñoz, C; Peiren, N; Powell, J M; Reynolds, C K; Schwarm, A; Shingfield, K J; Storlien, T M; Weisbjerg, M R; Yáñez-Ruiz, D R; Yu, Z
2018-04-18
Ruminant production systems are important contributors to anthropogenic methane (CH4) emissions, but there are large uncertainties in national and global livestock CH4 inventories. Sources of uncertainty in enteric CH4 emissions include animal inventories, feed dry matter intake (DMI), ingredient and chemical composition of the diets, and CH4 emission factors. There is also significant uncertainty associated with enteric CH4 measurements. The most widely used techniques are respiration chambers, the sulfur hexafluoride (SF6) tracer technique, and the automated head-chamber system (GreenFeed; C-Lock Inc., Rapid City, SD). All 3 methods have been successfully used in a large number of experiments with dairy or beef cattle in various environmental conditions, although studies that compare techniques have reported inconsistent results. Although different types of models have been developed to predict enteric CH4 emissions, relatively simple empirical (statistical) models have been commonly used for inventory purposes because of their broad applicability and ease of use compared with more detailed empirical and process-based mechanistic models. However, extant empirical models used to predict enteric CH4 emissions suffer from narrow spatial focus, limited observations, and limitations of the statistical technique used. Therefore, prediction models must be developed from robust data sets that can only be generated through collaboration of scientists across the world. To achieve high prediction accuracy, these data sets should encompass a wide range of diets and production systems within regions and globally. Overall, enteric CH4 prediction models are based on various animal or feed characteristic inputs but are dominated by DMI in one form or another. As a result, accurate prediction of DMI is essential for accurate prediction of livestock CH4 emissions. Analysis of a large data set of individual dairy cattle data showed that simplified enteric CH4 prediction models based on DMI alone or DMI and limited feed- or animal-related inputs can predict average CH4 emission with a similar accuracy to more complex empirical models. These simplified models can be reliably used for emission inventory purposes. The Authors. Published by FASS Inc. and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
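A simplified DMI-based model of the kind described, sketched as a linear fit on user-supplied animal records; no published coefficients are implied.

```python
# Fit and apply a minimal DMI-only enteric CH4 prediction model.
import numpy as np

def fit_ch4_from_dmi(dmi_kg_d, ch4_g_d):
    """Linear fit of daily CH4 emission (g/d) on dry matter intake (kg/d)."""
    slope, intercept = np.polyfit(dmi_kg_d, ch4_g_d, 1)
    return slope, intercept

def predict_ch4(dmi_kg_d, slope, intercept):
    return slope * np.asarray(dmi_kg_d) + intercept
```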
A Proposed Change to ITU-R Recommendation 681
NASA Technical Reports Server (NTRS)
Davarian, F.
1996-01-01
Recommendation 681 of the International Telecommunications Union (ITU) provides five models for the prediction of propagation effects on land mobile satellite links: empirical roadside shadowing (ERS), attenuation frequency scaling, fade duration distribution, non-fade duration distribution, and fading due to multipath. Because the above prediction models have been empirically derived using a limited amount of data, these schemes work only for restricted ranges of link parameters. With the first two models, for example, the frequency and elevation angle parameters are restricted to 0.8 to 2.7 GHz and 20 to 60 degrees, respectively. Recently measured data have enabled us to enhance the range of the first two schemes. Moreover, for convenience, they have been combined into a single scheme named the extended empirical roadside shadowing (EERS) model.
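For orientation, a hedged sketch of an ERS-type calculation: the form A(p, θ) = −M(θ) ln p + N(θ) and the coefficients below follow the commonly cited ITU-R P.681 roadside-shadowing fit at 1.5 GHz, with its exponential frequency-scaling law; treat both as assumptions here, since the EERS extension itself is not reproduced.

```python
# Roadside-shadowing fade depth exceeded p% of the time (assumed ERS form).
import numpy as np

def ers_fade_db(p_percent, elev_deg, freq_ghz=1.5):
    M = 3.44 + 0.0975 * elev_deg - 0.002 * elev_deg**2
    N = -0.443 * elev_deg + 34.76
    a_1500 = -M * np.log(p_percent) + N  # fit at 1.5 GHz (assumed coefficients)
    # assumed exponential frequency scaling from 1.5 GHz to freq_ghz
    return a_1500 * np.exp(1.5 * (1 / np.sqrt(1.5) - 1 / np.sqrt(freq_ghz)))

print(ers_fade_db(p_percent=5.0, elev_deg=45.0, freq_ghz=2.6))
```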
Exploring predictive performance: A reanalysis of the geospace model transition challenge
NASA Astrophysics Data System (ADS)
Welling, D. T.; Anderson, B. J.; Crowley, G.; Pulkkinen, A. A.; Rastätter, L.
2017-01-01
The Pulkkinen et al. (2013) study evaluated the ability of five different geospace models to predict surface dB/dt as a function of upstream solar drivers. This was an important step in the assessment of research models for predicting and ultimately preventing the damaging effects of geomagnetically induced currents. Many questions remain concerning the capabilities of these models. This study presents a reanalysis of the Pulkkinen et al. (2013) results in an attempt to better understand the models' performance. The range of validity of the models is determined by examining the conditions corresponding to the empirical input data. It is found that the empirical conductance models on which global magnetohydrodynamic models rely are frequently used outside the limits of their input data. The prediction error for the models is sorted as a function of solar driving and geomagnetic activity. It is found that all models show a bias toward underprediction, especially during active times. These results have implications for future research aimed at improving operational forecast models.
Rethinking Indian monsoon rainfall prediction in the context of recent global warming
NASA Astrophysics Data System (ADS)
Wang, Bin; Xiang, Baoqiang; Li, Juan; Webster, Peter J.; Rajeevan, Madhavan N.; Liu, Jian; Ha, Kyung-Ja
2015-05-01
Prediction of Indian summer monsoon rainfall (ISMR) is at the heart of tropical climate prediction. Despite enormous progress having been made in predicting ISMR since 1886, the operational forecasts during recent decades (1989-2012) have little skill. Here we show, with both dynamical and physical-empirical models, that this recent failure is largely due to the models' inability to capture new predictability sources emerging during recent global warming, that is, the development of the central-Pacific El Nino-Southern Oscillation (CP-ENSO), the rapid deepening of the Asian Low and the strengthening of North and South Pacific Highs during boreal spring. A physical-empirical model that captures these new predictors can produce an independent forecast skill of 0.51 for 1989-2012 and a 92-year retrospective forecast skill of 0.64 for 1921-2012. The recent low skills of the dynamical models are attributed to deficiencies in capturing the developing CP-ENSO and anomalous Asian Low. The results reveal a considerable gap between ISMR prediction skill and predictability.
A. Weiskittel; D. Maguire; R. Monserud
2007-01-01
Hybrid models offer the opportunity to improve future growth projections by combining advantages of both empirical and process-based modeling approaches. Hybrid models have been constructed in several regions and their performance relative to a purely empirical approach has varied. A hybrid model was constructed for intensively managed Douglas-fir plantations in the...
NASA Technical Reports Server (NTRS)
Desai, S.; Wahr, J.
1998-01-01
Empirical models of the two largest constituents of the long-period ocean tides, the monthly and the fortnightly constituents, are estimated from repeat cycles 10 to 210 of the TOPEX/POSEIDON (T/P) mission.
Predicting the Magnetic Properties of ICMEs: A Pragmatic View
NASA Astrophysics Data System (ADS)
Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.
2017-12-01
The southward component of the interplanetary magnetic field plays a crucial role in being able to successfully predict space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.
NASA Astrophysics Data System (ADS)
Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan
2016-10-01
Hard turning is increasingly employed in machining to replace time-consuming conventional turning followed by grinding. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most were developed for a particular work-tool-environment combination; no aggregate model has been developed that predicts the amount of principal flank wear for a given machining time. An empirical model of principal flank wear (VB) has been developed for workpieces of different hardness (HRC40, HRC48 and HRC56) turned with coated carbide inserts of different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other models, this one includes dummy variables alongside the base empirical equation to capture the effect of changes in the input conditions on the response. The base empirical equation for principal flank wear is formulated by fitting the Exponential Associate Function to the experimental results. The coefficient of each dummy variable reflects the shift of the response from one set of machining conditions to another and is determined by simple linear regression. The independent cutting parameters (speed, feed rate, depth of cut) are kept constant while formulating and analyzing this model. The developed model is validated with different sets of machining responses in turning hardened medium carbon steel with coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a given machining time. Since the predicted results exhibit good resemblance with the experimental data and the average percentage error is <10%, this model can be used to predict the principal flank wear for the stated conditions.
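The structure described (an exponential-associate base curve in machining time plus a dummy-variable shift between machining conditions) can be sketched as follows; parameter names and starting values are illustrative assumptions.

```python
# Flank wear VB(t) as an exponential associate curve plus a condition dummy.
import numpy as np
from scipy.optimize import curve_fit

def vb_model(X, vb0, a1, t1, a2, t2, c_dummy):
    t, d = X  # t: machining time (min); d: 0/1 machining-condition dummy
    base = vb0 + a1 * (1 - np.exp(-t / t1)) + a2 * (1 - np.exp(-t / t2))
    return base + c_dummy * d  # shift between condition sets

def fit_vb(time_min, dummy, vb_mm):
    p0 = (0.02, 0.1, 5.0, 0.1, 50.0, 0.0)  # illustrative starting values
    params, _ = curve_fit(vb_model,
                          (np.asarray(time_min, float), np.asarray(dummy, float)),
                          vb_mm, p0=p0, maxfev=20000)
    return params
```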
Entrance and exit region friction factor models for annular seal analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Elrod, David Alan
1988-01-01
The Mach number definition and boundary conditions in Nelson's nominally-centered, annular gas seal analysis are revised. A method is described for determining the wall shear stress characteristics of an annular gas seal experimentally. Two friction factor models are developed for annular seal analysis; one model is based on flat-plate flow theory; the other uses empirical entrance and exit region friction factors. The friction factor predictions of the models are compared to experimental results. Each friction model is used in an annular gas seal analysis. The seal characteristics predicted by the two seal analyses are compared to experimental results and to the predictions of Nelson's analysis. The comparisons are for smooth-rotor seals with smooth and honeycomb stators. The comparisons show that the analysis which uses empirical entrance and exit region shear stress models predicts the static and stability characteristics of annular gas seals better than the other analyses. The analyses predict direct stiffness poorly.
NASA Astrophysics Data System (ADS)
Phillips, C. B.; Jerolmack, D. J.
2017-12-01
Understanding when coarse sediment begins to move in a river is essential for linking rivers to the evolution of mountainous landscapes. Unfortunately, the threshold of surface particle motion is notoriously difficult to measure in the field. However, recent studies have shown that the threshold of surface motion is empirically correlated with channel slope, a property that is easy to measure and readily available from the literature. These studies have thoroughly examined the mechanistic underpinnings behind the observed correlation and produced suitably complex models. Because these models are difficult to implement for natural rivers using widely available data, others have treated the empirical regression between slope and the threshold of motion as a predictive model. We note that none of the authors of the original studies exploring this correlation suggested their empirical regressions be used in a predictive fashion; nevertheless, these regressions have found their way into numerous recent studies, engendering potentially spurious conclusions. We demonstrate that there are two significant problems with using these empirical equations for prediction: (1) the regressions are based on a limited sampling of the phase space of bed-load rivers, and (2) the empirical measurements of bankfull and critical shear stresses are paired. These problems limit the empirical relations' predictive capacity to field sites drawn from the same region of the bed-load river phase space, and the paired nature of the data introduces a spurious correlation when considering the ratio of bankfull to critical shear stress. Using a large compilation of bed-load river hydraulic geometry data, we demonstrate that the variation within independently measured values of the threshold of motion changes systematically with bankfull Shields stress, not channel slope. Additionally, using several recent datasets, we highlight the pitfalls that one can encounter when using simplistic empirical regressions to predict the threshold of motion, showing that while these concerns could be construed as subtle, the resulting implications can be substantial.
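For concreteness, the kind of regression being cautioned against looks like the sketch below (the τ*c ≈ 0.15 S^0.25 form reported by Lamb et al. (2008) is used as the commonly cited example); the point of the abstract is that such a fit only interpolates within the sampled region of the bed-load phase space.

```python
# Slope-based prediction of the critical Shields stress and the
# corresponding dimensional critical shear stress.
def critical_shields_from_slope(slope):
    return 0.15 * slope ** 0.25  # empirical regression (Lamb et al., 2008 form)

def critical_shear_stress(slope, d50_m, rho_s=2650.0, rho=1000.0, g=9.81):
    """Dimensional critical shear stress (Pa) for median grain size d50 (m)."""
    return critical_shields_from_slope(slope) * (rho_s - rho) * g * d50_m

print(critical_shear_stress(slope=0.01, d50_m=0.05))
```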
Predicting plot soil loss by empirical and process-oriented approaches: A review
USDA-ARS?s Scientific Manuscript database
Soil erosion directly affects the quality of the soil, its agricultural productivity and its biological diversity. Many mathematical models have been developed to estimate plot soil erosion at different temporal scales. At present, empirical soil loss equations and process-oriented models are consid...
Nevers, Meredith B.; Whitman, Richard L.
2011-01-01
Efforts to improve public health protection in recreational swimming waters have focused on obtaining real-time estimates of water quality. Current monitoring techniques rely on the time-intensive culturing of fecal indicator bacteria (FIB) from water samples, but rapidly changing FIB concentrations result in management errors that lead to the public being exposed to high FIB concentrations (type II error) or beaches being closed despite acceptable water quality (type I error). Empirical predictive models may provide a rapid solution, but their effectiveness at improving health protection has not been adequately assessed. We sought to determine whether emerging monitoring approaches could effectively reduce the risk of illness exposure by minimizing management errors. We examined four monitoring approaches (inactive, current protocol, a single predictive model for all beaches, and individual models for each beach) with increasing refinement at 14 Chicago beaches using historical monitoring and hydrometeorological data, and compared management outcomes using different standards for decision-making. Predictability (R²) of FIB concentration improved with model refinement at all beaches but one. Predictive models did not always reduce the number of management errors and therefore the overall illness burden. Use of a Chicago-specific single-sample standard, rather than the widely used default of 235 E. coli CFU/100 ml, together with predictive modeling resulted in the greatest number of open beach days without any increase in public health risk. These results emphasize that emerging monitoring approaches such as empirical models are not equally applicable at all beaches, and combining monitoring approaches may expand beach access.
The rate of bubble growth in a superheated liquid in pool boiling
NASA Astrophysics Data System (ADS)
Abdollahi, Mohammad Reza; Jafarian, Mehdi; Jamialahmadi, Mohammad
2017-12-01
A semi-empirical model for the estimation of the rate of bubble growth in nucleate pool boiling is presented, considering a new equation to estimate the temperature history of the bubble in the bulk of the liquid. The conservation equations of energy, mass and momentum were first derived and solved analytically. The analytical model predicts that the bubble radius grows as √t · erf(N√t), whereas previous studies have mainly correlated bubble growth with √t alone. In the next step, the analytical solutions were used to develop a new semi-empirical equation: the solutions were non-dimensionalised, and experimental data available in the literature were applied to tune the dimensionless coefficients appearing in the dimensionless equation. Finally, the reliability of the proposed semi-empirical model was assessed through comparison of its predictions with experimental data from the literature that were not used in the tuning of the dimensionless parameters. Comparison with other models proposed in the literature was also performed. These comparisons show that the model enables more accurate predictions than previously proposed models, with a deviation of less than 10% over a wide range of operating conditions.
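A sketch of fitting the quoted growth law R(t) ∝ √t · erf(N√t) to measurements; the curve_fit route and starting values are illustrative choices, not the paper's tuning procedure.

```python
# Fit the semi-empirical bubble growth law to measured radius data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def bubble_radius(t, c, n):
    """R(t) = C sqrt(t) erf(N sqrt(t)); C, N tuned to data."""
    return c * np.sqrt(t) * erf(n * np.sqrt(t))

def fit_growth(t_s, radius_m):
    (c, n), _ = curve_fit(bubble_radius, t_s, radius_m, p0=(1e-3, 10.0))
    return c, n
```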
Comparing two-zone models of dust exposure.
Jones, Rachael M; Simmons, Catherine E; Boelter, Fred W
2011-09-01
The selection and application of mathematical models to work tasks is challenging. Previously, we developed and evaluated a semi-empirical two-zone model that predicts time-weighted average (TWA) concentrations (Ctwa) of dust emitted during the sanding of drywall joint compound. Here, we fit the emission rate and random air speed variables of a mechanistic two-zone model to testing event data and apply and evaluate the model using data from two field studies. We found that the fitted random air speed values and emission rate were sensitive to (i) the size of the near field and (ii) the objective function used for fitting, but this did not substantially impact the predicted dust Ctwa. The mechanistic model predictions were lower than the semi-empirical model predictions and the measured respirable dust Ctwa at Site A but were within an acceptable range. At Site B, a 10.5 m³ room, the mechanistic model did not capture the observed difference between PBZ and area Ctwa: it predicted uniform mixing, with dust Ctwa up to an order of magnitude greater than was measured. We suggest that applications of the mechanistic model be limited to contexts where the near-field volume is very small relative to the far-field volume.
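Both models build on the standard steady-state two-zone balance, sketched below with the usual β = ½·s·A relation for the inter-zone airflow (s is the random air speed, A the free surface area of the near field); the numbers are illustrative.

```python
# Steady-state near-field/far-field concentrations for a source of
# strength G in the near field of a room ventilated at rate Q.
def two_zone_steady(G_mg_min, Q_m3_min, s_m_min, A_nf_m2):
    beta = 0.5 * s_m_min * A_nf_m2  # inter-zone airflow (m3/min)
    c_ff = G_mg_min / Q_m3_min      # far-field concentration (mg/m3)
    c_nf = c_ff + G_mg_min / beta   # near field adds G/beta on top
    return c_nf, c_ff

print(two_zone_steady(G_mg_min=5.0, Q_m3_min=2.0, s_m_min=6.0, A_nf_m2=2.0))
```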
A framework for evaluating forest landscape model predictions using empirical data and knowledge
Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson; William D. Dijak; Qia Wang
2014-01-01
Evaluation of forest landscape model (FLM) predictions is indispensable to establish the credibility of predictions. We present a framework that evaluates short- and long-term FLM predictions at site and landscape scales. Site-scale evaluation is conducted through comparing raster cell-level predictions with inventory plot data whereas landscape-scale evaluation is...
Does Aid to Families with Dependent Children Displace Familial Assistance?
1996-07-01
brief discussion of theoretical models of familial transfers that predict displacement as well as previous empirical studies that have examined this...summarizes the findings. Models of Familial Transfers, and Previous Empirical Studies of Displacement Theoretical Models Several models of private...transfer behavior have been posed, including altruism, exchange, and "warm glow." The altruism model (Becker, 1974; Barro, 1974) states, in terms of
Marto, Aminaton; Jahed Armaghani, Danial; Tonnizam Mohamad, Edy; Makhtar, Ahmad Mahir
2014-01-01
Flyrock is one of the major disturbances induced by blasting which may cause severe damage to nearby structures. This phenomenon has to be precisely predicted and subsequently controlled through the changing in the blast design to minimize potential risk of blasting. The scope of this study is to predict flyrock induced by blasting through a novel approach based on the combination of imperialist competitive algorithm (ICA) and artificial neural network (ANN). For this purpose, the parameters of 113 blasting operations were accurately recorded and flyrock distances were measured for each operation. By applying the sensitivity analysis, maximum charge per delay and powder factor were determined as the most influential parameters on flyrock. In the light of this analysis, two new empirical predictors were developed to predict flyrock distance. For a comparison purpose, a predeveloped backpropagation (BP) ANN was developed and the results were compared with those of the proposed ICA-ANN model and empirical predictors. The results clearly showed the superiority of the proposed ICA-ANN model in comparison with the proposed BP-ANN model and empirical approaches. PMID:25147856
GPP in Loblolly Pine: A Monthly Comparison of Empirical and Process Models
Christopher Gough; John Seiler; Kurt Johnsen; David Arthur Sampson
2002-01-01
Monthly and yearly gross primary productivity (GPP) estimates derived from an empirical model and two process-based models (3PG and BIOMASS) were compared. Spatial and temporal variation in foliar photosynthesis was examined and used to develop GPP prediction models for fertilized nine-year-old loblolly pine (Pinus taeda) stands located in the North...
Evaluation of theoretical and empirical water vapor sorption isotherm models for soils
NASA Astrophysics Data System (ADS)
Arthur, Emmanuel; Tuller, Markus; Moldrup, Per; de Jonge, Lis W.
2016-01-01
The mathematical characterization of water vapor sorption isotherms of soils is crucial for modeling processes such as volatilization of pesticides and diffusive and convective water vapor transport. Although numerous physically based and empirical models were previously proposed to describe sorption isotherms of building materials, food, and other industrial products, knowledge about the applicability of these functions for soils is noticeably lacking. We present an evaluation of nine models for characterizing adsorption/desorption isotherms for a water activity range from 0.03 to 0.93 based on measured data of 207 soils with widely varying textures, organic carbon contents, and clay mineralogy. In addition, the potential applicability of the models for prediction of sorption isotherms from known clay content was investigated. While in general, all investigated models described measured adsorption and desorption isotherms reasonably well, distinct differences were observed between physical and empirical models and due to the different degrees of freedom of the model equations. There were also considerable differences in model performance for adsorption and desorption data. While regression analysis relating model parameters and clay content and subsequent model application for prediction of measured isotherms showed promise for the majority of investigated soils, for soils with distinct kaolinitic and smectitic clay mineralogy predicted isotherms did not closely match the measurements.
Predicting field weed emergence with empirical models and soft computing techniques
USDA-ARS?s Scientific Manuscript database
Seedling emergence is the most important phenological process that influences the success of weed species; therefore, predicting weed emergence timing plays a critical role in scheduling weed management measures. Important efforts have been made in the attempt to develop models to predict seedling e...
Prediction of Partition Coefficients of Organic Compounds between SPME/PDMS and Aqueous Solution
Chao, Keh-Ping; Lu, Yu-Ting; Yang, Hsiu-Wen
2014-01-01
Polydimethylsiloxane (PDMS) is commonly used as the coated polymer in the solid phase microextraction (SPME) technique. In this study, the partition coefficients of organic compounds between SPME/PDMS and the aqueous solution were compiled from the literature sources. The correlation analysis for partition coefficients was conducted to interpret the effect of their physicochemical properties and descriptors on the partitioning process. The PDMS-water partition coefficients were significantly correlated to the polarizability of organic compounds (r = 0.977, p < 0.05). An empirical model, consisting of the polarizability, the molecular connectivity index, and an indicator variable, was developed to appropriately predict the partition coefficients of 61 organic compounds for the training set. The predictive ability of the empirical model was demonstrated by using it on a test set of 26 chemicals not included in the training set. The empirical model, applying the straightforward calculated molecular descriptors, for estimating the PDMS-water partition coefficient will contribute to the practical applications of the SPME technique. PMID:24534804
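The model structure described, log K regressed on polarizability, a molecular connectivity index, and a 0/1 indicator, can be sketched as an ordinary least-squares fit; coefficients come out of the fit, and none are taken from the paper.

```python
# Three-descriptor empirical model for PDMS-water partition coefficients.
import numpy as np

def fit_partition_model(polarizability, mci, indicator, log_k):
    """All inputs are 1-D numpy arrays over the training compounds."""
    X = np.column_stack([polarizability, mci, indicator, np.ones(len(log_k))])
    coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)
    return coef  # (b_polarizability, b_mci, b_indicator, intercept)

def predict_log_k(coef, polarizability, mci, indicator):
    return (coef[0] * polarizability + coef[1] * mci
            + coef[2] * indicator + coef[3])
```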
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
NASA Astrophysics Data System (ADS)
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and which has reliable predictive power with small uncertainties for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple hundred keV. We show that regional cross sections are compactly correlated in medium and heavy mass nuclei with the two-neutron separation energy. These correlations are easily amenable to predicting unknown cross sections, often converting the usual extrapolations to more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property. The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections, extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
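The interpolation idea reduces to a regional regression of the capture cross section on S2n. A minimal sketch, assuming user-supplied data for nuclei of similar structure; the log-linear form is an illustrative choice.

```python
# Regional trend of capture cross section versus two-neutron separation
# energy, used to interpolate to unmeasured nuclei in the same region.
import numpy as np

def fit_regional_trend(s2n_mev, sigma_mb):
    slope, intercept = np.polyfit(s2n_mev, np.log10(sigma_mb), 1)
    return slope, intercept

def predict_sigma(s2n_mev, slope, intercept):
    return 10.0 ** (slope * np.asarray(s2n_mev) + intercept)
```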
The growth of business firms: theoretical framework and empirical evidence.
Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S V; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H Eugene
2005-12-27
We introduce a model of proportional growth to explain the distribution P_g(g) of business-firm growth rates. The model predicts that P_g(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.
The Factor Content of Bilateral Trade: An Empirical Test.
ERIC Educational Resources Information Center
Choi, Yong-Seok; Krishna, Pravin
2004-01-01
The factor proportions model of international trade is one of the most influential theories in international economics. Its central standing in this field has appropriately prompted, particularly recently, intense empirical scrutiny. A substantial and growing body of empirical work has tested the predictions of the theory on the net factor content…
Measurements and empirical model of the acoustic properties of reticulated vitreous carbon.
Muehleisen, Ralph T; Beamer, C Walter; Tinianov, Brandon D
2005-02-01
Reticulated vitreous carbon (RVC) is a highly porous, rigid, open cell carbon foam structure with a high melting point, good chemical inertness, and low bulk thermal conductivity. For the proper design of acoustic devices such as acoustic absorbers and thermoacoustic stacks and regenerators utilizing RVC, the acoustic properties of RVC must be known. From knowledge of the complex characteristic impedance and wave number, most other acoustic properties can be computed. In this investigation, the four-microphone transfer matrix measurement method is used to measure the complex characteristic impedance and wave number for 60 to 300 pore-per-inch RVC foams with flow resistivities from 1759 to 10,782 Pa s m⁻² in the frequency range of 330 Hz-2 kHz. The data are found to be poorly predicted by the fibrous material empirical model developed by Delany and Bazley, the open cell plastic foam empirical model developed by Qunli, or the Johnson-Allard microstructural model. A new empirical power law model is developed and is shown to provide good predictions of the acoustic properties over the frequency range of measurement. Uncertainty estimates for the constants of the model are also computed.
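As a template for such a power-law model, the classic Delany-Bazley form is sketched below with its standard fibrous-material constants; the paper fits new constants for RVC, which are not reproduced here.

```python
# Delany-Bazley power-law model: characteristic impedance Zc and wave
# number k as power laws in X = rho0 * f / sigma (sigma: flow resistivity).
import numpy as np

RHO0, C0 = 1.21, 343.0  # air density (kg/m3) and sound speed (m/s)

def delany_bazley(f_hz, sigma):
    X = RHO0 * f_hz / sigma
    zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
    k = (2 * np.pi * f_hz / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
    return zc, k

zc, k = delany_bazley(f_hz=1000.0, sigma=10782.0)
```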
Reuning, Gretchen A; Bauerle, William L; Mullen, Jack L; McKay, John K
2015-04-01
Transpiration is controlled by evaporative demand and stomatal conductance (gs), and there can be substantial genetic variation in gs. A key parameter in empirical models of transpiration is minimum stomatal conductance (g0), a trait that can be measured and has a large effect on gs and transpiration. In Arabidopsis thaliana, g0 exhibits both environmental and genetic variation, and quantitative trait loci (QTL) have been mapped. We used this information to create a genetically parameterized empirical model to predict transpiration of genotypes. For the parental lines, this worked well. However, in a recombinant inbred population, the predictions proved less accurate. When based only upon their genotype at a single g0 QTL, genotypes were less distinct than our model predicted. Follow-up experiments indicated that both genotype by environment interaction and a polygenic inheritance complicate the application of genetic effects into physiological models. The use of ecophysiological or 'crop' models for predicting transpiration of novel genetic lines will benefit from incorporating further knowledge of the genetic control and degree of independence of core traits/parameters underlying gs variation. © 2014 John Wiley & Sons Ltd.
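A sketch of where g0 enters an empirical conductance model, using the Ball-Berry form gs = g0 + g1·A·hs/Cs as the common template; the genotype-specific g0 values are placeholders, not the mapped QTL effects.

```python
# Ball-Berry stomatal conductance with a genotype-dependent minimum g0.
def stomatal_conductance(A_umol, hs_frac, cs_umol_mol, g0=0.01, g1=9.0):
    """gs (mol m-2 s-1) from assimilation A, relative humidity hs, CO2 Cs."""
    return g0 + g1 * A_umol * hs_frac / cs_umol_mol

g0_by_genotype = {"parent_1": 0.008, "parent_2": 0.015}  # placeholder values
gs = stomatal_conductance(12.0, 0.65, 400.0, g0=g0_by_genotype["parent_1"])
```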
Draft user's guide for UDOT mechanistic-empirical pavement design.
DOT National Transportation Integrated Search
2009-10-01
Validation of the new AASHTO Mechanistic-Empirical Pavement Design Guide's (MEPDG) nationally calibrated pavement distress and smoothness prediction models when applied under Utah conditions, and local calibration of the new hot-mix asphalt (HMA) p...
Mishra, U.; Jastrow, J.D.; Matamala, R.; Hugelius, G.; Koven, C.D.; Harden, Jennifer W.; Ping, S.L.; Michaelson, G.J.; Fan, Z.; Miller, R.M.; McGuire, A.D.; Tarnocai, C.; Kuhry, P.; Riley, W.J.; Schaefer, K.; Schuur, E.A.G.; Jorgenson, M.T.; Hinzman, L.D.
2013-01-01
The vast amount of organic carbon (OC) stored in soils of the northern circumpolar permafrost region is a potentially vulnerable component of the global carbon cycle. However, estimates of the quantity, decomposability, and combustibility of OC contained in permafrost-region soils remain highly uncertain, thereby limiting our ability to predict the release of greenhouse gases due to permafrost thawing. Substantial differences exist between empirical and modeling estimates of the quantity and distribution of permafrost-region soil OC, which contribute to large uncertainties in predictions of carbon–climate feedbacks under future warming. Here, we identify research challenges that constrain current assessments of the distribution and potential decomposability of soil OC stocks in the northern permafrost region and suggest priorities for future empirical and modeling studies to address these challenges.
An empirical propellant response function for combustion stability predictions
NASA Technical Reports Server (NTRS)
Hessler, R. O.
1980-01-01
An empirical response function model was developed for ammonium perchlorate propellants to supplant T-burner testing at the preliminary design stage. The model was developed by fitting a limited T-burner data base, in terms of oxidizer size and concentration, to an analytical two parameter response function expression. Multiple peaks are predicted, but the primary effect is of a single peak for most formulations, with notable bulges for the various AP size fractions. The model was extended to velocity coupling with the assumption that dynamic response was controlled primarily by the solid phase described by the two parameter model. The magnitude of velocity coupling was then scaled using an erosive burning law. Routine use of the model for stability predictions on a number of propulsion units indicates that the model tends to overpredict propellant response. It is concluded that the model represents a generally conservative prediction tool, suited especially for the preliminary design stage when T-burner data may not be readily available. The model work included development of a rigorous summation technique for pseudopropellant properties and of a concept for modeling ordered packing of particulates.
NASA Astrophysics Data System (ADS)
Pulkkinen, A.
2012-12-01
Empirical modeling has been the workhorse of the past decades in predicting the state of the geospace. For example, numerous empirical studies have shown that global geoeffectiveness indices such as Kp and Dst are generally well predictable from the solar wind input. These successes have been facilitated partly by the strongly externally driven nature of the system. Although characterizing the general state of the system is valuable and empirical modeling will continue to play an important role, refined physics-based quantification of the state of the system has been the obvious next step in moving toward more mature science. Importantly, more refined and localized products are needed also for space weather purposes: predictions of local physical quantities are necessary to make physics-based links to the impacts on specific systems. As more localized predictions of the geospace state are introduced, one central question is how predictable these local quantities are. This complex question can be addressed by rigorously measuring model performance against observed data. The space sciences community has made great advances on this topic over the past few years, and there are ongoing efforts in SHINE, CEDAR and GEM to carry out community-wide evaluations of the state-of-the-art solar and heliospheric, ionosphere-thermosphere and geospace models, respectively. These efforts will help establish benchmarks and thus provide means to measure progress in the field, analogous to the monitoring of improvement in lower atmospheric weather predictions carried out rigorously since the 1980s. In this paper we discuss some of the latest advancements in predicting local geospace parameters and give an overview of community efforts to rigorously measure model performance. We also briefly discuss future opportunities for advancing geospace modeling capability, including further development in data assimilation and ensemble modeling (e.g. taking into account uncertainty in the inflow boundary conditions).
Induced Innovation and Social Inequality: Evidence from Infant Medical Care.
Cutler, David M; Meara, Ellen; Richards-Shubik, Seth
2012-01-01
We develop a model of induced innovation that applies to medical research. Our model yields three empirical predictions. First, initial death rates and subsequent research effort should be positively correlated. Second, research effort should be associated with more rapid mortality declines. Third, as a byproduct of targeting the most common conditions in the population as a whole, induced innovation leads to growth in mortality disparities between minority and majority groups. Using information on infant deaths in the U.S. between 1983 and 1998, we find support for all three empirical predictions.
Extended-Range Prediction with Low-Dimensional, Stochastic-Dynamic Models: A Data-driven Approach
2012-09-30
…characterization of extratropical storms and extremes and link these to LFV modes (Mingfang Ting, Yochanan Kushnir, Andrew W. Robertson). …simulating and predicting a wide range of climate phenomena including ENSO, tropical Atlantic sea surface temperatures (SSTs), storm track variability… into empirical prediction models. Use observations to improve low-order dynamical MJO models (Adam Sobel, Daehyun Kim). Extratropical variability…
Monte, Luigi
2014-08-01
This work presents and discusses the results of an application of the contaminant migration models implemented in the decision support system MOIRA-PLUS to simulate the time behaviour of the concentrations of ¹³⁷Cs of Chernobyl origin in water and fish of the Baltic Sea. The results of the models were compared with extensive sets of highly reliable empirical data on radionuclide contamination available from international databases and covering a period of approximately twenty years. The model application involved three main phases: a) customisation performed using hydrological, morphometric and water circulation data obtained from the literature; b) a blind test of the model results, in the sense that the models made use of default values of the migration parameters to predict the dynamics of the contaminant in the environmental components; and c) adjustment of the model parameter values to improve the agreement of the predictions with the empirical data. The results of the blind test showed that the models successfully predicted the empirical contamination values within the expected range of uncertainty of the predictions (a factor of approximately 2 at the 68% confidence level). The parameter adjustment can be helpful for assessing the fluxes of water circulating among the main sub-basins of the Baltic Sea, substantiating the usefulness of radionuclides for tracing the movement of water masses in seas.
Some Empirical Evidence for Latent Trait Model Selection.
ERIC Educational Resources Information Center
Hutten, Leah R.
The results of this study suggest that for purposes of estimating ability by latent trait methods, the Rasch model compares favorably with the three-parameter logistic model. Using estimated parameters to make predictions about 25 actual number-correct score distributions with samples of 1,000 cases each, those predicted by the Rasch model fit the…
Foundations for computer simulation of a low pressure oil flooded single screw air compressor
NASA Astrophysics Data System (ADS)
Bein, T. W.
1981-12-01
The necessary logic to construct a computer model to predict the performance of an oil-flooded, single screw air compressor is developed. The geometric variables and relationships used to describe the general single screw mechanism are developed, and the governing equations describing the processes are derived from their primary relationships. The assumptions used in the development are also defined and justified. The computer model predicts the internal pressure, temperature, and flowrates through the leakage paths throughout the compression cycle of the single screw compressor, using empirical external values as the basis for the internal predictions. The computed values are compared to the empirical values, and conclusions are drawn based on the results. Recommendations are made for future efforts to improve the computer model and to verify some of the conclusions that are drawn.
Base drag prediction on missile configurations
NASA Technical Reports Server (NTRS)
Moore, F. G.; Hymer, T.; Wilcox, F.
1993-01-01
New wind tunnel data have been taken, and a new empirical model has been developed for predicting base drag on missile configurations. The new wind tunnel data were taken at NASA-Langley in the Unitary Wind Tunnel at Mach numbers from 2.0 to 4.5, angles of attack to 16 deg, fin control deflections up to 20 deg, fin thickness/chord of 0.05 to 0.15, and fin locations from 'flush with the base' to two chord-lengths upstream of the base. The empirical model uses these data along with previous wind tunnel data, estimating base drag as a function of all these variables as well as boat-tail and power-on/power-off effects. The new model yields improved accuracy, compared to wind tunnel data. The new model also is more robust due to inclusion of additional variables. On the other hand, additional wind tunnel data are needed to validate or modify the current empirical model in areas where data are not available.
Predicting overload-affected fatigue crack growth in steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skorupa, M.; Skorupa, A.; Ladecki, B.
1996-12-01
The ability of semi-empirical crack closure models to predict the effect of overloads on fatigue crack growth in low-alloy steels has been investigated. For this purpose, the CORPUS model, developed for aircraft metals and load spectra, was first checked through comparisons between simulated and observed results for a low-alloy steel. The CORPUS predictions of crack growth under several types of simple load histories containing overloads appeared generally unconservative, which prompted the authors to formulate a new model more suitable for steels. In the latter approach, the assumed evolution of the crack opening stress during the delayed retardation stage has been based on experimental results reported for various steels. For all the load sequences considered, the predictions from the proposed model appeared to be far more accurate than those from CORPUS. Based on the analysis results, the capability of semi-empirical prediction concepts to cover experimentally observed trends reported for sequences with overloads is discussed. Finally, possibilities for improving the model performance are considered.
Assessment of Prevalence of Persons with Down Syndrome: A Theory-Based Demographic Model
ERIC Educational Resources Information Center
de Graaf, Gert; Vis, Jeroen C.; Haveman, Meindert; van Hove, Geert; de Graaf, Erik A. B.; Tijssen, Jan G. P.; Mulder, Barbara J. M.
2011-01-01
Background: The Netherlands are lacking reliable empirical data in relation to the development of birth and population prevalence of Down syndrome. For the UK and Ireland there are more historical empirical data available. A theory-based model is developed for predicting Down syndrome prevalence in the Netherlands from the 1950s onwards. It is…
NASA Technical Reports Server (NTRS)
Sojka, J. J.; Schunk, R. W.; Hoegy, W. R.; Grebowsky, J. M.
1991-01-01
The polar ionospheric F-region often exhibits regions of marked density depletion. These depletions have been observed by a variety of polar orbiting ionospheric satellites over a full range of solar cycle, season, magnetic activity, and universal time (UT). An empirical model of these observations has recently been developed to describe the polar depletion dependence on these parameters. Specifically, the dependence has been defined as a function of F10.7 (solar), summer or winter, Kp (magnetic), and UT. Polar cap depletions have also been predicted /1, 2/ and are, hence, present in physical models of the high latitude ionosphere. Using the Utah State University Time Dependent Ionospheric Model (TDIM) the predicted polar depletion characteristics are compared with those described by the above empirical model. In addition, the TDIM is used to predict the IMF By dependence of the polar hole feature.
A generalized preferential attachment model for business firms growth rates. I. Empirical evidence
NASA Astrophysics Data System (ADS)
Pammolli, F.; Fu, D.; Buldyrev, S. V.; Riccaboni, M.; Matia, K.; Yamasaki, K.; Stanley, H. E.
2007-05-01
We introduce a model of proportional growth to explain the distribution P(g) of business firm growth rates. The model predicts that P(g) is Laplace in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. We test the model at different levels of aggregation in the economy, from products, to firms, to countries, and we find that the predictions are in good agreement with empirical evidence on both growth distributions and size-variance relationships.
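A minimal simulation sketch of the proportional-growth idea, with geometrically distributed unit counts and lognormal shocks standing in for the paper's preferential-attachment machinery; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def firm_growth_rates(n_firms=20000, mean_units=5, sigma=0.2):
    """Toy proportional-growth model: each firm is a sum of units, and every
    unit receives an independent multiplicative (Gibrat) shock. Firms with
    few units dominate the heavy tails; many-unit firms populate the
    Laplace-like body. Parameters are illustrative, not from the paper."""
    g = np.empty(n_firms)
    for i in range(n_firms):
        k = rng.geometric(1.0 / mean_units)              # units in firm i
        units = rng.lognormal(mean=0.0, sigma=1.0, size=k)
        shocks = rng.lognormal(mean=0.0, sigma=sigma, size=k)
        g[i] = np.log(np.sum(units * shocks) / np.sum(units))
    return g

g = firm_growth_rates()
print(np.percentile(g, [1, 50, 99]))   # narrow body, fat tails
```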
From the Cover: The growth of business firms: Theoretical framework and empirical evidence
NASA Astrophysics Data System (ADS)
Fu, Dongfeng; Pammolli, Fabio; Buldyrev, S. V.; Riccaboni, Massimo; Matia, Kaushik; Yamasaki, Kazuko; Stanley, H. Eugene
2005-12-01
We introduce a model of proportional growth to explain the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part and exhibits asymptotic power-law behavior in the tails with an exponent ζ = 3. Because of data limitations, previous studies in this field have focused exclusively on the Laplace shape of the body of the distribution. In this article, we test the model at different levels of aggregation in the economy, from products to firms to countries, and we find that the predictions of the model agree with empirical growth distributions and size-variance relationships.
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low- to moderate-complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
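A sketch of the space-for-time setup under stated assumptions: synthetic SWE data in place of the SNOTEL records, polynomial regressions of increasing complexity, and a non-random (grouped) cross-validation to probe transferability:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(1)
n = 497
tmean = rng.uniform(-10, 5, n)           # mean winter temperature (deg C)
pwin = rng.uniform(100, 2500, n)         # cumulative winter precipitation (mm)
swe = np.maximum(0.0, 0.7 * pwin / (1 + np.exp(0.8 * tmean))
                 + rng.normal(0, 50, n))  # synthetic April 1 SWE (mm)
X = np.column_stack([tmean, pwin])
region = (tmean < -3).astype(int)        # crude spatial grouping for non-random CV

for degree in (1, 2, 5):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    score = cross_val_score(model, X, swe, groups=region,
                            cv=GroupKFold(n_splits=2)).mean()
    print(degree, round(score, 3))       # higher complexity often transfers worse
```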
NASA Technical Reports Server (NTRS)
Campbell, J. W. (Editor)
1981-01-01
The detection of anthropogenic disturbances in the Earth's ozone layer was studied. Two topics were addressed: (1) the level at which a trend in total ozone is detected by existing data sources; and (2) empirical evidence for predicting depletion in total ozone. Error sources are identified. The predictability of climatological series, whether empirical models can be trusted, and how errors in the Dobson total ozone data impact trend detectability are discussed.
A study of fault prediction and reliability assessment in the SEL environment
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Patnaik, Debabrata
1986-01-01
An empirical study on the estimation and prediction of faults, prediction of fault detection and correction effort, and reliability assessment in the Software Engineering Laboratory (SEL) environment is presented. Fault estimation using empirical relationships and fault prediction using a curve-fitting method are investigated. Relationships between debugging efforts (fault detection and correction effort) in different test phases are provided in order to make an early estimate of future debugging effort. The study concludes with fault analysis, application of a reliability model, and analysis of a normalized metric for reliability assessment and reliability monitoring during software development.
Modified empirical Solar Radiation Pressure model for IRNSS constellation
NASA Astrophysics Data System (ADS)
Rajaiah, K.; Manamohan, K.; Nirmala, S.; Ratnakara, S. C.
2017-11-01
Navigation with Indian Constellation (NAVIC), also known as the Indian Regional Navigation Satellite System (IRNSS), is India's regional navigation system designed to provide position accuracy better than 20 m over India and the region extending 1500 km around India. Reduced-dynamic precise orbit estimation is used to determine the broadcast orbit parameters for the IRNSS constellation. The estimation is mainly affected by the parameterization of the dynamic models, especially the Solar Radiation Pressure (SRP) model, which represents a non-gravitational force that depends on the shape and attitude dynamics of the spacecraft. An empirical nine-parameter SRP model is developed for the IRNSS constellation using two-way range measurements from the IRNSS C-band ranging system. The paper addresses the development of this modified empirical SRP model for IRNSS (IRNSS SRP Empirical Model, ISEM). The performance of ISEM was assessed based on overlap consistency, long-term prediction, and Satellite Laser Ranging (SLR) residuals, and compared with the ECOM9, ECOM5, and new-ECOM9 models developed by the Center for Orbit Determination in Europe (CODE). For IRNSS Geostationary Earth Orbit (GEO) and Inclined Geosynchronous Orbit (IGSO) satellites, ISEM has shown promising results, with overlap RMS errors better than 5.3 m and 3.5 m, respectively. Long-term orbit prediction using numerical integration improved by more than 80%, 26%, and 7.8% in comparison to ECOM9, ECOM5, and new-ECOM9, respectively. Further, SLR-based orbit determination with ISEM shows 70%, 47%, and 39% improvement in 10-day orbit prediction in comparison to ECOM9, ECOM5, and new-ECOM9, respectively, and also highlights the importance of a wide-baseline tracking network.
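The abstract does not give the ISEM parameterization itself; the ECOM-style nine-parameter form it is compared against can be sketched as follows (generic, not the ISEM or CODE implementation):

```python
import numpy as np

def ecom9_accel(u, p):
    """Generic ECOM-style 9-parameter SRP acceleration in the Sun-oriented
    (D, Y, B) frame: a constant plus once-per-revolution harmonics on each
    axis. u is the satellite's argument of latitude relative to the Sun
    (rad); p holds the nine estimated coefficients."""
    D0, Dc, Ds, Y0, Yc, Ys, B0, Bc, Bs = p
    aD = D0 + Dc * np.cos(u) + Ds * np.sin(u)
    aY = Y0 + Yc * np.cos(u) + Ys * np.sin(u)
    aB = B0 + Bc * np.cos(u) + Bs * np.sin(u)
    return np.array([aD, aY, aB])   # still to be rotated into the inertial frame

# Illustrative coefficients (m/s^2), dominated by the direct-to-Sun term D0:
p = [1e-7, 1e-9, 1e-9, 0.0, 1e-10, 1e-10, 5e-9, 1e-9, 1e-9]
print(ecom9_accel(np.pi / 4, p))
```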
NASA Astrophysics Data System (ADS)
Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter; Edwards, Benjamin
2016-04-01
The current practice of deriving empirical ground motion prediction equations (GMPEs) involves using ground motions recorded at multiple sites. However, in applications such as site-specific hazard analysis (e.g., for critical facilities), ground motions obtained from GMPEs need to be adjusted/corrected to the particular site condition under investigation. This study presents a complete framework for developing a response spectral GMPE, within which the issue of adjustment of ground motions is addressed in a manner consistent with the linear system framework. The present approach is a two-step process in which the first step consists of deriving two separate empirical models, one for Fourier amplitude spectra (FAS) and the other for a random vibration theory (RVT)-optimized duration (Drvt) of ground motion. In the second step, the two models are combined within the RVT framework to obtain full response spectral amplitudes. Additionally, the framework involves a stochastic-model-based extrapolation of individual Fourier spectra to extend the usable frequency limit of the empirically derived FAS model. The stochastic model parameters were determined by inverting the Fourier spectral data using an approach similar to the one described in Edwards and Faeh (2013). Comparison of median predicted response spectra from the present approach with those from other regional GMPEs indicates that the present approach can also be used as a stand-alone model. The dataset used for the presented analysis is a subset of the recently compiled database RESORCE-2012 across Europe, the Middle East and the Mediterranean region.
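A much-simplified sketch of the RVT step that turns a Fourier amplitude spectrum and a duration into a peak motion; production implementations use the Cartwright-Longuet-Higgins peak factor, whereas this uses a crude sqrt(2 ln N) approximation, and the toy spectrum below is illustrative:

```python
import numpy as np

def rvt_peak(freq, fas, duration):
    """Simplified RVT estimate of a peak motion from a Fourier amplitude
    spectrum: Parseval's theorem gives the rms motion over the duration,
    and a crude sqrt(2 ln N) peak factor converts rms to an expected peak."""
    m0 = 2.0 * np.trapz(fas**2, freq)                     # zeroth spectral moment
    m2 = 2.0 * np.trapz((2 * np.pi * freq)**2 * fas**2, freq)
    rms = np.sqrt(m0 / duration)
    f_zero = np.sqrt(m2 / m0) / (2 * np.pi)               # zero-crossing rate proxy
    n_peaks = max(2.0 * f_zero * duration, 2.0)
    return np.sqrt(2.0 * np.log(n_peaks)) * rms

freq = np.linspace(0.1, 50, 500)                          # Hz
fas = 0.01 * freq / (1 + (freq / 8.0)**2)                 # toy spectrum shape
print(rvt_peak(freq, fas, duration=10.0))
```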
Applicability of empirical data currently used in predicting solid propellant exhaust plumes
NASA Technical Reports Server (NTRS)
Tevepaugh, J. A.; Smith, S. D.; Penny, M. M.; Greenwood, T.; Roberts, B. B.
1977-01-01
Theoretical and experimental approaches to exhaust plume analysis are compared. A two-phase model is extended to include treatment of reacting gas chemistry, and thermodynamical modeling of the gaseous phase of the flow field is considered. The applicability of empirical data currently available to define particle drag coefficients, heat transfer coefficients, mean particle size, and particle size distributions is investigated. Experimental and analytical comparisons are presented for subscale solid rocket motors operating at three altitudes with attention to pitot total pressure and stagnation point heating rate measurements. The mathematical treatment input requirements are explained. The two-phase flow field solution adequately predicts gasdynamic properties in the inviscid portion of two-phase exhaust plumes. It is found that prediction of exhaust plume gas pressures requires an adequate model of flow field dynamics.
Validation of pavement performance curves for the mechanistic-empirical pavement design guide.
DOT National Transportation Integrated Search
2009-02-01
The objective of this research is to determine whether the nationally calibrated performance models used in the Mechanistic-Empirical : Pavement Design Guide (MEPDG) provide a reasonable prediction of actual field performance, and if the desired accu...
Wenchi Jin; Hong S. He; Frank R. Thompson
2016-01-01
Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...
Fateen, Seif-Eddeen K; Khalil, Menna M; Elnabawy, Ahmed O
2013-03-01
The Peng-Robinson equation of state is widely used with the classical van der Waals mixing rules to predict vapor-liquid equilibria for systems containing hydrocarbons and related compounds. This model requires good values of the binary interaction parameter kij. In this work, we developed a semi-empirical correlation for kij, partly based on the Huron-Vidal mixing rules. We obtained values for the adjustable parameters of the developed formula for over 60 binary systems and over 10 categories of components. The predictions of the new equation system were slightly better than those of the constant-kij model in most cases, except for 10 systems whose predictions were considerably improved with the new correlation.
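A minimal sketch of the classical van der Waals one-fluid mixing rule in which kij enters; the component values are placeholders, not the fitted constants of the proposed correlation:

```python
import numpy as np

def a_mix(x, a, kij):
    """Van der Waals one-fluid mixing rule for the Peng-Robinson 'a'
    parameter: a_mix = sum_i sum_j x_i x_j sqrt(a_i a_j) (1 - k_ij)."""
    x, a = np.asarray(x), np.asarray(a)
    root = np.sqrt(np.outer(a, a))          # sqrt(a_i * a_j) matrix
    return float(x @ ((1.0 - kij) * root) @ x)

# Illustrative binary system (placeholder values, not fitted constants):
x = [0.4, 0.6]                              # mole fractions
a = [2.5, 1.1]                              # pure-component attraction parameters
kij = np.array([[0.0, 0.05],
                [0.05, 0.0]])               # symmetric, zero diagonal
print(a_mix(x, a, kij))
```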
Variable Density Multilayer Insulation for Cryogenic Storage
NASA Technical Reports Server (NTRS)
Hedayat, A.; Brown, T. M.; Hastings, L. J.; Martin, J.
2000-01-01
Two analytical models for the performance of a foam/Variable Density Multilayer Insulation (VD-MLI) system are discussed. Both models are one-dimensional and contain three heat transfer mechanisms, namely conduction through the spacer material, radiation between the shields, and conduction through the gas. One model is based on the methodology developed by McIntosh, while the other is based on the Lockheed semi-empirical approach. All model input variables are based on the Multi-purpose Hydrogen Test Bed (MHTB) geometry and available values for material properties and the empirical solid conduction coefficient. Heat flux predictions are in good agreement with the MHTB data and are presented for foam/MLI combinations with 30, 45, 60, and 75 MLI layers.
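A hedged sketch of the three-mechanism layered-insulation heat flux described above; the conduction coefficients are illustrative placeholders, not the McIntosh or Lockheed calibration constants:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def mli_flux(t_hot, t_cold, n_shields, emissivity=0.03,
             c_solid=1e-4, c_gas=1e-5):
    """One-dimensional MLI heat-flux sketch with the three mechanisms the
    abstract names: shield-to-shield radiation, spacer (solid) conduction,
    and residual-gas conduction. c_solid and c_gas are placeholder
    coefficients, not fitted constants."""
    e_eff = emissivity / (2.0 - emissivity)                   # equal-emissivity shields
    q_rad = SIGMA * e_eff * (t_hot**4 - t_cold**4) / (n_shields + 1)
    q_solid = c_solid * (t_hot - t_cold) / (n_shields + 1)
    q_gas = c_gas * (t_hot - t_cold)
    return q_rad + q_solid + q_gas

for n in (30, 45, 60, 75):                  # layer counts from the abstract
    print(n, mli_flux(300.0, 20.0, n))      # warm boundary to LH2 temperature, W/m^2
```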
Prediction of Meiyu rainfall in Taiwan by multi-lead physical-empirical models
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen; Lu, Mong-Ming
2015-06-01
Taiwan is located at the dividing point of the tropical and subtropical monsoons over East Asia. Taiwan has double rainy seasons, the Meiyu in May-June and the Typhoon rains in August-September. Predicting the amount of Meiyu rainfall is of profound importance to disaster preparedness and water resource management. The seasonal forecast of May-June Meiyu rainfall has been a challenge to current dynamical models, and the factors controlling Taiwan Meiyu variability have eluded climate scientists for decades. Here we investigate the physical processes that are possibly important in driving significant fluctuations of the Taiwan Meiyu rainfall. Based on this understanding, we develop a physical-empirical model to predict Taiwan Meiyu rainfall at lead times of 0 (end of April), 1, and 2 months, respectively. Three physically consequential and complementary predictors are used: (1) a contrasting sea surface temperature (SST) tendency in the Indo-Pacific warm pool, (2) the tripolar SST tendency in the North Atlantic that is associated with the North Atlantic Oscillation, and (3) a surface warming tendency in northeast Asia. These precursors foreshadow enhanced Philippine Sea anticyclonic anomalies and an anomalous cyclone near southeastern China in the ensuing summer, which together favor increasing Taiwan Meiyu rainfall. Note that the identified precursors at the various lead times represent essentially the same physical processes, suggesting the robustness of the predictors. The physical-empirical model built from these predictors is capable of capturing the Taiwan rainfall variability, with a significant cross-validated temporal correlation coefficient skill of 0.75, 0.64, and 0.61 for 1979-2012 at the 0-, 1-, and 2-month lead times, respectively. The physical-empirical model concept used here can be extended to summer monsoon rainfall prediction over Southeast Asia and other regions.
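A sketch of the physical-empirical approach under stated assumptions: synthetic stand-ins for the three predictors, a multiple linear regression, and leave-one-out cross-validation of the kind used to compute the reported skill:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-ins for the three April predictors and May-June rainfall;
# the real predictor time series are not given in the abstract.
rng = np.random.default_rng(2)
years = 34                                   # 1979-2012
X = rng.normal(size=(years, 3))              # warm-pool SST, NA tripole, NE Asia
y = X @ np.array([0.6, 0.4, 0.3]) + rng.normal(0, 0.6, years)

yhat = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
tcc = np.corrcoef(y, yhat)[0, 1]             # cross-validated skill, cf. 0.75
print(round(tcc, 2))
```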
Theil, P K; Flummer, C; Hurley, W L; Kristensen, N B; Labouriau, R L; Sørensen, M T
2014-12-01
The aims of the present study were to quantify colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how composition of diets fed to gestating sows affected piglet CI, sow colostrum yield (CY), and colostrum composition. In total, 240 piglets from 40 litters were enriched with D2O. The CI measured by D2O from birth until 24 h after the birth of the first-born piglet was on average 443 g (SD 151). Based on the measured CI, a mechanistic model to predict CI was developed using piglet characteristics (24-h weight gain [WG; g], BW at birth [BWB; kg], and duration of CI [D; min]): CI (g) = -106 + 2.26 WG + 200 BWB + 0.111 D - 1,414 WG/D + 0.0182 WG/BWB (R² = 0.944). This model was used to predict the CI for all colostrum-suckling piglets within the 40 litters (n=500, mean=437 g, SD=153 g) and was compared with the CI predicted by a previous empirical predictive model (mean=305 g, SD=140 g). The previous empirical model underestimated the CI by 30% compared with that obtained by the new mechanistic model. The sows were fed 1 of 4 gestation diets (n=10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n=8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil + 4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations, and their piglets had greater CI as compared with sows fed potato pulp or the low-fiber diet (P<0.05), and sows fed pectin residue had a greater CY than potato pulp-fed sows (P<0.05). Prefarrowing diets affected neither CI nor CY, but the prefarrowing diet with coconut oil decreased lactose and increased DM concentrations of colostrum compared with other prefarrowing diets (P<0.05). In conclusion, the new mechanistic predictive model for CI suggests that the previous empirical predictive model underestimates CI of sow-reared piglets by 30%. It was also concluded that nutrition of sows during gestation affected CY and colostrum composition.
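The mechanistic CI model can be implemented directly from the equation given above; the example piglet values are illustrative:

```python
def colostrum_intake(wg, bwb, d):
    """Mechanistic colostrum-intake model from the abstract (R^2 = 0.944):
    wg = 24-h weight gain (g), bwb = body weight at birth (kg),
    d = duration of colostrum intake (min). Returns CI in grams."""
    return (-106 + 2.26 * wg + 200 * bwb + 0.111 * d
            - 1414 * wg / d + 0.0182 * wg / bwb)

# Example piglet: 100 g gain, 1.4 kg at birth, suckling over 1200 min.
print(round(colostrum_intake(100, 1.4, 1200), 1))  # -> roughly 417 g
```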
Mandija, Stefano; Sommer, Iris E. C.; van den Berg, Cornelis A. T.; Neggers, Sebastiaan F. W.
2017-01-01
Background: Despite the wide adoption of TMS, its spatial and temporal patterns of neuronal effects are not well understood. Although progress has been made in predicting induced currents in the brain using realistic finite element models (FEM), there is little consensus on how the magnetic field of a typical TMS coil should be modeled, and empirical validation of such models is limited and subject to several limitations. Methods: We evaluate and empirically validate models of a figure-of-eight TMS coil that are commonly used in published modeling studies, of increasing complexity: a simple circular coil model, a coil with in-plane spiral winding turns, and one with stacked spiral winding turns. We assess the electric fields induced by all three coil models in the motor cortex using a computational FEM model. Biot-Savart models of discretized wires were used to approximate the three coil models of increasing complexity. We use a tailored MR-based phase mapping technique to obtain a full 3D validation of the incident magnetic field induced in a cylindrical phantom by our TMS coil. FEM-based simulations on a meshed 3D brain model consisting of five tissue types were performed, using two orthogonal coil orientations. Results: Substantial differences in the induced currents are observed, both theoretically and empirically, between highly idealized coils and coils with correctly modeled spiral winding turns. The thickness of the coil winding turns affects the induced electric field only minimally and does not influence the predicted activation. Conclusion: TMS coil models used in FEM simulations should include the in-plane coil geometry in order to make reliable predictions of the incident field; modeling the in-plane coil geometry is important to correctly simulate the induced electric field and to make reliable predictions of neuronal activation.
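A minimal Biot-Savart sketch for a discretized wire, the building block of all three coil models; the coil radius, current, and field point below are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T m / A

def biot_savart(points, wire, current):
    """Magnetic field of a discretized wire via the Biot-Savart law:
    dB = mu0 I / (4 pi) * dl x r / |r|^3, summed over segments.
    wire: (N, 3) vertices along the winding; points: (M, 3) field points."""
    dl = np.diff(wire, axis=0)                       # segment vectors
    mid = 0.5 * (wire[:-1] + wire[1:])               # segment midpoints
    B = np.zeros_like(points, dtype=float)
    for seg, m in zip(dl, mid):
        r = points - m
        dist = np.linalg.norm(r, axis=1, keepdims=True)
        B += MU0 * current / (4 * np.pi) * np.cross(seg, r) / dist**3
    return B

# Single circular loop of radius 40 mm (the simplest of the three models):
theta = np.linspace(0, 2 * np.pi, 200)
loop = np.column_stack([0.04 * np.cos(theta), 0.04 * np.sin(theta),
                        np.zeros_like(theta)])
print(biot_savart(np.array([[0.0, 0.0, 0.02]]), loop, current=5e3))
```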
NASA Technical Reports Server (NTRS)
Schonberg, William P.; Mohamed, Essam
1997-01-01
This report presents the results of a study whose objective was to develop first-principles-based models of hole size and maximum tip-to-tip crack length for a spacecraft module pressure wall that has been perforated in an orbital debris particle impact. The hole size and crack length models are developed by sequentially characterizing the phenomena comprising the orbital debris impact event, including the initial impact, the creation and motion of a debris cloud within the dual-wall system, the impact of the debris cloud on the pressure wall, the deformation of the pressure wall due to debris cloud impact loading prior to crack formation, pressure wall crack initiation, propagation, and arrest, and finally pressure wall deformation following crack initiation and growth. The model development has been accomplished through the application of elementary shock physics and thermodynamic theory, as well as the principles of mass, momentum, and energy conservation. The predictions of the model developed herein are compared against the predictions of empirically-based equations for hole diameters and maximum tip-to-tip crack length for three International Space Station wall configurations. The ISS wall systems considered are the baseline U.S. Lab Cylinder, the enhanced U.S. Lab Cylinder, and the U.S. Lab Endcone. The empirical predictor equations were derived from experimentally obtained hole diameters and crack length data. The original model predictions did not compare favorably with the experimental data, especially for cases in which pressure wall petalling did not occur. Several modifications were made to the original model to bring its predictions closer in line with the experimental results. Following the adjustment of several empirical constants, the predictions of the modified analytical model were in much closer agreement with the experimental results.
Prakash Nepal; Peter J. Ince; Kenneth E. Skog; Sun J. Chang
2012-01-01
This paper describes a set of empirical net forest growth models based on forest growing-stock density relationships for three U.S. regions (North, South, and West) and two species groups (softwoods and hardwoods) at the regional aggregate level. The growth models accurately predict historical U.S. timber inventory trends when we incorporate historical timber harvests...
Probabilistic empirical prediction of seasonal climate: evaluation and potential applications
NASA Astrophysics Data System (ADS)
Dieppois, B.; Eden, J.; van Oldenborgh, G. J.
2017-12-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a new evaluation of an established empirical system used to predict seasonal climate across the globe. Forecasts for surface air temperature, precipitation and sea level pressure are produced by the KNMI Probabilistic Empirical Prediction (K-PREP) system every month and disseminated via the KNMI Climate Explorer (climexp.knmi.nl). K-PREP is based on multiple linear regression and built on physical principles to the fullest extent with predictive information taken from the global CO2-equivalent concentration, large-scale modes of variability in the climate system and regional-scale information. K-PREP seasonal forecasts for the period 1981-2016 will be compared with corresponding dynamically generated forecasts produced by operational forecast systems. While there are many regions of the world where empirical forecast skill is extremely limited, several areas are identified where K-PREP offers comparable skill to dynamical systems. We discuss two key points in the future development and application of the K-PREP system: (a) the potential for K-PREP to provide a more useful basis for reference forecasts than those based on persistence or climatology, and (b) the added value of including K-PREP forecast information in multi-model forecast products, at least for known regions of good skill. We also discuss the potential development of stakeholder-driven applications of the K-PREP system, including empirical forecasts for circumboreal fire activity.
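A hedged sketch of a K-PREP-style forecast: multiple linear regression on a CO2-equivalent trend plus large-scale mode indices, trained on past years and applied out of sample. All series below are synthetic stand-ins, not the K-PREP predictors themselves:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n_years = 36                                    # 1981-2016
co2eq = np.linspace(0.0, 1.0, n_years)          # smoothed forcing proxy
nino34 = rng.normal(size=n_years)               # ENSO index stand-in
amo = rng.normal(size=n_years)                  # Atlantic mode stand-in
X = np.column_stack([co2eq, nino34, amo])
y = 0.8 * co2eq + 0.5 * nino34 + rng.normal(0, 0.4, n_years)  # seasonal anomaly

fit = LinearRegression().fit(X[:-1], y[:-1])    # train on all but the target year
print(fit.predict(X[-1:]))                      # out-of-sample seasonal forecast
```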
Integrating Empirical-Modeling Approaches to Improve Understanding of Terrestrial Ecology Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCarthy, Heather; Luo, Yiqi; Wullschleger, Stan D
Recent decades have seen tremendous increases in the quantity of empirical ecological data collected by individual investigators, as well as through research networks such as FLUXNET (Baldocchi et al., 2001). At the same time, advances in computer technology have facilitated the development and implementation of large and complex land surface and ecological process models. Separately, each of these information streams provides useful, but imperfect, information about ecosystems. To develop the best scientific understanding of ecological processes, and most accurately predict how ecosystems may cope with global change, integration of empirical and modeling approaches is necessary. However, true integration - in which models inform empirical research, which in turn informs models (Fig. 1) - is not yet common in ecological research (Luo et al., 2011). The goal of this workshop, sponsored by the Department of Energy, Office of Science, Biological and Environmental Research (BER) program, was to bring together members of the empirical and modeling communities to exchange ideas and discuss scientific practices for increasing empirical-model integration, and to explore infrastructure and/or virtual network needs for institutionalizing empirical-model integration (Yiqi Luo, University of Oklahoma, Norman, OK, USA). The workshop included presentations and small group discussions that covered topics ranging from model-assisted experimental design to data-driven modeling (e.g. benchmarking and data assimilation) to infrastructure needs for empirical-model integration. Ultimately, three central questions emerged. How can models be used to inform experiments and observations? How can experimental and observational results be used to inform models? What are effective strategies to promote empirical-model integration?
Eric J. Gustafson
2013-01-01
Researchers and natural resource managers need predictions of how multiple global changes (e.g., climate change, rising levels of air pollutants, exotic invasions) will affect landscape composition and ecosystem function. Ecological predictive models used for this purpose are constructed using either a mechanistic (process-based) or a phenomenological (empirical)...
An Empirical Approach to Predicting Effects of Climate Change on Stream Water Chemistry
NASA Astrophysics Data System (ADS)
Olson, J. R.; Hawkins, C. P.
2014-12-01
Climate change may affect stream solute concentrations by three mechanisms: dilution associated with increased precipitation, evaporative concentration associated with increased temperature, and changes in solute inputs associated with changes in climate-driven weathering. We developed empirical models predicting base-flow water chemistry from watershed geology, soils, and climate for 1975 individual stream sites across the conterminous USA. We then predicted future solute concentrations (2065 and 2099) by applying down-scaled global climate model predictions to these models. The electrical conductivity (EC) model (R² = 0.78) predicted mean increases in EC of 19 μS/cm by 2065 and 40 μS/cm by 2099. However, predicted responses for individual streams ranged from a 43% decrease to a 4x increase. Streams with the greatest predicted decreases occurred in the southern Rocky Mountains and Mid-West, whereas southern California and Sierra Nevada streams showed the greatest increases. Generally, streams in dry areas underlain by non-calcareous rocks were predicted to be the most vulnerable to increases in EC associated with climate change. Predicted changes in other water chemistry parameters (e.g., Acid Neutralization Capacity (ANC), SO4, and Ca) were similar to EC, although the magnitude of ANC and SO4 change was greater. Predicted changes in ANC and SO4 are in general agreement with changes already observed in seven locations with long-term records.
Integrating WEPP into the WEPS infrastructure
USDA-ARS?s Scientific Manuscript database
The Wind Erosion Prediction System (WEPS) and the Water Erosion Prediction Project (WEPP) share a common modeling philosophy, that of moving away from primarily empirically based models based on indices or "average conditions", and toward a more process based approach which can be evaluated using ac...
Tedeschi, L O; Seo, S; Fox, D G; Ruiz, R
2006-12-01
Current ration formulation systems used to formulate diets on farms and to evaluate experimental data estimate metabolizable energy (ME)-allowable and metabolizable protein (MP)-allowable milk production from the intake above animal requirements for maintenance, pregnancy, and growth. The changes in body reserves, measured via the body condition score (BCS), are not accounted for in predicting ME and MP balances. This paper presents 2 empirical models developed to adjust predicted diet-allowable milk production based on changes in BCS. Empirical reserves model 1 was based on the reserves model described by the 2001 National Research Council (NRC) Nutrient Requirements of Dairy Cattle, whereas empirical reserves model 2 was developed based on published data of body weight and composition changes in lactating dairy cows. A database containing 134 individually fed lactating dairy cows from 3 trials was used to evaluate these adjustments in milk prediction based on predicted first-limiting ME or MP by the 2001 Dairy NRC and Cornell Net Carbohydrate and Protein System models. The analysis of first-limiting ME or MP milk production without adjustments for BCS changes indicated that the predictions of both models were consistent (r² of the regression between observed and model-predicted values of 0.90 and 0.85), had mean biases different from zero (12.3 and 5.34%), and had moderate but different root mean square errors of prediction (5.42 and 4.77 kg/d) for the 2001 NRC model and the Cornell Net Carbohydrate and Protein System model, respectively. The adjustment of first-limiting ME- or MP-allowable milk to BCS changes improved the precision and accuracy of both models. We further investigated 2 methods of adjustment; the first method used only the first and last BCS values, whereas the second used the mean of weekly BCS values to adjust ME- and MP-allowable milk production. The adjustment based on first and last BCS values was more accurate than the adjustment based on the mean of all BCS values, suggesting that adjusting milk production for mean weekly variations in BCS added more variability to model-predicted milk production. We concluded that both models adequately predicted the first-limiting ME- or MP-allowable milk after adjusting for changes in BCS.
Financial Time Series Prediction Using Elman Recurrent Random Neural Networks
Wang, Jie; Wang, Jun; Fang, Wen; Niu, Hongli
2016-01-01
In recent years, financial market dynamics forecasting has been a focus of economic research. To predict the price indices of stock markets, we developed an architecture which combines Elman recurrent neural networks with a stochastic time effective function. Analyzing the proposed model with linear regression, complexity invariant distance (CID), and multiscale CID (MCID) methods, and comparing it with models such as the backpropagation neural network (BPNN), the stochastic time effective neural network (STNN), and the Elman recurrent neural network (ERNN), the empirical results show that the proposed neural network displays the best performance among these networks in financial time series forecasting. Further, the empirical research tests the predictive effects on the SSE, TWSE, KOSPI, and Nikkei225 indices with the established model, and the corresponding statistical comparisons of these market indices are also exhibited. The experimental results show that this approach performs well in predicting values of stock market indices.
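A minimal numpy sketch of the Elman recurrence at the core of the architecture; training and the stochastic time-effective sample weighting used in the paper are omitted:

```python
import numpy as np

rng = np.random.default_rng(4)

class ElmanRNN:
    """Minimal Elman (simple recurrent) network forward pass:
    h_t = tanh(Wx x_t + Wh h_{t-1} + bh), y_t = Wy h_t + by.
    The context layer carries h_{t-1} between steps."""
    def __init__(self, n_in, n_hidden, n_out):
        s = 0.1
        self.Wx = rng.normal(0, s, (n_hidden, n_in))
        self.Wh = rng.normal(0, s, (n_hidden, n_hidden))
        self.Wy = rng.normal(0, s, (n_out, n_hidden))
        self.bh = np.zeros(n_hidden)
        self.by = np.zeros(n_out)

    def forward(self, xs):
        h = np.zeros(self.Wh.shape[0])
        out = []
        for x in xs:
            h = np.tanh(self.Wx @ x + self.Wh @ h + self.bh)
            out.append(self.Wy @ h + self.by)
        return np.array(out)

net = ElmanRNN(n_in=1, n_hidden=8, n_out=1)
window = np.sin(np.linspace(0, 3, 20)).reshape(-1, 1)  # stand-in price series
print(net.forward(window)[-1])                         # next-step prediction
```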
The Study of Rain Specific Attenuation for the Prediction of Satellite Propagation in Malaysia
NASA Astrophysics Data System (ADS)
Mandeep, J. S.; Ng, Y. Y.; Abdullah, H.; Abdullah, M.
2010-06-01
Specific attenuation is the fundamental quantity in the calculation of rain attenuation for terrestrial and slant paths, expressed as rain attenuation per unit distance (dB/km). It is an important element in developing a predictive rain attenuation model. This paper deals with the empirical determination of the power-law coefficients that allow the specific attenuation in dB/km to be calculated from knowledge of the rain rate in mm/h. The main purpose of the paper is to obtain the coefficients k and α of the power-law relationship between specific attenuation and rain rate. Three years (1st January 2006 until 31st December 2008) of rain gauge and beacon data taken at USM, Nibong Tebal, have been used for the empirical analysis of rain specific attenuation. The data presented are semi-empirical in nature. A year-to-year variation of the coefficients is indicated, and the empirically measured data were compared with the ITU-R regression coefficients. The results indicate that the USM measurements vary significantly from the ITU-R predicted values. Hence, the ITU-R recommendation for regression coefficients of rain specific attenuation is not suitable for predicting rain attenuation in Malaysia.
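The power-law coefficients of the gamma = k * R**alpha relation are conventionally obtained by linear regression in log-log space; a sketch with illustrative data, not the USM measurements:

```python
import numpy as np

def fit_k_alpha(rain_rate, specific_attenuation):
    """Fit the power law gamma = k * R**alpha by linear regression in
    log-log space: log(gamma) = log(k) + alpha * log(R)."""
    alpha, logk = np.polyfit(np.log(rain_rate), np.log(specific_attenuation), 1)
    return np.exp(logk), alpha

# Illustrative measurements (synthetic, with small scatter):
R = np.array([10.0, 30.0, 60.0, 90.0, 120.0])        # rain rate, mm/h
gamma = 0.05 * R**1.1 * np.exp(np.random.default_rng(5).normal(0, 0.05, 5))
k, a = fit_k_alpha(R, gamma)
print(round(k, 3), round(a, 3))                      # recovered coefficients
```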
ERIC Educational Resources Information Center
Fürst, Guillaume; Ghisletta, Paolo; Lubart, Todd
2016-01-01
The present work proposes an integrative model of creativity that includes personality traits and cognitive processes. This model hypothesizes that three high-order personality factors predict two main process factors, which in turn predict intensity and achievement of creative activities. The personality factors are: "Plasticity" (high…
Predicting landscape vegetation dynamics using state-and-transition simulation models
Colin J. Daniel; Leonardo Frid
2012-01-01
This paper outlines how state-and-transition simulation models (STSMs) can be used to project changes in vegetation over time across a landscape. STSMs are stochastic, empirical simulation models that use an adapted Markov chain approach to predict how vegetation will transition between states over time, typically in response to interactions between succession,...
Fuel consumption models for pine flatwoods fuel types in the southeastern United States
Clinton S. Wright
2013-01-01
Modeling fire effects, including terrestrial and atmospheric carbon fluxes and pollutant emissions during wildland fires, requires accurate predictions of fuel consumption. Empirical models were developed for predicting fuel consumption from fuel and environmental measurements on a series of operational prescribed fires in pine flatwoods ecosystems in the southeastern...
Empirical models for the prediction of ground motion duration for intraplate earthquakes
NASA Astrophysics Data System (ADS)
Anbazhagan, P.; Neaz Sheikh, M.; Bajaj, Ketan; Mariya Dayana, P. J.; Madhura, H.; Reddy, G. R.
2017-07-01
Many empirical relationships for earthquake ground motion duration have been developed for interplate regions, whereas only a very limited number exist for intraplate regions. Moreover, the existing relationships were developed based mostly on scaled recordings of interplate earthquakes used to represent intraplate earthquakes. To the authors' knowledge, none of the existing relationships for intraplate regions were developed using only data from intraplate regions. Therefore, an attempt is made in this study to develop empirical predictive relationships for earthquake ground motion duration (i.e., significant and bracketed) in terms of earthquake magnitude, hypocentral distance, and site conditions (i.e., rock and soil sites), using data compiled from intraplate regions of Canada, Australia, Peninsular India, and the central and southern parts of the USA. The compiled ground motion data consist of 600 records with moment magnitudes ranging from 3.0 to 6.5 and hypocentral distances ranging from 4 to 1000 km. Non-linear mixed-effects (NLME) and logistic regression techniques (to account for zero durations) were used to fit predictive models to the duration data. The bracketed duration was found to decrease with increasing hypocentral distance and to increase with increasing earthquake magnitude. The significant duration was found to increase with both the magnitude and hypocentral distance of the earthquake. Both significant and bracketed durations were predicted to be higher at rock sites than at soil sites. The predictive relationships developed herein are compared with existing relationships for interplate and intraplate regions. The developed relationship for bracketed duration predicts lower durations for rock and soil sites. However, the developed relationship for significant duration predicts lower durations up to a certain distance and higher durations thereafter, compared to the existing relationships.
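A sketch of fitting a duration model of this general kind. The functional form below (ln D regressed on magnitude, log distance, and a site flag) is an assumed one, not the paper's exact equation, and the data are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def duration_model(X, c1, c2, c3, c4):
    """Assumed functional form: ln(D) = c1 + c2*M + c3*ln(R) + c4*S,
    with S = 1 for soil sites and 0 for rock sites."""
    m, r, s = X
    return c1 + c2 * m + c3 * np.log(r) + c4 * s

rng = np.random.default_rng(6)
m = rng.uniform(3.0, 6.5, 300)                 # moment magnitude range, as compiled
r = rng.uniform(4.0, 1000.0, 300)              # hypocentral distance, km
s = rng.integers(0, 2, 300)                    # site class flag
lnD = 0.5 + 0.9 * m + 0.3 * np.log(r) + 0.2 * s + rng.normal(0, 0.4, 300)

coef, _ = curve_fit(duration_model, (m, r, s), lnD)
print(np.round(coef, 2))                       # recovered coefficients
```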
Sexton, Nicholas J; Cooper, Richard P
2017-05-01
Task inhibition (also known as backward inhibition) is a hypothesised form of cognitive inhibition evident in multi-task situations, with the role of facilitating switching between multiple, competing tasks. This article presents a novel cognitive computational model of a backward inhibition mechanism. By combining aspects of previous cognitive models of task switching and conflict monitoring, the model instantiates the theoretical proposal that backward inhibition is the direct result of conflict between multiple task representations. In a first simulation, we demonstrate that the model produces two effects widely observed in the empirical literature, specifically, reaction time costs for both (n-1) task switches and n-2 task repeats. Through a systematic search of parameter space, we demonstrate that these effects are a general property of the model's theoretical content, and not of specific parameter settings. We further demonstrate that the model captures previously reported empirical effects of inter-trial interval on n-2 switch costs. A final simulation extends the paradigm of switching between tasks of asymmetric difficulty to three tasks and generates novel predictions for n-2 repetition costs. Specifically, the model predicts that n-2 repetition costs associated with hard-easy-hard alternations are greater than for easy-hard-easy alternations. Finally, we report two behavioural experiments testing this hypothesis, with results consistent with the model predictions.
Near transferable phenomenological n-body potentials for noble metals
NASA Astrophysics Data System (ADS)
Pontikis, Vassilis; Baldinozzi, Gianguido; Luneville, Laurence; Simeone, David
2017-09-01
We present a semi-empirical model of cohesion in noble metals with suitable parameters reproducing a selected set of experimental properties of perfect and defective lattices in noble metals. It consists of two short-range, n-body terms accounting respectively for attractive and repulsive interactions, the former deriving from the second moment approximation of the tight-binding scheme and the latter from the gas approximation of the kinetic energy of electrons. The stability of the face centred cubic versus the hexagonal compact stacking is obtained via a long-range, pairwise function of customary use with ionic pseudo-potentials. Lattice dynamics, molecular statics, molecular dynamics and nudged elastic band calculations show that, unlike previous potentials, this cohesion model reproduces and predicts thermodynamic properties in noble metals quite accurately. In particular, computed surface energies, largely underestimated by existing empirical cohesion models, compare favourably with measured values, whereas predicted unstable stacking-fault energy profiles fit almost perfectly ab initio evaluations from the literature. Altogether, the results suggest that this semi-empirical model is nearly transferable.
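A sketch of the second-moment-approximation energy at the heart of the model; the parameter values are of Gupta-potential magnitude but illustrative, and the long-range pair term that stabilises fcc over hcp in the paper's model is omitted:

```python
import numpy as np

def sma_energy(r_ij, A, xi, p, q, r0):
    """Second-moment-approximation (tight-binding) energy of one atom given
    its neighbour distances r_ij: pairwise Born-Mayer repulsion plus a
    square-root (n-body) band-attraction term."""
    repulsion = np.sum(A * np.exp(-p * (r_ij / r0 - 1.0)))
    band = -np.sqrt(np.sum(xi**2 * np.exp(-2.0 * q * (r_ij / r0 - 1.0))))
    return repulsion + band

# Illustrative parameters (roughly Gupta-potential magnitude, not the
# fitted set of the paper); 12 fcc nearest neighbours at 2.89 angstroms:
r_nn = 2.89 * np.ones(12)
print(sma_energy(r_nn, A=0.10, xi=1.18, p=10.9, q=3.1, r0=2.89), "eV")
```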
Rollover risk prediction of heavy vehicles by reliability index and empirical modelling
NASA Astrophysics Data System (ADS)
Sellami, Yamine; Imine, Hocine; Boubezoul, Abderrahmane; Cadiou, Jean-Charles
2018-03-01
This paper focuses on a combination of a reliability-based approach and an empirical modelling approach for rollover risk assessment of heavy vehicles. A reliability-based warning system is developed to alert the driver to a potential rollover before entering a bend. The idea behind the proposed methodology is to estimate the rollover risk by the probability that the vehicle load transfer ratio (LTR) exceeds a critical threshold. Accordingly, a so-called reliability index may be used as a measure of the vehicle's safe functioning. In the reliability method, computing the maximum of the LTR requires predicting the vehicle dynamics over the bend, which can in some cases be intractable or time-consuming. With the aim of improving the reliability computation time, an empirical model is developed to substitute for the vehicle dynamics and rollover models, using the SVM (Support Vector Machines) algorithm. The preliminary results demonstrate the effectiveness of the proposed approach.
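A minimal sketch of the LTR-based warning logic; the 0.8 threshold and axle loads are illustrative, and the paper's reliability index and SVM surrogate are not reproduced here:

```python
def load_transfer_ratio(fz_left, fz_right):
    """Load transfer ratio: LTR = (Fz_r - Fz_l) / (Fz_r + Fz_l).
    |LTR| -> 1 means one wheel track is about to lift off."""
    return (fz_right - fz_left) / (fz_right + fz_left)

def rollover_warning(fz_pairs, threshold=0.8):
    """Flag a potential rollover if the predicted max |LTR| over the bend
    exceeds a critical threshold (0.8 is an illustrative value)."""
    worst = max(abs(load_transfer_ratio(l, r)) for l, r in fz_pairs)
    return worst > threshold, worst

# Predicted left/right vertical loads (N) at successive instants in a bend:
trajectory = [(42e3, 45e3), (30e3, 58e3), (12e3, 76e3)]
print(rollover_warning(trajectory))   # (False, ~0.73) for these values
```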
Critical length scale controls adhesive wear mechanisms
Aghababaei, Ramin; Warner, Derek H.; Molinari, Jean-Francois
2016-01-01
The adhesive wear process remains one of the least understood areas of mechanics. While it has long been established that adhesive wear is a direct result of contacting surface asperities, an agreed-upon understanding of how contacting asperities lead to wear debris particles has remained elusive. This has restricted adhesive wear prediction to empirical models with limited transferability. Here we show that discrepant observations and predictions of two distinct adhesive wear mechanisms can be reconciled into a unified framework. Using atomistic simulations with model interatomic potentials, we reveal a transition in the asperity wear mechanism when contact junctions fall below a critical length scale. A simple analytic model is formulated to predict the transition in both the simulation results and experiments. This new understanding may help expand use of computer modelling to explore adhesive wear processes and to advance physics-based wear laws without empirical coefficients.
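For context, the canonical empirical relation of the kind the abstract says has limited transferability is Archard's law, sketched here with illustrative values:

```python
def archard_wear_volume(k, load, sliding_distance, hardness):
    """Archard's empirical adhesive wear law, V = k * W * s / H: the kind of
    relation, with a fitted dimensionless coefficient k, that a unified
    asperity-level framework aims to move beyond."""
    return k * load * sliding_distance / hardness

# Illustrative steel-on-steel case: k ~ 1e-4, 100 N load, 1 km of sliding,
# hardness 2 GPa -> worn volume of about 5e-9 m^3.
print(archard_wear_volume(1e-4, 100.0, 1000.0, 2e9), "m^3")
```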
Heinonen, Johannes P M; Palmer, Stephen C F; Redpath, Steve M; Travis, Justin M J
2014-01-01
Individual-based models have gained popularity in ecology, and enable simultaneous incorporation of spatial explicitness and population dynamic processes to understand spatio-temporal patterns of populations. We introduce an individual-based model for understanding and predicting spatial hen harrier (Circus cyaneus) population dynamics in Great Britain. The model uses a landscape with habitat, prey and game management indices. The hen harrier population was initialised according to empirical census estimates for 1988/89 and simulated until 2030, and predictions for 1998, 2004 and 2010 were compared to empirical census estimates for respective years. The model produced a good qualitative match to overall trends between 1989 and 2010. Parameter explorations revealed relatively high elasticity in particular to demographic parameters such as juvenile male mortality. This highlights the need for robust parameter estimates from empirical research. There are clearly challenges for replication of real-world population trends, but this model provides a useful tool for increasing understanding of drivers of hen harrier dynamics and focusing research efforts in order to inform conflict management decisions.
A model for prediction of STOVL ejector dynamics
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1989-01-01
A semi-empirical control-volume approach to ejector modeling for transient performance prediction is presented. This new approach is motivated by the need for a predictive real-time ejector sub-system simulation for Short Take-Off Vertical Landing (STOVL) integrated flight and propulsion controls design applications. Emphasis is placed on discussion of the approximate characterization of the mixing process central to thrust augmenting ejector operation. The proposed ejector model suggests transient flow predictions are possible with a model based on steady-flow data. A practical test case is presented to illustrate model calibration.
Modeling thermal sensation in a Mediterranean climate—a comparison of linear and ordinal models
NASA Astrophysics Data System (ADS)
Pantavou, Katerina; Lykoudis, Spyridon
2014-08-01
A simple thermo-physiological model of outdoor thermal sensation adjusted with psychological factors is developed, aiming to predict thermal sensation in Mediterranean climates. Microclimatic measurements, together with interviews on personal and psychological conditions, were carried out in a square, a street canyon and a coastal location of the greater urban area of Athens, Greece. Multiple linear and ordinal regression were applied to estimate thermal sensation using either all the recorded parameters or specific, empirically selected subsets, producing so-called extensive and empirical models, respectively. Meteorological, thermo-physiological and overall models - the last also considering psychological factors - were developed. Predictions improved when personal and psychological factors were taken into account, as compared to meteorological models. The model based on ordinal regression reproduced extreme values of the thermal sensation vote more adequately than the linear regression one, while the empirical model produced satisfactory results relative to the extensive model. The effects of adaptation and expectation on the thermal sensation vote were introduced in the models by means of exposure time, season and preference related to air temperature and irradiation. The assessment of thermal sensation could be a useful criterion in decision making regarding public health, outdoor space planning and tourism.
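As a rough illustration of the linear-versus-ordinal comparison described above, the sketch below fits both model types to synthetic thermal sensation votes. It assumes statsmodels' OrderedModel (statsmodels >= 0.12); the predictors, coefficients and scale cut-points are invented for illustration, not the study's.

```python
# Sketch: linear vs. ordinal regression for thermal sensation votes (TSV).
# Synthetic data; predictors and cut-points are assumptions, not the study's.
import numpy as np
from sklearn.linear_model import LinearRegression
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
Ta = rng.uniform(10, 40, n)                 # air temperature (degC)
rh = rng.uniform(20, 90, n)                 # relative humidity (%)
latent = 0.25 * Ta + 0.02 * rh + rng.normal(0, 1, n)
tsv = np.digitize(latent, bins=[3, 4.5, 6, 7.5, 9, 10.5]) - 3  # -3..+3 scale

X = np.column_stack([Ta, rh])
lin = LinearRegression().fit(X, tsv)        # treats TSV as continuous
ordm = OrderedModel(tsv, X, distr="logit").fit(method="bfgs", disp=False)

cats = np.unique(tsv)                       # categories actually observed
ord_pred = cats[ordm.predict(X).argmax(axis=1)]
print("linear  RMSE:", np.sqrt(np.mean((lin.predict(X) - tsv) ** 2)))
print("ordinal RMSE:", np.sqrt(np.mean((ord_pred - tsv) ** 2)))
print("extreme votes (+/-3) predicted by ordinal model:",
      int(np.isin(ord_pred, [-3, 3]).sum()))
```

Because the ordinal model predicts a probability for each category rather than a conditional mean, its modal prediction can land on the extreme categories that a least-squares fit tends to shrink toward the middle of the scale, which is consistent with the finding reported above.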
NASA Astrophysics Data System (ADS)
Wei, Haoyang
A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of that model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is therefore proposed, with the help of the Mroz-Garud hardening rule, to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed, since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model works for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integration with the finite element method. Fatigue crack initiation in representative volumes of heterogeneous materials is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on multiaxial fatigue life predictions. Several conclusions are drawn and future work is proposed based on this study.
Source Model of Huge Subduction Earthquakes for Strong Ground Motion Prediction
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.
2012-12-01
Constructing source models of huge subduction earthquakes is a very important issue for strong ground motion prediction. Irikura and Miyake (2001, 2011) proposed the characterized source model for strong ground motion prediction, which consists of plural strong motion generation area (SMGA, Miyake et al., 2003) patches on the source fault. We obtained SMGA source models for many events using the empirical Green's function method and found that SMGA size follows an empirical scaling relationship with seismic moment. Therefore, the SMGA size can be estimated from that empirical relation given the seismic moment of an anticipated earthquake. Concerning the positioning of the SMGAs, information on fault segmentation is useful for inland crustal earthquakes. For the 1995 Kobe earthquake, three SMGA patches were obtained, with the Nojima, Suma, and Suwayama segments each containing one SMGA (e.g. Kamae and Irikura, 1998). For the 2011 Tohoku earthquake, Asano and Iwata (2012) estimated the SMGA source model and obtained four SMGA patches on the source fault. The total SMGA area follows the extension of the empirical scaling relationship between seismic moment and SMGA area for subduction plate-boundary earthquakes, demonstrating the applicability of that empirical scaling relationship. Two of the SMGAs lie in the Miyagi-Oki segment, and the other two lie in the Fukushima-Oki and Ibaraki-Oki segments, respectively. Asano and Iwata (2012) also pointed out that all the SMGAs correspond to historical source areas of the 1930s. The SMGAs do not overlap the huge slip area in the shallower part of the source fault estimated from teleseismic data, long-period strong motion data, and/or geodetic data during the 2011 mainshock. This shows that the huge slip area does not contribute to strong ground motion generation (10-0.1 s). Information on fault segmentation in the subduction zone, or on historical earthquake source areas, is thus also applicable to the construction of SMGA settings for strong ground motion prediction of future earthquakes.
Sperry, John S; Venturas, Martin D; Anderegg, William R L; Mencuccini, Maurizio; Mackay, D Scott; Wang, Yujie; Love, David M
2017-06-01
Stomatal regulation presumably evolved to optimize CO2 for H2O exchange in response to changing conditions. If the optimization criterion can be readily measured or calculated, then stomatal responses can be efficiently modelled without recourse to empirical models or underlying mechanism. Previous efforts have been challenged by the lack of a transparent index for the cost of losing water. Yet it is accepted that stomata control water loss to avoid excessive loss of hydraulic conductance from cavitation and soil drying. Proximity to hydraulic failure and desiccation can represent the cost of water loss. If at any given instant, the stomatal aperture adjusts to maximize the instantaneous difference between photosynthetic gain and hydraulic cost, then a model can predict the trajectory of stomatal responses to changes in environment across time. Results of this optimization model are consistent with the widely used Ball-Berry-Leuning empirical model (r2 > 0.99) across a wide range of vapour pressure deficits and ambient CO2 concentrations for wet soil. The advantage of the optimization approach is the absence of empirical coefficients, applicability to dry as well as wet soil and prediction of plant hydraulic status along with gas exchange. © 2016 John Wiley & Sons Ltd.
Early prediction of extreme stratospheric polar vortex states based on causal precursors
NASA Astrophysics Data System (ADS)
Kretschmer, Marlene; Runge, Jakob; Coumou, Dim
2017-08-01
Variability in the stratospheric polar vortex (SPV) can influence the tropospheric circulation and thereby winter weather. Early predictions of extreme SPV states are thus important for improving forecasts of winter weather, including cold spells. However, dynamical models are usually restricted in lead time because they poorly capture low-frequency processes. Empirical models often suffer from overfitting problems, as the relevant physical processes and time lags are often not well understood. Here we introduce a novel empirical prediction method by uniting a response-guided community detection scheme with a causal discovery algorithm. This way, we objectively identify causal precursors of the SPV at subseasonal lead times and find them to be in good agreement with known physical drivers. A linear regression prediction model based on the causal precursors can explain most SPV variability (r2 = 0.58), and our scheme correctly predicts 58% (46%) of extremely weak SPV states for lead times of 1-15 (16-30) days with false-alarm rates of only approximately 5%. Our method can be applied to any variable relevant for (sub)seasonal weather forecasts and could thus help improve long-lead predictions.
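A minimal sketch of the final stage only: a linear regression on precursor indices plus event-based verification scores of the kind quoted above. The precursor construction (community detection and causal discovery) is not reproduced; all data and the "weak vortex" threshold are placeholder assumptions.

```python
# Sketch: regression on (assumed, pre-computed) causal precursor indices,
# followed by hit-rate and false-alarm verification for weak-vortex events.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_days, n_precursors = 3000, 4
X = rng.normal(size=(n_days, n_precursors))      # lagged precursor indices
spv = X @ np.array([0.6, -0.4, 0.3, 0.2]) + rng.normal(0, 0.6, n_days)

fit = LinearRegression().fit(X, spv)
pred = fit.predict(X)
ss_res = np.sum((spv - pred) ** 2)
ss_tot = np.sum((spv - spv.mean()) ** 2)
print("r2:", 1 - ss_res / ss_tot)

# "Extremely weak vortex" defined here, as an assumption, as lowest decile.
thr = np.quantile(spv, 0.1)
hit_rate = np.mean(pred[spv < thr] < thr)
false_alarm = np.mean(spv[pred < thr] >= thr)
print("hit rate:", hit_rate, "false-alarm ratio:", false_alarm)
```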
A comparison of three radiation models for the calculation of nozzle arcs
NASA Astrophysics Data System (ADS)
Dixon, C. M.; Yan, J. D.; Fang, M. T. C.
2004-12-01
Three radiation models, the semi-empirical model based on net emission coefficients (Zhang et al 1987 J. Phys. D: Appl. Phys. 20 386-79), the five-band P1 model (Eby et al 1998 J. Phys. D: Appl. Phys. 31 1578-88), and the method of partial characteristics (Aubrecht and Lowke 1994 J. Phys. D: Appl. Phys. 27 2066-73, Sevast'yanenko 1979 J. Eng. Phys. 36 138-48), are used to calculate the radiation transfer in an SF6 nozzle arc. The temperature distributions computed by the three models are compared with the measurements of Leseberg and Pietsch (1981 Proc. 4th Int. Symp. on Switching Arc Phenomena (Lodz, Poland) pp 236-40) and Leseberg (1982 PhD Thesis RWTH Aachen, Germany). It has been found that all three models give similar distributions of radiation loss per unit time and volume. For arcs burning in axially dominated flow, such as arcs in nozzle flow, the semi-empirical model and the P1 model give accurate predictions when compared with experimental results. The prediction by the method of partial characteristics is poorest. The computational cost is the lowest for the semi-empirical model.
Predictive and mechanistic multivariate linear regression models for reaction development
Santiago, Celine B.; Guo, Jing-Yao
2018-01-01
Multivariate Linear Regression (MLR) models utilizing computationally-derived and empirically-derived physical organic molecular descriptors are described in this review. Several reports demonstrating the effectiveness of this methodological approach towards reaction optimization and mechanistic interrogation are discussed. A detailed protocol to access quantitative and predictive MLR models is provided as a guide for model development and parameter analysis. PMID:29719711
ERIC Educational Resources Information Center
Moore, Corey L.; Wang, Ningning; Washington, Janique Tynez
2017-01-01
Purpose: This study assessed and demonstrated the efficacy of two select empirical forecast models (i.e., autoregressive integrated moving average [ARIMA] model vs. grey model [GM]) in accurately predicting state vocational rehabilitation agency (SVRA) rehabilitation success rate trends across six different racial and ethnic population cohorts…
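A hedged sketch contrasting the two forecast families named above on a short synthetic annual success-rate series: a textbook GM(1,1) implemented directly, and an ARIMA fit via statsmodels. The series and model orders are illustrative, not the study's data.

```python
# Sketch: ARIMA vs. grey model GM(1,1) on a short annual rate series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def gm11_forecast(x0, steps):
    """Classic GM(1,1): fit on series x0, forecast `steps` values ahead."""
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # grey parameters
    k = np.arange(1, len(x0) + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat])) # de-accumulate
    return x0_hat[-steps:]

rates = np.array([52.1, 53.4, 51.8, 54.0, 55.2, 54.7, 56.1, 57.0])  # synthetic
print("GM(1,1) next 3 years:", gm11_forecast(rates, 3))
arima = ARIMA(rates, order=(1, 1, 0)).fit()
print("ARIMA   next 3 years:", arima.forecast(3))
```

The grey model's appeal in this setting is visible in the code: it fits a two-parameter exponential to the accumulated series, so it remains usable on the very short records typical of annual agency data, where ARIMA order selection is fragile.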
NASA Technical Reports Server (NTRS)
Ulaby, F. T. (Principal Investigator); Wilson, E. A.
1984-01-01
A semi-empirical model for microwave backscatter from vegetation was developed, and a complete set of canopy attenuation measurements as a function of frequency, incidence angle and polarization was acquired. The semi-empirical model was tested on corn and sorghum data over the 8 to 35 GHz range. The model generally provided an excellent fit to the data as measured by the correlation and rms error between observed and predicted data. The model also predicted reasonable values of canopy attenuation. The attenuation data were acquired over the 1.6 to 10.2 GHz range for the linear polarizations at approximately 20 deg and 50 deg incidence angles for wheat and soybeans. An attenuation model is proposed which provides reasonable agreement with the measured data.
An empirical model for inverted-velocity-profile jet noise prediction
NASA Technical Reports Server (NTRS)
Stone, J. R.
1977-01-01
An empirical model for predicting the noise from inverted-velocity-profile coaxial or coannular jets is presented and compared with small-scale static and simulated flight data. The model considered the combined contributions of as many as four uncorrelated constituent sources: the premerged-jet/ambient mixing region, the merged-jet/ambient mixing region, outer-stream shock/turbulence interaction, and inner-stream shock/turbulence interaction. The noise from the merged region occurs at relatively low frequency and is modeled as the contribution of a circular jet at merged conditions and total exhaust area, with the high frequencies attenuated. The noise from the premerged region occurs at high frequency and is modeled as the contribution of an equivalent plug nozzle at outer stream conditions, with the low frequencies attenuated.
Empirical fitness landscapes and the predictability of evolution.
de Visser, J Arjan G M; Krug, Joachim
2014-07-01
The genotype-fitness map (that is, the fitness landscape) is a key determinant of evolution, yet it has mostly been used as a superficial metaphor because we know little about its structure. This is now changing, as real fitness landscapes are being analysed by constructing genotypes with all possible combinations of small sets of mutations observed in phylogenies or in evolution experiments. In turn, these first glimpses of empirical fitness landscapes inspire theoretical analyses of the predictability of evolution. Here, we review these recent empirical and theoretical developments, identify methodological issues and organizing principles, and discuss possibilities to develop more realistic fitness landscape models.
Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Edward V.; Lewis, John R.; Anderson-Cook, Christine Michaela
2017-07-01
Inverse prediction is important in a variety of scientific and engineering applications, such as predicting properties/characteristics of an object from multiple measurements obtained from it. Inverse prediction can be accomplished by inverting parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are computational/science based; but often, forward models are empirically based response surface models, obtained by using the results of controlled experimentation. For empirical models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). And while nature dictates the causal relationships between factors and responses, experimenters can control the complexity, accuracy, and precision of forward models constructed via selection of factors, factor levels, and the set of trials that are performed. Recognition of the uncertainty in the estimated forward models leads to an errors-in-variables approach for inverse prediction. The forward models (estimated by experiments or science based) can also be used to analyze how well candidate responses complement one another for inverse prediction over the range of the factor space of interest. Furthermore, one may find that some responses are complementary, redundant, or noninformative. Simple analysis and examples illustrate how an informative and discriminating subset of responses could be selected among candidates in cases where the number of responses that can be acquired during inverse prediction is limited by difficulty, expense, and/or availability of material.
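A minimal sketch of inverse prediction by inverting empirically fitted forward models, under invented assumptions: a quadratic response-surface basis, two responses, and a single factor. The errors-in-variables machinery described above is omitted.

```python
# Sketch: fit forward response-surface models from a designed experiment,
# then invert them jointly to predict the factor from new measurements.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

def design(x):
    return np.array([1.0, x, x * x])    # quadratic basis in one factor

# Controlled experiment: factor levels and two measured responses
levels = np.linspace(0.0, 1.0, 11)
truth = {"y1": [0.2, 1.5, -0.4], "y2": [1.0, -0.8, 0.9]}   # hypothetical
Y = {k: np.array([design(x) @ c for x in levels]) + rng.normal(0, 0.02, 11)
     for k, c in truth.items()}

# Fit the empirical forward models by least squares
Phi = np.array([design(x) for x in levels])
coef = {k: np.linalg.lstsq(Phi, y, rcond=None)[0] for k, y in Y.items()}

# Inverse prediction: find the factor whose predicted responses best
# match new measurements from an unknown object.
y_new = np.array([0.9, 1.05])
def residual(x):
    return np.array([design(x[0]) @ coef["y1"],
                     design(x[0]) @ coef["y2"]]) - y_new
x_hat = least_squares(residual, x0=[0.5], bounds=(0.0, 1.0)).x[0]
print("inverse-predicted factor level:", x_hat)
```

Using both responses in the residual is the point of the abstract's "complementary responses" argument: a second, differently shaped response surface can break ambiguities (e.g. non-monotonicity) that would make inversion from a single response ill-posed.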
Model development and applications at the USDA-ARS National Soil Erosion Research Laboratory
USDA-ARS?s Scientific Manuscript database
The United States Department of Agriculture (USDA) has a long history of development of soil erosion prediction technology, initially with empirical equations like the Universal Soil Loss Equation (USLE), and more recently with process-based models such as the Water Erosion Prediction Project (WEPP)...
Predicting Child Abuse Potential: An Empirical Investigation of Two Theoretical Frameworks
ERIC Educational Resources Information Center
Begle, Angela Moreland; Dumas, Jean E.; Hanson, Rochelle F.
2010-01-01
This study investigated two theoretical risk models predicting child maltreatment potential: (a) Belsky's (1993) developmental-ecological model and (b) the cumulative risk model in a sample of 610 caregivers (49% African American, 46% European American; 53% single) with a child between 3 and 6 years old. Results extend the literature by using a…
Martin, Leigh J; Murray, Brad R
2011-05-01
The invasive spread of exotic plants in native vegetation can pose serious threats to native faunal assemblages. This is of particular concern for reptiles and amphibians because they form a significant component of the world's vertebrate fauna, play a pivotal role in ecosystem functioning and are often neglected in biodiversity research. A framework to predict how exotic plant invasion will affect reptile and amphibian assemblages is imperative for conservation, management and the identification of research priorities. Here, we present a new predictive framework that integrates three mechanistic models. These models are based on exotic plant invasion altering: (1) habitat structure; (2) herbivory and predator-prey interactions; (3) the reproductive success of reptile and amphibian species and assemblages. We present a series of testable predictions from these models that arise from the interplay over time among three exotic plant traits (growth form, area of coverage, taxonomic distinctiveness) and six traits of reptiles and amphibians (body size, lifespan, home range size, habitat specialisation, diet, reproductive strategy). A literature review provided robust empirical evidence of exotic plant impacts on reptiles and amphibians from each of the three model mechanisms. Evidence relating to the role of body size and diet was less clear-cut, indicating the need for further research. The literature provided limited empirical support for many of the other model predictions. This was not, however, because findings contradicted our model predictions but because research in this area is sparse. In particular, the small number of studies specifically examining the effects of exotic plants on amphibians highlights the pressing need for quantitative research in this area. There is enormous scope for detailed empirical investigation of interactions between exotic plants and reptile and amphibian species and assemblages. The framework presented here and further testing of predictions will provide a basis for informing and prioritising environmental management and exotic plant control efforts. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.
Gravity wave control on ESF day-to-day variability: An empirical approach
NASA Astrophysics Data System (ADS)
Aswathy, R. P.; Manju, G.
2017-06-01
The gravity wave control on the day-to-day variation in nighttime ionization irregularity occurrence is studied using ionosonde data for the period 2002-2007 at the magnetic equatorial location Trivandrum. Recent studies during a low solar activity period have revealed that the seed perturbations must have a threshold amplitude, at a particular altitude, to trigger equatorial spread F (ESF), and that this threshold amplitude undergoes seasonal and solar cycle changes. In the present study, the altitude variation of the threshold seed perturbation is examined for the autumnal equinox of different years. Thereafter, a unique empirical model incorporating the electrodynamical effects and the gravity wave modulation is developed. Using the model, the threshold curve for the autumnal equinox season of any year may be delineated if the solar flux index (F10.7) is known. The empirical model is validated using data for the high, moderate, and low solar epochs of 2001, 2004, and 1995, respectively. This model has the potential to be developed further to forecast ESF incidence when the base height of the ionosphere is in the altitude region where electrodynamics controls the occurrence of ESF. ESF irregularities are harmful to communication and navigation systems, and research is therefore ongoing globally to predict them. In this context, this study is crucial for evolving a methodology to predict communication and navigation outages.
Model for estimating enteric methane emissions from United States dairy and feedlot cattle.
Kebreab, E; Johnson, K A; Archibeque, S L; Pape, D; Wirth, T
2008-10-01
Methane production from enteric fermentation in cattle is one of the major sources of anthropogenic greenhouse gas emission in the United States and worldwide. National estimates of methane emissions rely on mathematical models such as the one recommended by the Intergovernmental Panel for Climate Change (IPCC). Models used for prediction of methane emissions from cattle range from empirical to mechanistic with varying input requirements. Two empirical and 2 mechanistic models (COWPOLL and MOLLY) were evaluated for their prediction ability using individual cattle measurements. Model selection was based on mean square prediction error (MSPE), concordance correlation coefficient, and residuals vs. predicted values analyses. In dairy cattle, COWPOLL had the lowest root MSPE and greatest accuracy and precision of predicting methane emissions (correlation coefficient estimate = 0.75). The model simulated differences in diet more accurately than the other models, and the residuals vs. predicted value analysis showed no mean bias (P = 0.71). In feedlot cattle, MOLLY had the lowest root MSPE with almost all errors from random sources (correlation coefficient estimate = 0.69). The IPCC model also had good agreement with observed values, and no significant mean (P = 0.74) or linear bias (P = 0.11) was detected when residuals were plotted against predicted values. A fixed methane conversion factor (Ym) might be an easier alternative to diet-dependent variable Ym. Based on the results, the 2 mechanistic models were used to simulate methane emissions from representative US diets and were compared with the IPCC model. The average Ym in dairy cows was 5.63% of GE (range 3.78 to 7.43%) compared with 6.5% +/- 1% recommended by IPCC. In feedlot cattle, the average Ym was 3.88% (range 3.36 to 4.56%) compared with 3% +/- 1% recommended by IPCC. Based on our simulations, using IPCC values can result in an overestimate of about 12.5% and underestimate of emissions by about 9.8% for dairy and feedlot cattle, respectively. In addition to providing improved estimates of emissions based on diets, mechanistic models can be used to assess mitigation options such as changing source of carbohydrate or addition of fat to decrease methane, which is not possible with empirical models. We recommend national inventories use diet-specific Ym values predicted by mechanistic models to estimate methane emissions from cattle.
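The evaluation statistics named above can be computed directly; the sketch below shows root MSPE and Lin's concordance correlation coefficient on placeholder observed/predicted methane values (not the study's data).

```python
# Sketch: root mean square prediction error and Lin's concordance
# correlation coefficient for model-vs-observation comparison.
import numpy as np

def rmspe(obs, pred):
    return np.sqrt(np.mean((obs - pred) ** 2))

def concordance_ccc(obs, pred):
    """Lin's CCC: precision (Pearson r) scaled by an accuracy term."""
    r = np.corrcoef(obs, pred)[0, 1]
    num = 2 * r * obs.std() * pred.std()
    den = obs.var() + pred.var() + (obs.mean() - pred.mean()) ** 2
    return num / den

obs = np.array([310.0, 295.0, 340.0, 330.0, 305.0])   # g CH4/d, synthetic
pred = np.array([300.0, 288.0, 352.0, 321.0, 310.0])
print("root MSPE:", rmspe(obs, pred))
print("CCC:", concordance_ccc(obs, pred))
```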
List, Jeffrey; Benedet, Lindino; Hanes, Daniel M.; Ruggiero, Peter
2009-01-01
Predictions of alongshore transport gradients are critical for forecasting shoreline change. At the previous ICCE conference, it was demonstrated that alongshore transport gradients predicted by the empirical CERC equation can differ substantially from predictions made by the hydrodynamics-based model Delft3D in the case of a simulated borrow pit on the shoreface. Here we use the Delft3D momentum balance to examine the reason for this difference. Alongshore advective flow accelerations in our Delft3D simulation are mainly driven by pressure gradients resulting from alongshore variations in wave height and setup, and Delft3D transport gradients are controlled by these flow accelerations. The CERC equation does not take this process into account, and for this reason a second empirical transport term is sometimes added when alongshore gradients in wave height are thought to be significant. However, our test case indicates that this second term does not properly predict alongshore transport gradients.
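For orientation, a sketch using one common form of the CERC bulk transport formula and a finite-difference transport gradient; the coefficient K, breaker parameters, and the Gaussian wave-height perturbation (standing in for focusing over a borrow pit) are illustrative assumptions, not values from the study.

```python
# Sketch: CERC alongshore transport and its alongshore gradient when the
# breaker height varies alongshore at a fixed breaker angle.
import numpy as np

def cerc_transport(Hb, alpha_b, K=0.39, g=9.81, gamma_b=0.78,
                   s=2.65, p=0.4):
    """Volumetric alongshore transport Q (m^3/s) at breaking."""
    return (K * np.sqrt(g / gamma_b) * Hb ** 2.5 *
            np.sin(2 * np.deg2rad(alpha_b)) / (16 * (s - 1) * (1 - p)))

y = np.linspace(0, 2000, 41)                    # alongshore position (m)
Hb = 1.2 + 0.3 * np.exp(-((y - 1000) / 300) ** 2)  # focused breaker height
Q = cerc_transport(Hb, alpha_b=8.0)
dQdy = np.gradient(Q, y)                        # transport gradient
print("max |dQ/dy| (m^3/s per m):", np.abs(dQdy).max())
```

Note what the sketch does and does not capture: the gradient here comes only from the H_b^2.5 dependence, whereas the abstract's point is that setup-driven pressure gradients accelerate the alongshore flow, a mechanism absent from the CERC formulation.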
Sebok, Angelia; Wickens, Christopher D
2017-03-01
The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed here prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation will fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
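A minimal sketch of the idea for a linear-in-parameters basis: with a hyper-rectangular parameter set described by a center c and half-width w, fitting an IPM of minimal average spread that covers all observations reduces to a linear program. The basis and data are toy assumptions; the outlier-elimination strategy is omitted.

```python
# Sketch: interval predictor model via linear programming. The interval at
# input x is c.phi(x) +/- w.|phi(x)|; coverage of every observation and
# minimal total spread define the LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 60)
y = 1.0 + 2.0 * x + rng.uniform(-0.3, 0.3, 60)    # bounded noise

Phi = np.column_stack([np.ones_like(x), x])       # basis phi(x) = [1, x]
A = np.abs(Phi)
n, d = Phi.shape

# Variables z = [c (d entries), w (d entries)]; minimize total spread.
cost = np.concatenate([np.zeros(d), A.sum(axis=0)])
# Coverage constraints:  -Phi c - A w <= -y   and   Phi c - A w <= y
A_ub = np.vstack([np.hstack([-Phi, -A]), np.hstack([Phi, -A])])
b_ub = np.concatenate([-y, y])
bounds = [(None, None)] * d + [(0, None)] * d     # half-widths w >= 0
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
c_hat, w_hat = res.x[:d], res.x[d:]

x_new = 0.5
phi = np.array([1.0, x_new])
mid, half = phi @ c_hat, np.abs(phi) @ w_hat
print(f"predicted interval at x={x_new}: [{mid - half:.3f}, {mid + half:.3f}]")
```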
Ziegler, Sigurd; Pedersen, Mads L; Mowinckel, Athanasia M; Biele, Guido
2016-12-01
Attention deficit hyperactivity disorder (ADHD) is characterized by altered decision-making (DM) and reinforcement learning (RL), for which competing theories propose alternative explanations. Computational modelling contributes to understanding DM and RL by integrating behavioural and neurobiological findings, and could elucidate pathogenic mechanisms behind ADHD. This review of neurobiological theories of ADHD describes predictions for the effect of ADHD on DM and RL as described by the drift-diffusion model of DM (DDM) and a basic RL model. Empirical studies employing these models are also reviewed. While theories often agree on how ADHD should be reflected in model parameters, each theory implies a unique combination of predictions. Empirical studies agree with the theories' assumptions of a lowered DDM drift rate in ADHD, while findings are less conclusive for boundary separation. The few studies employing RL models support a lower choice sensitivity in ADHD, but not an altered learning rate. The discussion outlines research areas for further theoretical refinement in the ADHD field. Copyright © 2016 Elsevier Ltd. All rights reserved.
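A toy simulation of the drift-diffusion model illustrating the lowered-drift-rate finding discussed above; all parameter values are illustrative.

```python
# Sketch: Euler simulation of the DDM. Lower drift rate -> slower, less
# accurate responses, the pattern the reviewed studies associate with ADHD.
import numpy as np

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001,
                 n_trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    rt, correct = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:
            evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rt.append(t)
        correct.append(evidence > 0)      # upper boundary = correct response
    return np.mean(rt), np.mean(correct)

for label, v in [("control drift", 1.2), ("lowered drift", 0.6)]:
    mean_rt, acc = simulate_ddm(v)
    print(f"{label}: mean RT = {mean_rt:.3f} s, accuracy = {acc:.2f}")
```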
Elastohydrodynamic film thickness model for heavily loaded contacts
NASA Technical Reports Server (NTRS)
Loewenthal, S. H.; Parker, R. J.; Zaretsky, E. V.
1973-01-01
An empirical elastohydrodynamic (EHD) film thickness formula for predicting the minimum film thickness occurring within heavily loaded contacts (maximum Hertz stresses above 150,000 psi) was developed. The formula was based upon X-ray film thickness measurements made with synthetic paraffinic, fluorocarbon, Type II ester and polyphenyl ether fluids covering a wide range of test conditions. Comparisons were made between predictions from an isothermal EHD theory and the test data. The deduced relationship was found to adequately reflect the high-load dependence exhibited by the measured data. The effects of contact geometry, material and lubricant properties on the form of the empirical model are also discussed.
Semi-empirical airframe noise prediction model
NASA Technical Reports Server (NTRS)
Hersh, A. S.; Putnam, T. W.; Lasagna, P. L.; Burcham, F. W., Jr.
1976-01-01
A semi-empirical maximum overall sound pressure level (OASPL) airframe noise model was derived. The noise radiated from aircraft wings and flaps was modeled by using the trailing-edge diffracted quadrupole sound theory derived by Ffowcs Williams and Hall. The noise radiated from the landing gear was modeled by using the acoustic dipole sound theory derived by Curle. The model was successfully correlated with maximum OASPL flyover noise measurements obtained at the NASA Dryden Flight Research Center for three jet aircraft - the Lockheed JetStar, the Convair 990, and the Boeing 747 aircraft.
A model of rotationally-sampled wind turbulence for predicting fatigue loads in wind turbines
NASA Technical Reports Server (NTRS)
Spera, David A.
1995-01-01
Empirical equations are presented with which to model rotationally-sampled (R-S) turbulence for input to structural-dynamic computer codes and the calculation of wind turbine fatigue loads. These equations are derived from R-S turbulence data which were measured at the vertical-plane array in Clayton, New Mexico. For validation, the equations are applied to the calculation of cyclic flapwise blade loads for the NASA/DOE Mod-2 2.5-MW experimental HAWTs (horizontal-axis wind turbines), and the results compared to measured cyclic loads. Good correlation is achieved, indicating that the R-S turbulence model developed in this study contains the characteristics of the wind which produce many of the fatigue loads sustained by wind turbines. Empirical factors are included which permit the prediction of load levels at specified percentiles of occurrence, which is required for the generation of fatigue load spectra and the prediction of the fatigue lifetime of structures.
Electrochemical carbon dioxide concentrator: Math model
NASA Technical Reports Server (NTRS)
Marshall, R. D.; Schubert, F. H.; Carlson, J. N.
1973-01-01
A steady state computer simulation model of an Electrochemical Depolarized Carbon Dioxide Concentrator (EDC) has been developed. The mathematical model combines EDC heat and mass balance equations with empirical correlations derived from experimental data to describe EDC performance as a function of the operating parameters involved. The model is capable of accurately predicting performance over EDC operating ranges. Model simulation results agree with the experimental data obtained over the prediction range.
Köster, Andreas; Spura, Thomas; Rutkai, Gábor; Kessler, Jan; Wiebeler, Hendrik; Vrabec, Jadran; Kühne, Thomas D
2016-07-15
The accuracy of water models derived from ab initio molecular dynamics simulations by means of an improved force-matching scheme is assessed for various thermodynamic, transport, and structural properties. It is found that although the resulting force-matched water models are typically less accurate than fully empirical force fields in predicting thermodynamic properties, they are nevertheless much more accurate than generally appreciated in reproducing the structure of liquid water, in fact surpassing most of the commonly used empirical water models. This development demonstrates the feasibility of routinely parametrizing computationally efficient yet predictive potential energy functions based on accurate ab initio molecular dynamics simulations for a large variety of different systems. © 2016 Wiley Periodicals, Inc.
Numerical and Qualitative Contrasts of Two Statistical Models ...
Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently-
Analysis methods for Kevlar shield response to rotor fragments
NASA Technical Reports Server (NTRS)
Gerstle, J. H.
1977-01-01
Several empirical and analytical approaches to rotor burst shield sizing are compared and principal differences in metal and fabric dynamic behavior are discussed. The application of transient structural response computer programs to predict Kevlar containment limits is described. For preliminary shield sizing, present analytical methods are useful if insufficient test data for empirical modeling are available. To provide other information useful for engineering design, analytical methods require further developments in material characterization, failure criteria, loads definition, and post-impact fragment trajectory prediction.
NASA Astrophysics Data System (ADS)
Norros, Veera; Laine, Marko; Lignell, Risto; Thingstad, Frede
2017-10-01
Methods for extracting empirically and theoretically sound parameter values are urgently needed in aquatic ecosystem modelling to describe key flows and their variation in the system. Here, we compare three Bayesian formulations for mechanistic model parameterization that differ in their assumptions about the variation in parameter values between various datasets: 1) global analysis - no variation, 2) separate analysis - independent variation and 3) hierarchical analysis - variation arising from a shared distribution defined by hyperparameters. We tested these methods, using computer-generated and empirical data, coupled with simplified and reasonably realistic plankton food web models, respectively. While all methods were adequate, the simulated example demonstrated that a well-designed hierarchical analysis can result in the most accurate and precise parameter estimates and predictions, due to its ability to combine information across datasets. However, our results also highlighted sensitivity to hyperparameter prior distributions as an important caveat of hierarchical analysis. In the more complex empirical example, hierarchical analysis was able to combine precise identification of parameter values with reasonably good predictive performance, although the ranking of the methods was less straightforward. We conclude that hierarchical Bayesian analysis is a promising tool for identifying key ecosystem-functioning parameters and their variation from empirical datasets.
Empirical Corrections to Nutation Amplitudes and Precession Computed from a Global VLBI Solution
NASA Astrophysics Data System (ADS)
Schuh, H.; Ferrandiz, J. M.; Belda-Palazón, S.; Heinkelmann, R.; Karbon, M.; Nilsson, T.
2017-12-01
The IAU2000A nutation and IAU2006 precession models were adopted to provide accurate estimations and predictions of the Celestial Intermediate Pole (CIP). However, they are not fully accurate, and VLBI (Very Long Baseline Interferometry) observations show that the CIP deviates from the position resulting from the application of the IAU2006/2000A model. Currently, those deviations or offsets of the CIP (Celestial Pole Offsets - CPO) can only be obtained by the VLBI technique. Accuracy of the order of 0.1 milliseconds of arc (mas) allows comparison of the observed nutation with theoretical predictions for a rigid Earth and constrains geophysical parameters describing the Earth's interior. In this study, we empirically evaluate the consistency, systematics and deviations of the IAU 2006/2000A precession-nutation model using several CPO time series derived from the global analysis of VLBI sessions. The final objective is the reassessment of the precession offset and rate, and of the amplitudes of the principal terms of nutation, in an attempt to empirically improve the conventional values derived from the precession/nutation theories. The statistical analysis of the residuals after re-fitting the main nutation terms demonstrates that our empirical corrections attain an error reduction of almost 15 microarcseconds.
Are recent empirical directivity models sufficient in capturing near-fault directivity effect?
NASA Astrophysics Data System (ADS)
Chen, Yen-Shin; Cotton, Fabrice; Pagani, Marco; Weatherill, Graeme; Reshi, Owais; Mai, Martin
2017-04-01
It has been widely observed that ground motion variability in the near field can be significantly higher than that commonly reported in published GMPEs, and this has been suggested to be a consequence of directivity. To capture the spatial variation in ground motion amplitude and frequency caused by the near-fault directivity effect, several models for engineering applications have been developed using empirical data or, more recently, a combination of empirical and simulation data. Many studies have indicated that the large velocity pulses mainly observed in the near field are primarily related to slip heterogeneity (i.e., asperities), suggesting that slip heterogeneity is a more dominant controlling factor than the rupture velocity or source rise time function. The first generation of broadband directivity models for application in ground motion prediction does not account for heterogeneity of slip and rupture speed. With the increased availability of strong motion recordings in the near-fault region (e.g., the NGA-West 2 database), directivity models moved from broadband to narrowband formulations to include the magnitude dependence of the period of the rupture directivity pulses, wherein the pulses are believed to be closely related to the heterogeneity of the slip distribution. After decades of directivity model development, does the latest generation of models - i.e., those including narrowband directivity models - better capture near-fault directivity effects, particularly in the presence of strong slip heterogeneity? To address this question, a set of simulated motions for an earthquake rupture scenario, with various kinematic slip models and hypocenter locations, is used as a basis for comparison with the directivity models proposed by the NGA-West 2 project for application with ground motion prediction equations incorporating a narrowband directivity model. The aim of this research is to gain better insight into the accuracy of narrowband directivity models under conditions commonly encountered in the real world. Our preliminary results show that empirical models including directivity factors predict physics-based ground motion and its spatial variability better than classical empirical models do. However, the results clearly indicate that it remains a challenge for directivity models to capture the strong directivity effect when a high level of slip heterogeneity is involved in the source rupture process.
NASA Technical Reports Server (NTRS)
Richey, Edward, III
1995-01-01
This research aims to develop the methods and understanding needed to incorporate time- and loading-variable-dependent environmental effects on fatigue crack propagation (FCP) into computerized fatigue life prediction codes such as NASA FLAGRO (NASGRO). In particular, the effect of loading frequency on FCP rates in alpha + beta titanium alloys exposed to an aqueous chloride solution is investigated. The approach couples empirical modeling of environmental FCP with corrosion fatigue experiments. Three different computer models have been developed and incorporated into the DOS executable program UVAFAS. A multiple power law model is available that can fit a set of fatigue data to a multiple power law equation. A model has also been developed which implements the Wei and Landes linear superposition model, as well as an interpolative model which can be utilized to interpolate trends in fatigue behavior based on changes in loading characteristics (stress ratio, frequency, and hold times).
A model to predict stream water temperature across the conterminous USA
Catalina Segura; Peter Caldwell; Ge Sun; Steve McNulty; Yang Zhang
2014-01-01
Stream water temperature (ts) is a critical water quality parameter for aquatic ecosystems. However, ts records are sparse or nonexistent in many river systems. In this work, we present an empirical model to predict ts at the site scale across the USA. The model, derived using data from 171 reference sites selected from the Geospatial Attributes of Gages for Evaluating...
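The abstract does not give the model form, so as a stand-in the sketch below fits a logistic (Mohseni-type) air-to-stream temperature regression, a widely used empirical form for site-scale ts; the data and coefficients are synthetic assumptions, not the paper's 171-site model.

```python
# Sketch: logistic (Mohseni-type) empirical stream temperature model,
# ts = mu + (alpha - mu) / (1 + exp(gamma * (beta - Ta))),
# fitted by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def mohseni(Ta, alpha, beta, gamma, mu):
    return mu + (alpha - mu) / (1 + np.exp(gamma * (beta - Ta)))

rng = np.random.default_rng(4)
Ta = np.linspace(-5, 35, 120)                       # weekly air temp (degC)
ts = mohseni(Ta, 24.0, 14.0, 0.18, 1.0) + rng.normal(0, 0.5, Ta.size)

p0 = [25.0, 12.0, 0.2, 0.0]                         # initial guesses
params, _ = curve_fit(mohseni, Ta, ts, p0=p0)
print("alpha, beta, gamma, mu:", np.round(params, 2))
print("predicted ts at Ta = 20 degC:", mohseni(20.0, *params))
```

The logistic form is preferred over a straight line because it saturates at both ends, reflecting freezing at low air temperatures and evaporative cooling at high ones.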
Evaluating high temporal and spatial resolution vegetation index for crop yield prediction
USDA-ARS?s Scientific Manuscript database
Remote sensing data have been widely used in estimating crop yield. Remote sensing derived parameters such as Vegetation Index (VI) were used either directly in building empirical models or by assimilating with crop growth models to predict crop yield. The abilities of remote sensing VI in crop yiel...
Hyland, Philip; Shevlin, Mark; Adamson, Gary; Boduszek, Daniel
2014-01-01
This study directly tests a central prediction of rational emotive behaviour therapy (REBT) that has received little empirical attention, regarding the role of core and intermediate beliefs in the development of posttraumatic stress symptoms. A theoretically consistent REBT model of posttraumatic stress disorder (PTSD) was examined using structural equation modelling techniques among a sample of 313 trauma-exposed military and law enforcement personnel. The REBT model of PTSD provided a good fit to the data, χ2 = 599.173, df = 356, p < .001; root mean square error of approximation = .05 (confidence interval = .04-.05); standardized root mean square residual = .04; comparative fit index = .95; Tucker-Lewis index = .95. Results demonstrated that demandingness beliefs indirectly affected the various symptom groups of PTSD through a set of secondary irrational beliefs that include catastrophizing, low frustration tolerance, and depreciation beliefs. Results were consistent with the predictions of REBT theory and provide strong empirical support that the cognitive variables described by REBT theory are critical cognitive constructs in the prediction of PTSD symptomology. © 2013 Wiley Periodicals, Inc.
Granular activated carbon adsorption of MIB in the presence of dissolved organic matter.
Summers, R Scott; Kim, Soo Myung; Shimabuku, Kyle; Chae, Seon-Ha; Corwin, Christopher J
2013-06-15
Based on the results of over twenty laboratory granular activated carbon (GAC) column runs, models were developed and utilized for the prediction of 2-methylisoborneol (MIB) breakthrough behavior at parts per trillion levels and verified with pilot-scale data. The influent MIB concentration was found not to impact the concentration normalized breakthrough. Increasing influent background dissolved organic matter (DOM) concentration was found to systematically decrease the GAC adsorption capacity for MIB. A series of empirical models were developed that related the throughput in bed volumes for a range of MIB breakthrough targets to the influent DOM concentration. The proportional diffusivity (PD) designed rapid small-scale column test (RSSCT) could be directly used to scale-up MIB breakthrough performance below 15% breakthrough. The empirical model to predict the throughput to 50% breakthrough based on the influent DOM concentration served as input to the pore diffusion model (PDM) and well-predicted the MIB breakthrough performance below a 50% breakthrough. The PDM predictions of throughput to 10% breakthrough well simulated the PD-RSSCT and pilot-scale 10% MIB breakthrough. Copyright © 2013 Elsevier Ltd. All rights reserved.
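A hedged sketch of an empirical throughput model of the kind described: bed volumes to a breakthrough target regressed against influent DOM with a power law. The functional form and all numbers are illustrative assumptions, not the paper's fitted model.

```python
# Sketch: empirical model relating GAC throughput (bed volumes to a given
# MIB breakthrough fraction) to influent DOM, fitted with curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def bv_model(doc, a, b):
    return a * doc ** (-b)          # throughput falls as DOM rises

doc = np.array([1.0, 1.5, 2.0, 3.0, 4.0, 5.0])       # mg/L as DOC, synthetic
bv50 = np.array([62000, 41000, 31000, 20500, 15500, 12500.0])

(a, b), _ = curve_fit(bv_model, doc, bv50, p0=[60000, 1.0])
print(f"BV50 ~= {a:.0f} * DOC^-{b:.2f}")
print("predicted BV to 50% breakthrough at 2.5 mg/L DOC:",
      int(bv_model(2.5, a, b)))
```

In the workflow described above, a prediction like this would then feed the pore diffusion model as its capacity input rather than being used directly for design.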
Mathematics for understanding disease.
Bies, R R; Gastonguay, M R; Schwartz, S L
2008-06-01
The application of mathematical models to reflect the organization and activity of biological systems can be viewed as a continuum of purpose. The far left of the continuum is solely the prediction of biological parameter values, wherein an understanding of the underlying biological processes is irrelevant to the purpose. At the far right of the continuum are mathematical models whose purpose is a precise understanding of those biological processes. No models in present use fall at either end of the continuum. Without question, however, the emphasis with regard to purpose has been on prediction, e.g., clinical trial simulation and empirical disease progression modeling. Clearly the model that ultimately incorporates a universal understanding of biological organization will also precisely predict biological events, giving the continuum the logical form of a tautology. Currently that goal lies at an immeasurable distance. Nonetheless, the motive here is to urge movement in the direction of that goal. The distance traveled toward understanding naturally depends upon the nature of the scientific question posed with respect to comprehending and/or predicting a particular disease process. A move toward mathematical models implies a move away from static empirical modeling and toward models that focus on systems biology, wherein modeling entails the systematic study of the complex pattern of organization inherent in biological systems.
Gas Generator Feedline Orifice Sizing Methodology: Effects of Unsteadiness and Non-Axisymmetric Flow
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; West, Jeffrey S.
2011-01-01
Engine LH2 and LO2 gas generator feed assemblies were modeled with computational fluid dynamics (CFD) methods at 100% rated power level, using on-center square- and round-edge orifices. The purpose of the orifices is to regulate the flow of fuel and oxidizer to the gas generator, enabling optimal power supply to the turbine and pump assemblies. The unsteady Reynolds-Averaged Navier-Stokes equations were solved on unstructured grids at second-order spatial and temporal accuracy. The LO2 model was validated against published experimental data and semi-empirical relationships for thin-plate orifices over a range of Reynolds numbers. Predictions for the LO2 square- and round-edge orifices precisely match experiment and semi-empirical formulas, despite complex feedline geometry whereby a portion of the flow from the engine main feedlines travels at a right-angle through a smaller-diameter pipe containing the orifice. Predictions for LH2 square- and round-edge orifice designs match experiment and semi-empirical formulas to varying degrees depending on the semi-empirical formula being evaluated. LO2 mass flow rate through the square-edge orifice is predicted to be 25 percent less than the flow rate budgeted in the original engine balance, which was subsequently modified. LH2 mass flow rate through the square-edge orifice is predicted to be 5 percent greater than the flow rate budgeted in the engine balance. Since CFD predictions for LO2 and LH2 square-edge orifice pressure loss coefficients, K, both agree with published data, the equation for K has been used to define a procedure for orifice sizing.
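For context, orifice sizing of this kind typically rests on the standard incompressible relation m_dot = Cd·A·sqrt(2·rho·ΔP); the sketch below inverts it for diameter. The discharge coefficient and operating values are illustrative assumptions, not the engine-balance numbers.

```python
# Sketch: size a thin-plate orifice from the standard incompressible
# discharge relation, solved for diameter.
import numpy as np

def orifice_diameter(m_dot, rho, dP, Cd=0.61):
    """Orifice diameter (m) passing m_dot (kg/s) at pressure drop dP (Pa)."""
    area = m_dot / (Cd * np.sqrt(2.0 * rho * dP))   # m_dot = Cd*A*sqrt(2*rho*dP)
    return np.sqrt(4.0 * area / np.pi)

rho_lox = 1140.0      # kg/m^3, liquid oxygen (approximate)
d = orifice_diameter(m_dot=2.0, rho=rho_lox, dP=3.5e6)
print(f"required orifice diameter: {d * 1000:.1f} mm")
```

The CFD study above is essentially checking how far the effective Cd of a square- or round-edge orifice in complex feedline geometry departs from handbook values like the 0.61 assumed here.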
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper, analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always possible, as running the calculations takes time, and the specialized software and equipment are not always cheap. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions, or development of new ones, that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
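As one example of the simple analytical class reviewed above, a Plank-type slab estimate of thawing time; the shape factors and property values are illustrative, and real products require the empirical corrections the review discusses.

```python
# Sketch: Plank-type analytical thawing-time estimate for an infinite slab,
# using thawed-phase thermal conductivity. Values are placeholders.
import numpy as np

def plank_thawing_time(D, h, k, rho, L, T_m, T_inf, P=0.5, R=0.125):
    """Thawing time (s) for a slab of thickness D (m).
    h: surface heat transfer coeff (W/m^2 K), k: thawed conductivity (W/m K),
    rho: density (kg/m^3), L: latent heat (J/kg),
    T_m: initial freezing/melting point, T_inf: medium temperature (degC)."""
    dT = T_inf - T_m                  # driving temperature difference
    return rho * L / dT * (P * D / h + R * D ** 2 / k)

t = plank_thawing_time(D=0.05, h=25.0, k=0.5, rho=1000.0,
                       L=250e3, T_m=-1.0, T_inf=15.0)
print(f"estimated thawing time: {t / 3600:.1f} h")
```

The two terms inside the parentheses separate the surface (convective) and internal (conductive) resistances, which is why the estimate degrades when product properties change strongly during thawing, the core difficulty the review identifies.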
Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters
NASA Astrophysics Data System (ADS)
Selyutina, N. S.; Petrov, Yu. V.
2018-02-01
The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical models of Johnson-Cook and Cowper-Symonds. In this paper, expressions for the parameters of the empirical models are derived from the characteristics of the incubation time criterion, and satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models can depend on the strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and provide an effective and convenient equation for determining the yield strength over a wider range of strain rates.
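The two empirical models named above have standard rate-dependence forms; the sketch below evaluates both over a range of strain rates. The material constants are placeholders, not the paper's calibrated values.

```python
# Sketch: strain-rate dependence of yield strength under the Johnson-Cook
# rate term and the Cowper-Symonds model.
import numpy as np

def johnson_cook_yield(rate, A=350e6, C=0.02, rate_ref=1.0):
    """JC quasi-static yield A scaled by the rate term (hardening omitted), Pa."""
    return A * (1.0 + C * np.log(np.maximum(rate / rate_ref, 1e-12)))

def cowper_symonds_yield(rate, sigma0=350e6, D=1.3e4, q=5.0):
    """Cowper-Symonds dynamic yield stress, Pa."""
    return sigma0 * (1.0 + (rate / D) ** (1.0 / q))

for rate in [1e-3, 1.0, 1e2, 1e4]:
    print(f"rate {rate:8.0e} 1/s: "
          f"JC {johnson_cook_yield(rate) / 1e6:6.1f} MPa, "
          f"CS {cowper_symonds_yield(rate) / 1e6:6.1f} MPa")
```

The contrast visible in the output (logarithmic vs. power-law growth) is why fitted parameters for either model tend to hold only near the strain rates used for calibration, the limitation the incubation-time approach is argued to avoid.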
Pearce, Marcus T
2018-05-11
Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception-expectation, emotion, memory, similarity, segmentation, and meter-can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here. © 2018 The Authors. Annals of the New York Academy of Sciences published by Wiley Periodicals, Inc. on behalf of New York Academy of Sciences.
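A drastically reduced sketch of the statistical-learning-plus-prediction idea: a smoothed bigram model over pitches, scoring a melody by its information content in bits. The corpus and smoothing are toy assumptions; IDyOM itself uses much richer variable-order models over multiple musical features.

```python
# Sketch: learn pitch-transition statistics from a tiny "corpus", then
# compute information content (surprise) of a new melody, note by note.
import numpy as np
from collections import Counter, defaultdict

corpus = [[60, 62, 64, 65, 67, 65, 64, 62, 60],
          [60, 64, 67, 64, 60, 62, 64, 62, 60]]     # toy MIDI melodies
alphabet = sorted({p for mel in corpus for p in mel})

counts = defaultdict(Counter)
for mel in corpus:
    for a, b in zip(mel, mel[1:]):
        counts[a][b] += 1

def prob(a, b, k=0.5):
    """Add-k smoothed transition probability P(b | a)."""
    total = sum(counts[a].values()) + k * len(alphabet)
    return (counts[a][b] + k) / total

melody = [60, 62, 64, 67, 60]
ic = [-np.log2(prob(a, b)) for a, b in zip(melody, melody[1:])]
for (a, b), bits in zip(zip(melody, melody[1:]), ic):
    print(f"{a} -> {b}: {bits:.2f} bits")
print("mean information content:", np.mean(ic))
```

Training the same model on two different corpora and comparing the information content each assigns to the same melody gives a crude version of the "cultural distance" measure mentioned above.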
NASA Astrophysics Data System (ADS)
Escobar-Palafox, Gustavo; Gault, Rosemary; Ridgway, Keith
2011-12-01
Shaped Metal Deposition (SMD) is an additive manufacturing process which creates parts layer by layer by weld deposition. In this work, empirical models were developed that predict part geometry (wall thickness and outer diameter) and some metallurgical aspects (i.e. surface texture, portion of finer Widmanstätten microstructure) for the SMD process. The models are based on an orthogonal fractional factorial design of experiments with four factors at two levels. The factors considered were energy level (a relationship between heat source power and the rate of raw material input), step size, programmed diameter and travel speed. The models were validated using previous builds; the prediction error for part geometry was under 11%. Several relationships between the factors and responses were identified. Current had a significant effect on wall thickness; thickness increases with increasing current. Programmed diameter had a significant effect on percentage of shrinkage, which decreased with increasing component size. Surface finish decreased with decreasing step size and current.
Jones, Rachael M; Simmons, Catherine; Boelter, Fred
2011-06-01
Drywall finishing is a dusty construction activity. We describe a mathematical model that predicts the time-weighted average concentration of respirable and total dusts in the personal breathing zone of the sander and in the area surrounding joint compound sanding activities. The model represents spatial variation in dust concentrations using two zones, and temporal variation using an exponential function. Interzone flux and the relationships between respirable and total dusts are described using empirical factors. For model evaluation, we measured dust concentrations in two field studies, including three workers from a commercial contracting crew and one unskilled worker. Data from the field studies confirm that the model assumptions and parameterization are reasonable and thus validate the modeling approach. Predicted dust C(twa) were in concordance with measured values for the contracting crew, but underestimated measured values for the unskilled worker. Further characterization of skill-related exposure factors is indicated.
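A sketch of the standard two-zone (near-field/far-field) formulation this kind of model builds on, solved over a sanding task to obtain time-weighted averages; the emission rate, zone volumes and flows are illustrative assumptions, not the paper's fitted factors.

```python
# Sketch: two-zone dust model. A near-field box around the sander exchanges
# air with the far-field room, which is ventilated at rate Q.
import numpy as np
from scipy.integrate import solve_ivp

G = 5.0                  # emission rate into the near field (mg/min)
beta = 5.0               # interzone airflow (m^3/min)
Q = 20.0                 # room supply/exhaust flow (m^3/min)
V_N, V_F = 1.0, 100.0    # near- and far-field volumes (m^3)

def dCdt(t, C):
    Cn, Cf = C
    return [(G + beta * (Cf - Cn)) / V_N,
            (beta * (Cn - Cf) - Q * Cf) / V_F]

T = 60.0                 # task duration (min)
sol = solve_ivp(dCdt, (0.0, T), [0.0, 0.0], dense_output=True)
t = np.linspace(0.0, T, 601)
Cn, Cf = sol.sol(t)
print("near-field TWA (mg/m^3):", np.trapz(Cn, t) / T)
print("far-field  TWA (mg/m^3):", np.trapz(Cf, t) / T)
```

The exponential time dependence mentioned in the abstract emerges naturally here: both zone concentrations approach steady state as sums of decaying exponentials set by beta, Q and the volumes.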
A Four-Stage Hybrid Model for Hydrological Time Series Forecasting
Di, Chongli; Yang, Xiaohua; Wang, Xiaochao
2014-01-01
Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To address this difficulty and improve prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, component prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noise in the hydrological time series. Then, an improved version of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition, and hybrid models based on other methods, such as wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulty of forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.
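A minimal sketch of the four-stage 'denoising, decomposition and ensemble' pipeline follows, assuming the PyEMD package for EMD/EEMD and substituting a ridge regressor for the paper's RBF and linear neural networks to keep the example self-contained; the data are synthetic.

```python
# Sketch of the four-stage pipeline: denoise (EMD), decompose (EEMD),
# predict each component, ensemble the component forecasts.
import numpy as np
from PyEMD import EMD, EEMD
from sklearn.linear_model import Ridge

def lag_matrix(x, p=4):
    """Build (X, y) pairs from a series using p lagged values as features."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

rng = np.random.default_rng(0)
flow = np.sin(np.linspace(0, 20, 400)) + 0.3 * rng.standard_normal(400)

# Stage 1: denoise -- drop the first (highest-frequency) EMD mode.
imfs = EMD().emd(flow)
denoised = flow - imfs[0]

# Stage 2: decompose the denoised series with EEMD.
components = EEMD(trials=50).eemd(denoised)   # IMFs plus residue

# Stage 3: predict each component one step ahead with its own model.
preds = []
for c in components:
    X, y = lag_matrix(c)
    model = Ridge().fit(X[:-1], y[:-1])
    preds.append(model.predict(X[-1:])[0])

# Stage 4: ensemble -- here a plain sum of component forecasts
# (the paper learns this combination with a linear neural network).
print("one-step forecast:", sum(preds))
```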
A computational approach to compare regression modelling strategies in prediction research.
Pajouheshnia, Romin; Pestman, Wiebe R; Teerenstra, Steven; Groenwold, Rolf H H
2016-08-25
It is often unclear which approach to fitting, assessing and adjusting a model will yield the most accurate prediction model. We present an extension of an approach for comparing modelling strategies in linear regression to the setting of logistic regression and demonstrate its application in clinical prediction research. A framework for comparing logistic regression modelling strategies by their likelihoods was formulated using a wrapper approach. Five different strategies for modelling, including simple shrinkage methods, were compared in four empirical data sets to illustrate the concept of a priori strategy comparison. Simulations were performed in both randomly generated data and empirical data to investigate the influence of data characteristics on strategy performance. We applied the comparison framework in a case study setting. Optimal strategies were selected based on the results of a priori comparisons in a clinical data set, and the performance of models built according to each strategy was assessed using the Brier score and calibration plots. The performance of modelling strategies was highly dependent on the characteristics of the development data in both linear and logistic regression settings. A priori comparisons in four empirical data sets found that no strategy consistently outperformed the others. The percentage of times that a model adjustment strategy outperformed a logistic model ranged from 3.9% to 94.9%, depending on the strategy and data set. However, in our case study setting the a priori selection of optimal methods did not result in detectable improvement in model performance when assessed in an external data set. The performance of prediction modelling strategies is a data-dependent process and can be highly variable between data sets within the same clinical domain. A priori strategy comparison can be used to determine an optimal logistic regression modelling strategy for a given data set before selecting a final modelling approach.
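The core idea of a priori strategy comparison can be sketched as follows; the two candidate strategies (a nearly unpenalized fit versus an L2-shrunk fit) are illustrative stand-ins for the five compared in the paper, and the data are simulated.

```python
# Hedged sketch: compare candidate logistic regression strategies by
# out-of-sample log-likelihood and Brier score before committing to one.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=300, n_features=10, random_state=1)

strategies = {
    "near-maximum likelihood": LogisticRegression(C=1e6, max_iter=1000),
    "ridge shrinkage":         LogisticRegression(C=0.1, max_iter=1000),
}

for name, model in strategies.items():
    p = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
    p = np.clip(p, 1e-12, 1 - 1e-12)                  # guard the log terms
    loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    brier = np.mean((p - y) ** 2)
    print(f"{name}: log-likelihood {loglik:.1f}, Brier {brier:.3f}")
```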
NASA Astrophysics Data System (ADS)
Iwata, T.; Asano, K.; Sekiguchi, H.
2011-12-01
We propose a prototype procedure for constructing source models for strong motion prediction during intraslab earthquakes, based on the characterized source model (Irikura and Miyake, 2011). The key is the characterized source model, which is based on empirical scaling relationships for intraslab earthquakes and involves the correspondence between the SMGA (strong motion generation area, Miyake et al., 2003) and the asperity (large slip area). Iwata and Asano (2011) obtained empirical relationships of the rupture area (S) and the total asperity area (Sa) to the seismic moment (Mo), assuming a 2/3-power dependency of S and Sa on Mo: S (km^2) = 6.57 × 10^-11 × Mo^(2/3) (1) and Sa (km^2) = 1.04 × 10^-11 × Mo^(2/3) (2), with Mo in N m. Iwata and Asano (2011) also pointed out that the position and size of the SMGA approximately correspond to the asperity area for several intraslab events. Based on these empirical relationships, we give a procedure for constructing source models of intraslab earthquakes for strong motion prediction: [1] Give the seismic moment, Mo. [2] Obtain the total rupture area and the total asperity area according to the empirical scaling relationships between S, Sa, and Mo given by Iwata and Asano (2011). [3] Assume square rupture and asperity areas. [4] Assume a source mechanism identical to that of small events in the source region. [5] Prepare multiple scenarios varying the number of asperities and the rupture starting points. We apply this procedure by simulating strong ground motions for several observed events to confirm the methodology.
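The quoted scaling relationships are easy to evaluate directly; the sketch below implements equations (1) and (2) and applies them to an illustrative Mw 7.0 event, converting Mw to Mo with the standard moment-magnitude definition.

```python
# Direct numerical check of the empirical scaling relationships quoted above
# (Iwata and Asano, 2011). The example event magnitude is illustrative.
def rupture_area_km2(Mo_Nm: float) -> float:
    """S (km^2) = 6.57e-11 * Mo^(2/3), Mo in N*m  -- equation (1)."""
    return 6.57e-11 * Mo_Nm ** (2.0 / 3.0)

def asperity_area_km2(Mo_Nm: float) -> float:
    """Sa (km^2) = 1.04e-11 * Mo^(2/3), Mo in N*m -- equation (2)."""
    return 1.04e-11 * Mo_Nm ** (2.0 / 3.0)

# Example: a Mw 7.0 intraslab event; Mo = 10^(1.5*Mw + 9.1) N*m.
Mo = 10 ** (1.5 * 7.0 + 9.1)
print(f"S  = {rupture_area_km2(Mo):.0f} km^2")   # total rupture area
print(f"Sa = {asperity_area_km2(Mo):.0f} km^2")  # total asperity area
```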
DOT National Transportation Integrated Search
2007-08-01
The objective of this research study was to develop performance characteristics or variables (e.g., ride quality, rutting, fatigue cracking, transverse cracking) of flexible pavements in Montana, and to use these characteristics in the implementa...
Evaluating the intersection of a regional wildlife connectivity network with highways
Samuel A. Cushman; Jesse S. Lewis; Erin L. Landguth
2013-01-01
Reliable predictions of regional-scale population connectivity are needed to prioritize conservation actions. However, there have been few examples of regional connectivity models that are empirically derived and validated. The central goals of this paper were to (1) evaluate the effectiveness of factorial least cost path corridor mapping on an empirical...
NASA Astrophysics Data System (ADS)
Anderson, O. Roger
The rate at which information is processed during science learning, and the efficiency with which the learner mobilizes relevant information in long-term memory to help transfer newly acquired information to stable storage in long-term memory, are fundamental aspects of science content acquisition. These cognitive processes, moreover, may be substantially related in tempo and quality of organization to the efficiency of higher thought processes such as divergent thinking and problem-solving ability that characterize scientific thought. As a contribution to our quantitative understanding of these fundamental information processes, a mathematical model of information acquisition is presented and empirically evaluated against evidence obtained from experimental studies of science content acquisition. Computer-based models are used to simulate variations in learning parameters and to generate the theoretical predictions to be empirically tested. Initial tests of the predictive accuracy of the model show close agreement between predicted and actual mean recall scores in short-term learning tasks. Implications of the model for human information acquisition and possible future research are discussed in the context of the unique theoretical framework of the model.
Fire risk in San Diego County, California: A weighted Bayesian model approach
Kolden, Crystal A.; Weigel, Timothy J.
2007-01-01
Fire risk models are widely utilized to mitigate wildfire hazards, but models are often based on expert opinions of less understood fire-ignition and spread processes. In this study, we used an empirically derived weights-of-evidence model to assess what factors produce fire ignitions east of San Diego, California. We created and validated a dynamic model of fire-ignition risk based on land characteristics and existing fire-ignition history data, and predicted ignition risk for a future urbanization scenario. We then combined our empirical ignition-risk model with a fuzzy fire behavior-risk model developed by wildfire experts to create a hybrid model of overall fire risk. We found that roads influence fire ignitions and that future growth will increase risk in new rural development areas. We conclude that empirically derived risk models and hybrid models offer an alternative method to assess current and future fire risk based on management actions.
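For readers unfamiliar with the weights-of-evidence technique named above, the sketch below computes the weights and a posterior ignition probability for one binary evidence layer (say, proximity to roads); the contingency counts are invented for illustration.

```python
# Minimal weights-of-evidence sketch for binary evidence vs. fire ignition.
import math

# Contingency counts: cells with/without ignitions, inside/outside the
# evidence class (assumed numbers, not the study's data).
n_fire_evid, n_fire_noevid = 120, 80        # ignition cells
n_nofire_evid, n_nofire_noevid = 900, 3900  # non-ignition cells

p_e_fire = n_fire_evid / (n_fire_evid + n_fire_noevid)
p_e_nofire = n_nofire_evid / (n_nofire_evid + n_nofire_noevid)

w_plus = math.log(p_e_fire / p_e_nofire)                # evidence present
w_minus = math.log((1 - p_e_fire) / (1 - p_e_nofire))   # evidence absent
print(f"W+ = {w_plus:.2f}, W- = {w_minus:.2f}, contrast = {w_plus - w_minus:.2f}")

# Posterior odds of ignition for a cell inside the evidence class:
prior_odds = (n_fire_evid + n_fire_noevid) / (n_nofire_evid + n_nofire_noevid)
post_odds = prior_odds * math.exp(w_plus)
print(f"posterior ignition probability: {post_odds / (1 + post_odds):.3f}")
```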
Zhang, Xike; Zhang, Qiuwen; Zhang, Gui; Nie, Zhiping; Gui, Zifan; Que, Huafei
2018-01-01
Daily land surface temperature (LST) forecasting is of great significance for application in climate-related, agricultural, eco-environmental, or industrial studies. Hybrid data-driven prediction models using Ensemble Empirical Mode Decomposition (EEMD) coupled with Machine Learning (ML) algorithms are useful for achieving these purposes because they can reduce the difficulty of modeling, require less historical data, are easy to develop, and are less complex than physical models. In this article, a computationally simple, less data-intensive, fast and efficient hybrid data-driven model, the EEMD Long Short-Term Memory (LSTM) neural network (EEMD-LSTM), is proposed to reduce the difficulty of modeling and to improve prediction accuracy. The daily LST data series from the Mapoling and Zhijiang stations in the Dongting Lake basin, central south China, from 1 January 2014 to 31 December 2016 is used as a case study. The EEMD is first employed to decompose the original daily LST data series into many Intrinsic Mode Functions (IMFs) and a single residue item. Then, the Partial Autocorrelation Function (PACF) is used to obtain the number of input data sample points for the LSTM models. Next, the LSTM models are constructed to predict the decompositions. All the predicted results of the decompositions are aggregated as the final daily LST. Finally, the prediction performance of the hybrid EEMD-LSTM model is assessed in terms of the Mean Square Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Root Mean Square Error (RMSE), Pearson Correlation Coefficient (CC) and Nash-Sutcliffe Coefficient of Efficiency (NSCE). To validate the hybrid data-driven model, the hybrid EEMD-LSTM model is compared with the Recurrent Neural Network (RNN), LSTM and Empirical Mode Decomposition (EMD) coupled with RNN, EMD-LSTM and EEMD-RNN models, and the comparison results demonstrate that the hybrid EEMD-LSTM model performs better than the other five models. The scatterplots of the predicted results of the six models versus the original daily LST data series show that the hybrid EEMD-LSTM model is superior to the other five models. It is concluded that the proposed hybrid EEMD-LSTM model is a suitable tool for temperature forecasting.
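A compact sketch of the EEMD-PACF-LSTM scheme follows, assuming the PyEMD and statsmodels packages and using PyTorch for the LSTM (the paper's framework is not stated); the LST series is synthetic and the PACF-based lag selection is one plausible reading of the procedure.

```python
# Sketch: decompose with EEMD, pick the input window from the PACF, train
# one small LSTM per component, and sum the component forecasts.
import numpy as np
import torch
import torch.nn as nn
from PyEMD import EEMD
from statsmodels.tsa.stattools import pacf

rng = np.random.default_rng(0)
lst = 20 + 10 * np.sin(np.linspace(0, 12 * np.pi, 1000)) + rng.normal(0, 1, 1000)

def fit_forecast(component, n_lags):
    """Train a small LSTM on lagged windows and return a one-step forecast."""
    X = np.stack([component[i:i + n_lags] for i in range(len(component) - n_lags)])
    y = component[n_lags:]
    X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)  # (N, lags, 1)
    y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)
    lstm, head = nn.LSTM(1, 16, batch_first=True), nn.Linear(16, 1)
    opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=1e-2)
    for _ in range(200):
        out, _ = lstm(X_t)
        loss = nn.functional.mse_loss(head(out[:, -1]), y_t)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        last = torch.tensor(component[-n_lags:], dtype=torch.float32).view(1, -1, 1)
        return head(lstm(last)[0][:, -1]).item()

forecast = 0.0
for comp in EEMD(trials=50).eemd(lst):          # IMFs plus residue
    # choose lags as the last PACF value outside a crude 5% significance band
    p = pacf(comp, nlags=20)
    sig = np.abs(p[1:]) > (1.96 / np.sqrt(len(comp)))
    n_lags = int(np.max(np.nonzero(sig)) + 1) if sig.any() else 1
    forecast += fit_forecast(comp, n_lags)
print("next-day LST forecast:", forecast)
```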
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions.
Fox, Naomi J; Marion, Glenn; Davidson, Ross S; White, Piran C L; Hutchings, Michael R
2012-03-06
Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed.
Xu, Jia; Zhang, Nan; Han, Bin; You, Yan; Zhou, Jian; Zhang, Jiefeng; Niu, Can; Liu, Yating; He, Fei; Ding, Xiao; Bai, Zhipeng
2016-12-01
Using central site measurement data to predict personal exposure to particulate matter (PM) is challenging, because people spend most of their time indoors and the ambient contribution to personal exposure is subject to infiltration conditions affected by many factors. Efforts to assess and predict exposure on the basis of associated indoor/outdoor and central site monitoring have been limited in China. This study collected daily personal exposure, residential indoor/outdoor and community central site PM filter samples in an elderly community during the non-heating and heating periods in 2009 in Tianjin, China. Based on the chemical analysis results of particulate species, mass concentrations of the particulate compounds were estimated and used to reconstruct the PM mass for mass balance analysis. The infiltration factors (F_inf) of particulate compounds were estimated using both robust regression and mixed effect regression methods, and the exposure factor (F_pex) was further estimated according to participants' time-activity patterns. An empirical exposure model was then developed to predict personal exposure to PM and particulate compounds as the sum of ambient and non-ambient contributions. Results showed that PM mass observed during the heating period could be well represented through chemical mass reconstruction, because the unidentified mass was minimal. Excluding the high observations (>300 μg/m^3), this empirical exposure model performed well for PM and elemental carbon (EC), which have few indoor sources. These results support the use of F_pex as an indicator for ambient contribution predictions, and the use of the empirical non-ambient contribution to assess exposure to particulate compounds.
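The structure of such an empirical exposure model can be sketched as follows: a robust regression of indoor on outdoor concentrations estimates F_inf, time-activity weighting turns it into F_pex, and personal exposure is predicted as ambient plus non-ambient contributions. All numbers below are assumptions, not the study's data.

```python
# Hedged sketch of the empirical exposure model structure described above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
outdoor = rng.uniform(40, 250, 60)                 # central-site PM, ug/m^3
indoor = 0.55 * outdoor + rng.normal(0, 8, 60)     # synthetic, true F_inf ~ 0.55

# Robust regression: indoor = intercept (indoor sources) + F_inf * outdoor.
rlm = sm.RLM(indoor, sm.add_constant(outdoor), M=sm.robust.norms.HuberT()).fit()
indoor_source, f_inf = rlm.params
print(f"estimated F_inf = {f_inf:.2f}")

# Exposure factor from time-activity: fraction of time indoors vs outdoors.
t_in, t_out = 0.85, 0.15
f_pex = t_in * f_inf + t_out * 1.0

ambient = 120.0                                    # central-site level (assumed)
nonambient = 12.0                                  # non-ambient term (assumed)
print(f"predicted personal exposure = {f_pex * ambient + nonambient:.0f} ug/m^3")
```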
Styron, J D; Cooper, G W; Ruiz, C L; Hahn, K D; Chandler, G A; Nelson, A J; Torres, J A; McWatters, B R; Carpenter, Ken; Bonura, M A
2014-11-01
A methodology for obtaining empirical curves relating absolute measured scintillation light output to beta energy deposited is presented. Output signals were measured from thin plastic scintillator using NIST-traceable beta and gamma sources, and MCNP5 was used to model the energy deposition from each source. Combining the experimental and calculated results gives the desired empirical relationships. To validate the approach, the sensitivity of a beryllium/scintillator-layer neutron activation detector was predicted, and the detector was then exposed to a known neutron fluence from a deuterium-deuterium (DD) fusion plasma. The predicted and measured sensitivities were in statistical agreement.
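The calibration step amounts to a simple fit of measured light output against modeled energy deposition; a minimal sketch, with placeholder numbers and an assumed linear response, follows.

```python
# Minimal sketch of the calibration idea: pair measured light output from
# traceable sources with modeled energy deposition and fit an empirical
# response curve. All values are placeholders; linearity is an assumption.
import numpy as np

energy_dep_mev = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # from transport model
light_out_pC = np.array([1.1, 2.3, 4.4, 8.9, 17.6])         # measured charge

slope, intercept = np.polyfit(energy_dep_mev, light_out_pC, 1)
print(f"light output ~ {slope:.1f} pC/MeV + {intercept:.2f} pC")

# Predict detector response for a new computed energy deposition:
print("predicted output for 0.3 MeV deposited:", slope * 0.3 + intercept, "pC")
```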
Tourism forecasting using modified empirical mode decomposition and group method of data handling
NASA Astrophysics Data System (ADS)
Yahya, N. A.; Samsudin, R.; Shabri, A.
2017-09-01
In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.
Prediction of Agglomeration, Fouling, and Corrosion Tendency of Fuels in CFB Co-Combustion
NASA Astrophysics Data System (ADS)
Barišć, Vesna; Zabetta, Edgardo Coda; Sarkki, Juha
Prediction of the agglomeration, fouling, and corrosion tendency of fuels is essential to the design of any CFB boiler. Over the years, tools have been successfully developed at Foster Wheeler to help with such predictions for the most common commercial fuels. However, changes in the fuel market and the ever-growing demand for co-combustion capabilities pose a continuous need for development. This paper presents results from recently upgraded models used at Foster Wheeler to predict the agglomeration, fouling, and corrosion tendency of a variety of fuels and mixtures. The models, the subject of this paper, are semi-empirical computer tools that combine the theoretical basics of agglomeration/fouling/corrosion phenomena with empirical correlations. The correlations are derived from Foster Wheeler's experience in fluidized beds, including nearly 10,000 fuel samples and over 1,000 tests in about 150 CFB units. In these models, fuels are evaluated based on their classification and their chemical and physical properties from standard analyses (proximate, ultimate, fuel ash composition, etc.) alongside Foster Wheeler's own characterization methods. Mixtures are then evaluated taking into account the component fuels. This paper presents the predictive capabilities of the agglomeration/fouling/corrosion probability models for selected fuels and mixtures fired at full scale. The selected fuels include coals and different types of biomass. The models are capable of predicting the behavior of most fuels and mixtures, but also offer possibilities for further improvement.
Empirical Prediction of Aircraft Landing Gear Noise
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Guo, Yue-Ping
2005-01-01
This report documents a semi-empirical/semi-analytical method for landing gear noise prediction. The method is based on scaling laws of the theory of aerodynamic noise generation and correlation of these scaling laws with current available test data. The former gives the method a sound theoretical foundation and the latter quantitatively determines the relations between the parameters of the landing gear assembly and the far field noise, enabling practical predictions of aircraft landing gear noise, both for parametric trends and for absolute noise levels. The prediction model is validated by wind tunnel test data for an isolated Boeing 737 landing gear and by flight data for the Boeing 777 airplane. In both cases, the predictions agree well with data, both in parametric trends and in absolute noise levels.
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Liang, Cui
2007-01-01
The quantum finance pricing formulas for coupon bond options and swaptions derived by Baaquie [Phys. Rev. E 75, 016703 (2006)] are reviewed. We empirically study the swaption market and propose an efficient computational procedure for analyzing the data. Empirical results of the swaption price, volatility, and swaption correlation are compared with the predictions of quantum finance. The quantum finance model generates the market swaption price to over 90% accuracy.
Development and Assessment of a New Empirical Model for Predicting Full Creep Curves
Gray, Veronica; Whittaker, Mark
2015-01-01
This paper details the development and assessment of a new empirical creep model that belongs to the limited ranks of models reproducing full creep curves. The important features of the model are that it is fully standardised and universally applicable. By standardising, the user no longer chooses functions but rather fits one set of constants only. Testing it on seven contrasting materials and reproducing 181 creep curves, we demonstrate its universality. The new model and Theta Projection curves are compared to one another using an assessment tool developed within this paper.
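Since the new model's standardized constants are not given in this abstract, the sketch below instead fits the comparator named above, the classical Theta Projection form, to a synthetic full creep curve; it illustrates the general fit-one-set-of-constants workflow.

```python
# Sketch: fit a full creep curve with the classical Theta Projection form
# strain(t) = t1*(1 - exp(-t2*t)) + t3*(exp(t4*t) - 1). Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def theta_projection(t, t1, t2, t3, t4):
    """Primary (decelerating) plus tertiary (accelerating) creep terms."""
    return t1 * (1.0 - np.exp(-t2 * t)) + t3 * (np.exp(t4 * t) - 1.0)

t = np.linspace(0.0, 1000.0, 200)                    # hours
true = theta_projection(t, 0.02, 0.01, 0.001, 0.004)
strain = true + np.random.default_rng(3).normal(0, 2e-4, t.size)

popt, _ = curve_fit(theta_projection, t, strain,
                    p0=[0.01, 0.01, 0.001, 0.001],
                    bounds=(0.0, [1.0, 1.0, 0.1, 0.01]))
print("fitted theta constants:", popt)
```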
NASA Astrophysics Data System (ADS)
Sergeeva, Tatiana F.; Moshkova, Albina N.; Erlykina, Elena I.; Khvatova, Elena M.
2016-04-01
Creatine kinase is a key enzyme of energy metabolism in the brain. There are known cytoplasmic and mitochondrial creatine kinase isoenzymes. Mitochondrial creatine kinase exists as a mixture of two oligomeric forms, dimer and octamer. The aim of this investigation was to study the catalytic properties of cytoplasmic and mitochondrial creatine kinase and the use of the method of empirical dependences for possible prediction of the activity of these enzymes in cerebral ischemia. Ischemia was revealed to be accompanied by changes in the activity of the creatine kinase isoenzymes and in the oligomeric state of the mitochondrial isoform. Multiple regression models were constructed that permit the activity of the creatine kinase system in cerebral ischemia to be studied by calculation. The mathematical method of empirical dependences can therefore be applied for estimation and prediction of the functional state of the brain from the activity of creatine kinase isoenzymes in cerebral ischemia.
Predictive Modeling of Risk Associated with Temperature Extremes over Continental US
NASA Astrophysics Data System (ADS)
Kravtsov, S.; Roebber, P.; Brazauskas, V.
2016-12-01
We build a statistically accurate, essentially bias-free empirical emulator of atmospheric surface temperature and apply it to meteorological risk assessment over the domain of the continental US. The resulting prediction scheme achieves an order-of-magnitude or larger gain in numerical efficiency compared with schemes based on high-resolution dynamical atmospheric models, leading to unprecedented accuracy of the estimated risk distributions. The empirical model construction methodology is based on our earlier work, but is further modified to account for the influence of large-scale, global climate change on regional US weather and climate. The resulting estimates of the time-dependent, spatially extended probability of temperature extremes over the simulation period can be used as a risk management tool by insurance companies and regulatory governmental agencies.
NASA Technical Reports Server (NTRS)
Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)
2000-01-01
This report describes work performed on Contract NAS3-27720 AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise, and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semi-empirical, analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and a computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.
DOT National Transportation Integrated Search
2017-02-08
The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...
High-Throughput Physiologically Based Toxicokinetic Models for ToxCast Chemicals
Physiologically based toxicokinetic (PBTK) models aid in predicting exposure doses needed to create tissue concentrations equivalent to those identified as bioactive by ToxCast. We have implemented four empirical and physiologically-based toxicokinetic (TK) models within a new R ...
Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting
NASA Astrophysics Data System (ADS)
Zhang, Ningning; Lin, Aijing; Shang, Pengjian
2017-07-01
In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms find increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal characteristic information of the signal with much accuracy as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, is therefore used to resolve the weaknesses of EMD by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed ensemble empirical mode decomposition combined with multidimensional k-nearest neighbor model (EEMD-MKNN) has high predictive precision for short-term forecasting. Moreover, we extend the methodology to two dimensions to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than the EMD-KNN and KNN methods and ARIMA.
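The k-nearest-neighbor forecasting stage of such a scheme can be sketched compactly; the example below forecasts closing and high prices jointly from delay-vector neighbors, with synthetic data and the EEMD preprocessing omitted.

```python
# Minimal two-dimensional k-NN time-series forecast: embed recent history
# as delay vectors, find the k most similar past patterns, and average
# their successors. Data are synthetic random walks.
import numpy as np

rng = np.random.default_rng(4)
close = np.cumsum(rng.normal(0, 1, 500))
high = close + np.abs(rng.normal(0.5, 0.2, 500))
series = np.column_stack([close, high])                 # shape (T, 2)

m, k = 5, 10                                            # embedding length, neighbors
patterns = np.stack([series[i:i + m].ravel() for i in range(len(series) - m)])
successors = series[m:]                                 # row following each pattern

query = series[-m:].ravel()                             # most recent m rows
dist = np.linalg.norm(patterns - query, axis=1)
nearest = np.argsort(dist)[:k]

forecast = successors[nearest].mean(axis=0)             # average neighbor successors
print(f"next close ~ {forecast[0]:.2f}, next high ~ {forecast[1]:.2f}")
```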
Predicting Pilot Error in Nextgen: Pilot Performance Modeling and Validation Efforts
NASA Technical Reports Server (NTRS)
Wickens, Christopher; Sebok, Angelia; Gore, Brian; Hooey, Becky
2012-01-01
We review 25 articles presenting 5 general classes of computational models to predict pilot error. This more targeted review is placed within the context of the broader review of computational models of pilot cognition and performance, including such aspects as models of situation awareness or pilot-automation interaction. Particular emphasis is placed on the degree of validation of such models against empirical pilot data, and the relevance of the modeling and validation efforts to Next Gen technology and procedures.
Predictive Model for Jet Engine Test Cell Opacity
1981-09-30
precipitators or venturi scrubbers to treat the exhaust emissions. These predictions indicate that control devices larger than the test cells would have... made to see under what conditions electrostatic precipitators or venturi scrubbers might satisfy opacity regulations. ... high-energy venturi scrubber. As with the ESP model, this also required an empirical factor (f) to make the model agree approximately with actual data
Predictive performance models and multiple task performance
NASA Technical Reports Server (NTRS)
Wickens, Christopher D.; Larish, Inge; Contorer, Aaron
1989-01-01
Five models that predict how performance of multiple tasks will interact in complex task scenarios are discussed. The models are shown in terms of the assumptions they make about human operator divided attention. The different assumptions about attention are then empirically validated in a multitask helicopter flight simulation. It is concluded from this simulation that the most important assumption relates to the coding of demand level of different component tasks.
A Comparison of Combustor-Noise Models
NASA Technical Reports Server (NTRS)
Hultgren, Lennart S.
2012-01-01
The present status of combustor-noise prediction in the NASA Aircraft Noise Prediction Program (ANOPP) [1] for current-generation (N) turbofan engines is summarized. Several semi-empirical models for turbofan combustor noise are discussed, including best methods for near-term updates to ANOPP. An alternate turbine-transmission factor [2] will appear as a user-selectable option in the combustor-noise module GECOR in the next release. The three-spectrum model proposed by Stone et al. [3] for GE turbofan-engine combustor noise is discussed and compared with ANOPP predictions for several relevant cases. Based on the results presented herein and in their report [3], it is recommended that the application of this fully empirical combustor-noise prediction method be limited to situations involving only General-Electric turbofan engines. Long-term needs and challenges for the N+1 through N+3 time frame are discussed. Because the impact of other propulsion-noise sources continues to be reduced due to turbofan design trends, advances in noise-mitigation techniques, and expected aircraft configuration changes, the relative importance of core noise is expected to greatly increase in the future. The noise-source structure in the combustor, including the indirect one, and the effects of the propagation path through the engine and exhaust nozzle need to be better understood. In particular, the acoustic consequences of the expected trends toward smaller, highly efficient gas-generator cores and low-emission fuel-flexible combustors need to be fully investigated, since future designs are quite likely to fall outside the parameter space of existing (semi-empirical) prediction tools.
A Hybrid Ground-Motion Prediction Equation for Earthquakes in Western Alberta
NASA Astrophysics Data System (ADS)
Spriggs, N.; Yenier, E.; Law, A.; Moores, A. O.
2015-12-01
Estimation of ground-motion amplitudes that may be produced by future earthquakes constitutes the foundation of seismic hazard assessment and earthquake-resistant structural design. This is typically done by using a prediction equation that quantifies amplitudes as a function of key seismological variables such as magnitude, distance and site condition. In this study, we develop a hybrid empirical prediction equation for earthquakes in western Alberta, where evaluation of seismic hazard associated with induced seismicity is of particular interest. We use peak ground motions and response spectra from recorded seismic events to model the regional source and attenuation attributes. The available empirical data is limited in the magnitude range of engineering interest (M>4). Therefore, we combine empirical data with a simulation-based model in order to obtain seismologically informed predictions for moderate-to-large magnitude events. The methodology is two-fold. First, we investigate the shape of geometrical spreading in Alberta. We supplement the seismic data with ground motions obtained from mining/quarry blasts, in order to gain insights into the regional attenuation over a wide distance range. A comparison of ground-motion amplitudes for earthquakes and mining/quarry blasts show that both event types decay at similar rates with distance and demonstrate a significant Moho-bounce effect. In the second stage, we calibrate the source and attenuation parameters of a simulation-based prediction equation to match the available amplitude data from seismic events. We model the geometrical spreading using a trilinear function with attenuation rates obtained from the first stage, and calculate coefficients of anelastic attenuation and site amplification via regression analysis. This provides a hybrid ground-motion prediction equation that is calibrated for observed motions in western Alberta and is applicable to moderate-to-large magnitude events.
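As an illustration of the trilinear geometrical-spreading form mentioned above, the sketch below evaluates a piecewise log-distance decay with a flattened mid-range segment of the kind produced by the Moho bounce; the hinge distances and decay rates are placeholders, not the calibrated Alberta values.

```python
# Illustrative trilinear geometrical-spreading function: steep near-source
# decay, a flat (Moho-bounce) transition zone, then moderate far decay.
import numpy as np

def geometric_spreading(r_km, r1=50.0, r2=140.0, b1=-1.3, b2=0.2, b3=-0.5):
    """log10 amplitude relative to 1 km; hinge distances/rates are placeholders."""
    r = np.atleast_1d(r_km).astype(float)
    g = np.where(r <= r1, b1 * np.log10(r), 0.0)
    g = np.where((r > r1) & (r <= r2),
                 b1 * np.log10(r1) + b2 * np.log10(r / r1), g)
    g = np.where(r > r2,
                 b1 * np.log10(r1) + b2 * np.log10(r2 / r1)
                 + b3 * np.log10(r / r2), g)
    return g

print(geometric_spreading([10.0, 60.0, 200.0]))
```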
Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer
2006-01-01
During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resulting ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of the strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the data available and to deal with the uncertainties intrinsic to the back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is used to demonstrate the ability of the resulting semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.
Climate change and the eco-hydrology of fire: Will area burned increase in a warming western USA?
Donald McKenzie; Jeremy S. Littell
2017-01-01
Wildfire area is predicted to increase with global warming. Empirical statistical models and process-based simulations agree almost universally. The key relationship for this unanimity, observed at multiple spatial and temporal scales, is between drought and fire. Predictive models often focus on ecosystems in which this relationship appears to be particularly strong,...
Estimation of the viscosities of liquid binary alloys
NASA Astrophysics Data System (ADS)
Wu, Min; Su, Xiang-Yu
2018-01-01
As one of the most important physical and chemical properties, viscosity plays a critical role in physics and materials science as a key parameter for quantitatively understanding fluid transport processes and reaction kinetics in metallurgical process design. Both experimental measurement and theoretical treatment of liquid-metal viscosity are difficult. Today, many empirical and semi-empirical models are available with which to evaluate the viscosity of liquid metals and alloys. However, the mixing-energy parameter in these models is not easily determined, and most predictive models have consequently seen limited application. In the present study, a new thermodynamic parameter ΔG is proposed to predict liquid alloy viscosity. The prediction equation depends on basic physical and thermodynamic parameters, namely density, melting temperature, absolute atomic mass, electronegativity, electron density, molar volume, Pauling radius, and mixing enthalpy. Our results show that liquid alloy viscosities predicted using the proposed model are closely in line with experimental values. In addition, if the component radius difference is greater than 0.03 nm at a given temperature, the atomic size factor has a significant effect on the interaction of the binary liquid metal atoms. The proposed thermodynamic parameter ΔG also facilitates the study of other physical properties of liquid metals.
Increasing the relevance of GCM simulations for Climate Services
NASA Astrophysics Data System (ADS)
Smith, L. A.; Suckling, E.
2012-12-01
The design and interpretation of model simulations for climate services differ significantly from experimental design for the advancement of the fundamental research on predictability that underpins it. Climate services consider the best sources of information available today; this calls for a frank evaluation of model skill against statistical benchmarks defined by empirical models. The fact that physical simulation models are thought to provide the only reliable method for extrapolating into conditions not previously observed has no bearing on whether or not today's simulation models outperform empirical models. Evidence on the length scales at which today's simulation models fail to outperform empirical benchmarks is presented; it is illustrated that this occurs even on global scales in decadal prediction. At all timescales considered thus far (as of July 2012), predictions based on simulation models are improved by blending with the output of statistical models. Blending is shown to be more interesting in the climate context than in the weather context, where blending with a history-based climatology is straightforward. As GCMs improve and as the Earth's climate moves further from that of the last century, the skill of simulation models and their relevance to climate services are expected to increase. Examples from both seasonal and decadal forecasting are used to discuss a third approach that may increase the role of current GCMs more quickly. Specifically, aspects of the experimental design in previous hindcast experiments are shown to hinder the use of GCM simulations for climate services. Alternative designs are proposed. The value of revisiting Thompson's classic approach to improving weather forecasting in the fifties in the context of climate services is discussed.
Moustafa, Ahmed A.; Wufong, Ella; Servatius, Richard J.; Pang, Kevin C. H.; Gluck, Mark A.; Myers, Catherine E.
2013-01-01
A recurrent-network model provides a unified account of the hippocampal region in mediating the representation of temporal information in classical eyeblink conditioning. Much empirical research is consistent with the general conclusion that delay conditioning (in which the conditioned stimulus (CS) and unconditioned stimulus (US) overlap and co-terminate) is independent of the hippocampal system, while trace conditioning (in which the CS terminates before US onset) depends on the hippocampus. However, recent studies show that, under some circumstances, delay conditioning can be hippocampal-dependent and trace conditioning can be spared following hippocampal lesion. Here, we present an extension of our prior trial-level models of hippocampal function and stimulus representation that can explain these findings within a unified framework. Specifically, the current model includes adaptive recurrent collateral connections that aid in the representation of intra-trial temporal information. With this model, as in our prior models, we argue that the hippocampus is not specialized for conditioned response timing, but rather is a general-purpose system that learns to predict the next state of all stimuli given the current state of variables encoded by activity in recurrent collaterals. As such, the model correctly predicts that hippocampal involvement in classical conditioning should be critical not only when there is an intervening trace interval, but also when there is a long delay between CS onset and US onset. Our model simulates empirical data from many variants of classical conditioning, including delay and trace paradigms in which the length of the CS, the inter-stimulus interval, or the trace interval is varied. Finally, we discuss model limitations, future directions, and several novel empirical predictions of this temporal processing model of hippocampal function and learning.
Empirical membrane lifetime model for heavy duty fuel cell systems
NASA Astrophysics Data System (ADS)
Macauley, Natalia; Watson, Mark; Lauritzen, Michael; Knights, Shanna; Wang, G. Gary; Kjeang, Erik
2016-12-01
Heavy duty fuel cells used in transportation system applications such as transit buses expose the fuel cell membranes to conditions that can lead to lifetime-limiting membrane failure via combined chemical and mechanical degradation. Highly durable membranes and reliable predictive models are therefore needed in order to achieve the ultimate heavy duty fuel cell lifetime target of 25,000 h. In the present work, an empirical membrane lifetime model was developed based on laboratory data from a suite of accelerated membrane durability tests. The model considers the effects of cell voltage, temperature, oxygen concentration, humidity cycling, humidity level, and platinum in the membrane using inverse power law and exponential relationships within the framework of a general log-linear Weibull life-stress statistical distribution. The obtained model is capable of extrapolating the membrane lifetime from accelerated test conditions to use level conditions during field operation. Based on typical conditions for the Whistler, British Columbia fuel cell transit bus fleet, the model predicts a stack lifetime of 17,500 h and a membrane leak initiation time of 9200 h. Validation performed with the aid of a field operated stack confirmed the initial goal of the model to predict membrane lifetime within 20% of the actual operating time.
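The general shape of such a log-linear Weibull life-stress model can be sketched as follows; the functional form (inverse power law in voltage, exponential in temperature) follows the description above, but every coefficient below is invented, not the paper's fitted value.

```python
# Hedged sketch of a log-linear Weibull life-stress model: characteristic
# life eta is log-linear in stress covariates, and lifetimes at a given
# reliability follow from the Weibull distribution. Coefficients are assumed.
import numpy as np

def characteristic_life_h(voltage_V, temp_K, rh_cycles_per_day):
    """eta = exp(b0) * V^(-a) * exp(c/T) * cycles^(-d)  (assumed form)."""
    b0, a, c, d = 2.0, 4.0, 3000.0, 0.5            # placeholder coefficients
    return np.exp(b0) * voltage_V**(-a) * np.exp(c / temp_K) * rh_cycles_per_day**(-d)

beta = 2.5                                          # Weibull shape (assumed)

# Extrapolate from accelerated test conditions to field-like conditions:
for label, (V, T, cyc) in {"accelerated": (0.9, 358.0, 40.0),
                           "field":       (0.7, 338.0, 8.0)}.items():
    eta = characteristic_life_h(V, T, cyc)
    b10 = eta * (-np.log(0.9)) ** (1.0 / beta)      # 10th-percentile life
    print(f"{label}: eta = {eta:,.0f} h, B10 life = {b10:,.0f} h")
```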
Predicting the stability of nanodevices
NASA Astrophysics Data System (ADS)
Lin, Z. Z.; Yu, W. F.; Wang, Y.; Ning, X. J.
2011-05-01
A simple model based on the statistics of single atoms is developed to predict the stability or lifetime of nanodevices without empirical parameters. Under certain conditions, the model reproduces the Arrhenius law and the Meyer-Neldel compensation rule. Compared with classical molecular-dynamics simulations for predicting the stability of a monatomic carbon chain at high temperature, the model proves to be much more accurate than transition state theory. Based on ab initio calculation of the static potential, the model can give a corrected lifetime for monatomic carbon and gold chains at higher temperature, and predicts that the monatomic chains are very stable at room temperature.
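The Arrhenius limit the model reproduces is easy to illustrate; in the sketch below the barrier height and attempt frequency are assumed values, not results from the paper.

```python
# Simple Arrhenius-law lifetime extrapolation, the limiting behavior the
# single-atom statistical model reproduces under certain conditions.
import numpy as np

k_B = 8.617e-5          # Boltzmann constant, eV/K
E_b = 1.1               # effective breaking barrier, eV (assumed)
nu0 = 1e13              # attempt frequency, 1/s (assumed)

def lifetime_s(T_K):
    """tau = nu0^-1 * exp(E_b / (k_B * T)) -- Arrhenius form."""
    return np.exp(E_b / (k_B * T_K)) / nu0

for T in (300.0, 600.0, 900.0):
    print(f"T = {T:.0f} K: tau ~ {lifetime_s(T):.3e} s")
```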
NASA Astrophysics Data System (ADS)
Dhakal, A. S.; Adera, S.
2017-12-01
Accurate daily streamflow prediction in ungauged watersheds with sparse information is challenging. The ability of a hydrologic model calibrated using nearby gauged watersheds to predict streamflow accurately depends on hydrologic similarities between the gauged and ungauged watersheds. This study examines daily streamflow predictions using the Precipitation-Runoff Modeling System (PRMS) for the largely ungauged San Antonio Creek watershed, a 96 km2 sub-watershed of the Alameda Creek watershed in Northern California. The process-based PRMS model is being used to improve the accuracy of recent San Antonio Creek streamflow predictions generated by two empirical methods. Although the San Antonio Creek watershed is largely ungauged, daily streamflow data exist for hydrologic years (HY) 1913-1930. PRMS was calibrated for HY 1913-1930 using streamflow data, modern-day land use and PRISM precipitation distribution, and gauged precipitation and temperature data from a nearby watershed. The PRMS model was then used to generate daily streamflows for HY 1996-2013, during which the watershed was ungauged, and hydrologic responses were compared to those of two nearby gauged sub-watersheds of Alameda Creek. Finally, the PRMS-predicted daily flows for HY 1996-2013 were compared to two empirically predicted streamflow time series: (1) the reservoir mass balance method and (2) correlation of historical streamflows from 80-100 years ago between San Antonio Creek and a nearby sub-watershed of Alameda Creek. While the mass balance approach using reservoir storage and transfers is helpful for estimating inflows to the reservoir, large discrepancies in daily streamflow estimation can arise. Similarly, correlation-based predicted daily flows, which rely on a relationship derived from flows collected 80-100 years ago, may not represent current watershed hydrologic conditions. This study aims to develop a method of streamflow prediction for the San Antonio Creek watershed by examining PRMS model outputs as well as empirically generated flow data for use in water resources management decisions. PRMS is also being used to better understand streamflow patterns in the San Antonio Creek watershed under a variety of antecedent soil moisture conditions, as the creek is generally dry between late spring and early fall.
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably ranging from simple empirically-based annual time-step models to more complex process-based daily time step models. While better accuracy is often assumed with more...
Topography and geology site effects from the intensity prediction model (ShakeMap) for Austria
NASA Astrophysics Data System (ADS)
del Puy Papí Isaba, María; Jia, Yan; Weginger, Stefan
2017-04-01
The seismicity in Austria can be categorized as moderate. Despite the fact that the hazard seems to be rather low, earthquakes can cause great damage and losses, especially in densely populated and industrialized areas. It is well known that equations which predict intensity as a function of magnitude and distance, among other parameters, are a useful tool for hazard and risk assessment. Therefore, this study aims to determine an empirical model of the ground shaking intensities (ShakeMap) of a series of earthquakes that occurred in Austria between 1000 and 2014. Furthermore, the obtained empirical model will support further interpretation of both contemporary and historical earthquakes. A total of 285 events whose epicenters were located in Austria, and a total of 22,739 reported macroseismic data points from Austria and adjoining countries, were used. These events span the period 1000-2014 and have local magnitudes greater than 3. In the first stage of model development, the data were carefully selected; e.g., only intensities of III or greater were used. In the second stage, the data were fitted to the selected empirical model. Finally, geology and topography corrections were obtained by means of the model residuals in order to derive intensity-based site amplification effects.
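An intensity prediction equation of this kind is typically fitted by least squares; the sketch below assumes the common functional form I = a + b*M + c*log10(R) + d*R (the abstract does not state the form actually adopted) and uses synthetic macroseismic data.

```python
# Illustrative least-squares fit of an intensity prediction equation.
# The functional form and all data below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(5)
M = rng.uniform(3.0, 5.5, 400)                     # local magnitudes
R = rng.uniform(5.0, 200.0, 400)                   # hypocentral distance, km
I_obs = 1.5 + 1.3 * M - 2.6 * np.log10(R) - 0.002 * R + rng.normal(0, 0.4, 400)

A = np.column_stack([np.ones_like(M), M, np.log10(R), R])
coef, *_ = np.linalg.lstsq(A, I_obs, rcond=None)
print("a, b, c, d =", np.round(coef, 3))

# Predicted intensity for an M 4.5 event observed at 30 km:
print("I(M4.5, 30 km) =", coef @ [1.0, 4.5, np.log10(30.0), 30.0])
```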
Statistical prediction of space motion sickness
NASA Technical Reports Server (NTRS)
Reschke, Millard F.
1990-01-01
Studies designed to empirically examine the etiology of motion sickness to develop a foundation for enhancing its prediction are discussed. Topics addressed include early attempts to predict space motion sickness, multiple test data base that uses provocative and vestibular function tests, and data base subjects; reliability of provocative tests of motion sickness susceptibility; prediction of space motion sickness using linear discriminate analysis; and prediction of space motion sickness susceptibility using the logistic model.
DEVELOPMENT OF THE VIRTUAL BEACH MODEL, PHASE 1: AN EMPIRICAL MODEL
With increasing attention focused on the use of multiple linear regression (MLR) modeling of beach fecal bacteria concentration, the validity of the entire statistical process should be carefully evaluated to assure satisfactory predictions. This work aims to identify pitfalls an...
O'Keefe, Victoria M; Wingate, LaRicka R; Tucker, Raymond P; Rhoades-Kerswill, Sarah; Slish, Meredith L; Davidson, Collin L
2014-01-01
American Indians (AIs) experience increased suicide rates compared with other groups in the United States. However, no past studies have examined AI suicide by way of a recent empirically supported theoretical model of suicide. The current study investigated whether AI suicidal ideation can be predicted by two components: thwarted belongingness and perceived burdensomeness, from the Interpersonal-Psychological Theory of Suicide (T. E. Joiner, 2005, Why people die by suicide. Cambridge, MA: Harvard University Press). One hundred seventy-one AIs representing 27 different tribes participated in an online survey. Hierarchical regression analyses showed that perceived burdensomeness significantly predicted suicidal ideation above and beyond demographic variables and depressive symptoms; however, thwarted belongingness did not. Additionally, the two-way interaction between thwarted belongingness and perceived burdensomeness significantly predicted suicidal ideation. These results provide initial support for continued research on the components of the Interpersonal-Psychological Theory of Suicide, an empirically supported theoretical model of suicide, to predict suicidal ideation among AI populations.
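The hierarchical regression with an interaction term described above has a standard structure, sketched below with synthetic data and illustrative covariates; these are not the study's data or exact covariate set.

```python
# Sketch of hierarchical (stepwise-block) regression with a two-way
# interaction: step 1 covariates, step 2 main effects, step 3 the
# belongingness x burdensomeness product term. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 171
age = rng.uniform(18, 60, n)
depress = rng.normal(0, 1, n)
belong = rng.normal(0, 1, n)
burden = rng.normal(0, 1, n)
ideation = 0.3 * depress + 0.5 * burden + 0.25 * belong * burden + rng.normal(0, 1, n)

steps = {
    "step 1": np.column_stack([age, depress]),
    "step 2": np.column_stack([age, depress, belong, burden]),
    "step 3": np.column_stack([age, depress, belong, burden, belong * burden]),
}
for name, X in steps.items():
    fit = sm.OLS(ideation, sm.add_constant(X)).fit()
    print(f"{name}: R^2 = {fit.rsquared:.3f}")   # R^2 gain across steps
```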
NASA Astrophysics Data System (ADS)
Belloni, Diogo; Schreiber, Matthias R.; Zorotovic, Mónica; Iłkiewicz, Krystian; Hurley, Jarrod R.; Giersz, Mirek; Lagos, Felipe
2018-06-01
The predicted and observed space densities of cataclysmic variables (CVs) have long been discrepant by at least an order of magnitude. The standard model of CV evolution predicts that the vast majority of CVs should be period bouncers, whose space density has recently been measured to be ρ ≲ 2 × 10^-5 pc^-3. We performed population synthesis of CVs using an updated version of the Binary Stellar Evolution (BSE) code for single and binary star evolution. We find that the recently suggested empirical prescription of consequential angular momentum loss (CAML) brings the predicted and observed space densities of CVs and period bouncers into agreement. To progress with our understanding of CV evolution it is crucial to understand the physical mechanism behind empirical CAML. Our changes to the BSE code are also provided in detail, which will allow the community to accurately model mass transfer in interacting binaries in which degenerate objects accrete from low-mass main-sequence donor stars.
NASA Astrophysics Data System (ADS)
Monteys, Xavier; Harris, Paul; Caloca, Silvia
2014-05-01
The coastal shallow water zone can be a challenging and expensive environment within which to acquire bathymetry and other oceanographic data using traditional survey methods. Dangers and limited swath coverage make some of these areas unfeasible to survey using ship-borne systems, and turbidity can preclude marine LIDAR. As a result, an extensive part of the coastline worldwide remains completely unmapped. Satellite EO multispectral data, after processing, allow timely, cost-efficient and quality-controlled information to be used for planning, monitoring, and regulating coastal environments. They have the potential to deliver repetitive derivation of medium-resolution bathymetry, coastal water properties and seafloor characteristics in shallow waters. Over the last 30 years, satellite passive imaging methods for bathymetry extraction, implementing analytical or empirical methods, have had limited success in predicting water depths. Different wavelengths of solar light penetrate the water column to varying depths; they can provide acceptable results down to 20 m but become less accurate in deeper waters. The study area is located in the inner part of Dublin Bay, on the east coast of Ireland. The region investigated is a C-shaped inlet 10 km long and 5 km wide, with water depths ranging from 0 to 10 m. The methodology employed in this research uses a ratio of reflectances from SPOT 5 satellite bands, differing from standard linear-transform algorithms. High-accuracy water depths were derived using multibeam data. The final empirical model uses spatially weighted geographical tools to retrieve predicted depths. The results of this paper confirm that SPOT satellite scenes are suitable for predicting depths using empirical models in very shallow embayments. Spatial regression models show better adjustment in the predictions than non-spatial models. The spatial regression equation used provides realistic results down to 6 m below the water surface, with reliable and error-controlled depths. Bathymetric extraction approaches involving satellite imagery are regarded as a fast, successful and economically advantageous solution for automatic water depth calculation in shallow and complex environments.
Process-based soil erodibility estimation for empirical water erosion models
USDA-ARS?s Scientific Manuscript database
A variety of modeling technologies exist for water erosion prediction each with specific parameters. It is of interest to scrutinize parameters of a particular model from the point of their compatibility with dataset of other models. In this research, functional relationships between soil erodibilit...
Investigation of pressure drop in capillary tube for mixed refrigerant Joule-Thomson cryocooler
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ardhapurkar, P. M.; Sridharan, Arunkumar; Atrey, M. D.
2014-01-29
A capillary tube is commonly used in small-capacity refrigeration and air-conditioning systems. It is also a preferred expansion device in mixed refrigerant Joule-Thomson (MR J-T) cryocoolers, since it is inexpensive and simple in configuration. However, the flow inside a capillary tube is complex, since the flashing process that occurs, as in refrigeration and air-conditioning systems, is metastable. A mixture of refrigerants such as nitrogen, methane, ethane, propane and iso-butane expands below its inversion temperature in the capillary tube of an MR J-T cryocooler and reaches cryogenic temperatures. The mass flow rate of the refrigerant mixture circulating through the capillary tube depends on the pressure difference across it. Many empirical correlations exist that predict the pressure drop across a capillary tube; however, they have not been tested for refrigerant mixtures or for the operating conditions of the cryocooler. The present paper assesses the existing empirical correlations for predicting the overall pressure drop across the capillary tube of the MR J-T cryocooler. The empirical correlations refer to homogeneous as well as separated flow models. Experiments are carried out to measure the overall pressure drop across the capillary tube for the cooler. Three different compositions of refrigerant mixture are used to study the pressure drop variations. The predicted overall pressure drop across the capillary tube is compared with the experimentally obtained value. The predictions obtained using the homogeneous model show a better match with the experimental results than those of the separated flow models.
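As a rough illustration of the simplest class of correlation assessed in studies like this, the following sketch (Python) integrates the homogeneous two-phase flow model along a capillary, in which both phases move at one velocity and mixture properties are mass-averaged over the vapour quality. The geometry, fluid properties, linear quality profile, and Blasius-type friction factor below are illustrative assumptions, not the specific correlations tested in the paper.

    import numpy as np

    # Assumed operating point and geometry (illustrative values only)
    G = 1000.0     # mass flux, kg/(m^2 s)
    D = 0.5e-3     # capillary inner diameter, m
    L = 2.0        # capillary length, m
    rho_l, rho_g = 600.0, 15.0   # liquid/vapour densities, kg/m^3
    mu_l = 120e-6                # liquid viscosity, Pa s

    z = np.linspace(0.0, L, 2001)
    x = 0.05 + 0.25 * z / L      # assumed linear vapour-quality profile

    # Homogeneous model: mass-averaged specific volume, single velocity
    v_h = x / rho_g + (1.0 - x) / rho_l
    Re = G * D / mu_l                  # liquid-only Reynolds number (simplification)
    f = 0.079 * Re ** -0.25            # Blasius-type friction factor (assumed form)

    dz = z[1] - z[0]
    dp_fric = np.sum(2.0 * f * G**2 * v_h * dz / D)  # frictional term (Fanning)
    dp_acc = G**2 * (v_h[-1] - v_h[0])               # accelerational term
    print(f"friction: {dp_fric/1e5:.2f} bar, acceleration: {dp_acc/1e5:.2f} bar")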
VERIFICATION AND VALIDATION OF THE SPARC MODEL
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values--that is, the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships that allow estimation of some ...
Predictable patterns of the May-June rainfall anomaly over East Asia
NASA Astrophysics Data System (ADS)
Xing, Wen; Wang, Bin; Yim, So-Young; Ha, Kyung-Ja
2017-02-01
During early summer (May-June, MJ), the East Asian (EA) subtropical front is a defining feature of the Asian monsoon, producing the most prominent precipitation band in the global subtropics. Here we show that dynamical prediction of early-summer EA (20°N-45°N, 100°E-130°E) rainfall made by four coupled climate models' ensemble hindcasts (1979-2010) yields only moderate skill and cannot be used to estimate predictability. The present study uses an alternative, empirical orthogonal function (EOF)-based physical-empirical (P-E) model approach to predict the rainfall anomaly pattern and estimate its potential predictability. The first three leading modes are physically meaningful and can be attributed, respectively, to (a) the interaction between the anomalous western North Pacific subtropical high and the underlying Indo-Pacific warm ocean, (b) the forcing associated with the North Pacific sea surface temperature (SST) anomaly, and (c) the development of equatorial central Pacific SST anomalies. A suite of P-E models is established to forecast the first three leading principal components. All predictors are 0 months ahead of May, so the prediction here is termed a 0-month-lead prediction. The cross-validated hindcast results demonstrate that these modes may be predicted with significant temporal correlation skills (0.48-0.72). Using the predicted principal components and the corresponding EOF patterns, the total MJ rainfall anomaly was hindcast for the period 1979-2015. The time-mean pattern correlation coefficient (PCC) score reaches 0.38, significantly higher than the dynamical models' multimodel ensemble skill (0.21). The estimated potential maximum attainable PCC is around 0.65, suggesting that the dynamical prediction models may have large room for improvement. Limitations and future work are discussed.
Monsoons: Processes, predictability, and the prospects for prediction
NASA Astrophysics Data System (ADS)
Webster, P. J.; Magaña, V. O.; Palmer, T. N.; Shukla, J.; Thomas, R. A.; Yanai, M.; Yasunari, T.
1998-06-01
The Tropical Ocean-Global Atmosphere (TOGA) program sought to determine the predictability of the coupled ocean-atmosphere system. The World Climate Research Programme's (WCRP) Global Ocean-Atmosphere-Land System (GOALS) program seeks to explore the predictability of the global climate system through investigation of the major planetary heat sources and sinks, and the interactions between them. The Asian-Australian monsoon system, which undergoes aperiodic, high-amplitude variations on intraseasonal, annual, biennial and interannual timescales, is a major focus of GOALS. Empirical seasonal forecasts of the monsoon have been made with moderate success for over 100 years. More recent modeling efforts have not been successful. Even simulation of the mean structure of the Asian monsoon has proven elusive, and the observed ENSO-monsoon relationship has been difficult to replicate. Simulation skill diverges between integrations by different models and between members of ensembles of the same model. This degree of spread is surprising given the relative success of empirical forecast techniques. Two possible explanations are presented: difficulty in modeling the monsoon regions, and nonlinear error growth due to regional hydrodynamical instabilities. It is argued that reconciling these explanations is imperative if prediction of the monsoon is to be improved. To this end, a thorough description of observed monsoon variability and the physical processes thought to be important is presented. Prospects for improving prediction and some strategies that may help achieve improvement are discussed.
Empirical Evaluation of Hunk Metrics as Bug Predictors
NASA Astrophysics Data System (ADS)
Ferzund, Javed; Ahsan, Syed Nadeem; Wotawa, Franz
Reducing the number of bugs is a crucial issue during software development and maintenance. Software process and product metrics are good indicators of software complexity. These metrics have been used to build bug predictor models that help developers maintain the quality of software. In this paper we empirically evaluate the use of hunk metrics as predictors of bugs. We present a technique for bug prediction that works at the smallest units of code change, called hunks. We build bug prediction models using random forests, an efficient machine-learning classifier. Hunk metrics are used to train the classifier, and each hunk metric is evaluated for its bug prediction capabilities. Our classifier can classify individual hunks as buggy or bug-free with 86% accuracy, 83% buggy-hunk precision and 77% buggy-hunk recall. We find that history-based and change-level hunk metrics are better predictors of bugs than code-level hunk metrics.
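A minimal sketch of this kind of hunk-level bug prediction, assuming a feature matrix of hunk metrics and buggy/bug-free labels, is shown below in Python with scikit-learn; the synthetic data and the six unnamed metrics are illustrative stand-ins for the authors' actual feature set.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    rng = np.random.default_rng(0)
    X = rng.random((1000, 6))                   # 6 hypothetical hunk metrics per hunk
    y = (X[:, 0] + X[:, 1] > 1.1).astype(int)   # stand-in labels: 1 = buggy hunk

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    pred = clf.predict(X_te)

    print("accuracy :", accuracy_score(y_te, pred))
    print("precision:", precision_score(y_te, pred))   # buggy-hunk precision
    print("recall   :", recall_score(y_te, pred))      # buggy-hunk recall
    # Per-metric importances mirror the evaluation of individual hunk metrics
    print("feature importances:", clf.feature_importances_)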
Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian
2014-01-14
The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to that of a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, their agreement with experimental results is rather good, underlining their usefulness. However, we show in the present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with an increasing number of snapshots were performed for two conformers of an eight-amino-acid sequence. In conclusion, upon structural changes the empirical approaches provide neither the expected magnitudes nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for NMR chemical shift evaluation, such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watson, R.
Waterflooding is the most commonly used secondary oil recovery technique. One of the requirements for understanding waterflood performance is a good knowledge of the basic properties of the reservoir rocks. This study is aimed at correlating rock-pore characteristics to oil recovery from various reservoir rock types and incorporating these properties into empirical models for predicting oil recovery. Accordingly, this report deals with the analysis and interpretation of experimental data collected from core floods and correlated against measurements of absolute permeability, porosity, wettability index, mercury porosimetry properties and irreducible water saturation. The results of the radial-core and linear-core flow investigations and the other associated experimental analyses are presented and incorporated into empirical models to improve the predictions of oil recovery resulting from waterflooding for sandstone and limestone reservoirs. For the radial-core case, the standardized regression model selected, based on a subset of the variables, predicted oil recovery by waterflooding with a standard deviation of 7%. For the linear-core case, separate models are developed using common rock properties, uncommon rock properties, and a combination of both. It was observed that residual oil saturation and oil recovery are better predicted when both common and uncommon rock/fluid properties are included in the predictive models.
Ultrasonic nondestructive evaluation, microstructure, and mechanical property interrelations
NASA Technical Reports Server (NTRS)
Vary, A.
1984-01-01
Ultrasonic techniques for mechanical property characterizations are reviewed and conceptual models are advanced for explaining and interpreting the empirically based results. At present, the technology is generally empirically based and is emerging from the research laboratory. Advancement of the technology will require establishment of theoretical foundations for the experimentally observed interrelations among ultrasonic measurements, mechanical properties, and microstructure. Conceptual models are applied to ultrasonic assessment of fracture toughness to illustrate an approach for predicting correlations found among ultrasonic measurements, microstructure, and mechanical properties.
Bounds on quantum confinement effects in metal nanoparticles
NASA Astrophysics Data System (ADS)
Blackman, G. Neal; Genov, Dentcho A.
2018-03-01
Quantum size effects on the permittivity of metal nanoparticles are investigated using the quantum box model. Explicit upper and lower bounds are derived for the permittivity and relaxation rates due to quantum confinement effects. These bounds are verified numerically, and the size dependence and frequency dependence of the empirical Drude size parameter are extracted from the model. Results suggest that the common practice of empirically modifying the dielectric function can lead to inaccurate predictions for highly uniform distributions of finite-sized particles.
Garcia Lopez, Sebastian; Kim, Philip M.
2014-01-01
Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
Effects of temperature on consumer-resource interactions.
Amarasekare, Priyanga
2015-05-01
Understanding how temperature variation influences the negative (e.g. self-limitation) and positive (e.g. saturating functional responses) feedback processes that characterize consumer-resource interactions is an important research priority. Previous work on this topic has yielded conflicting outcomes, with some studies predicting that warming should increase consumer-resource oscillations and others predicting that warming should decrease them. Here, I develop a consumer-resource model that both synthesizes previous findings in a common framework and yields novel insights about temperature effects on consumer-resource dynamics. I report three key findings. First, when the resource species' birth rate exhibits a unimodal temperature response, as demonstrated by a large number of empirical studies, the temperature range over which the consumer-resource interaction can persist is determined by the lower and upper temperature limits to the resource species' reproduction. This contrasts with the predictions of previous studies, which assume that the birth rate exhibits a monotonic temperature response and that consumer extinction is determined by temperature effects on the consumer species' traits rather than the resource species' traits. Second, the comparative analysis I have conducted shows that whether warming leads to an increase or decrease in consumer-resource oscillations depends on the manner in which temperature affects intraspecific competition. When the strength of self-limitation increases monotonically with temperature, warming causes a decrease in consumer-resource oscillations. However, if self-limitation is strongest at temperatures physiologically optimal for reproduction, a scenario previously unanalysed by theory but amply substantiated by empirical data, warming can cause an increase in consumer-resource oscillations. Third, the model yields testable comparative predictions about consumer-resource dynamics under alternative hypotheses for how temperature affects competitive and resource acquisition traits. Importantly, it does so through empirically quantifiable metrics for predicting temperature effects on consumer viability and consumer-resource oscillations, which obviates the need for parameterizing complex dynamical models. Tests of these metrics with empirical data on a host-parasitoid interaction yield realistic estimates of temperature limits for consumer persistence and of the propensity for consumer-resource oscillations, highlighting their utility in predicting temperature effects, particularly warming, on consumer-resource interactions in both natural and agricultural settings. © 2014 The Author. Journal of Animal Ecology © 2014 British Ecological Society.
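A minimal sketch of a temperature-dependent consumer-resource model in this spirit is given below (Python): the resource birth rate b(T) is unimodal and self-limitation q(T) increases monotonically with temperature, embedded in a Rosenzweig-MacArthur-type system with a saturating functional response. All functional forms and parameter values are illustrative assumptions, not the paper's fitted host-parasitoid parameters.

    import numpy as np
    from scipy.integrate import solve_ivp

    def b(T, b0=2.0, Topt=25.0, s=6.0):
        return b0 * np.exp(-((T - Topt) / s) ** 2)   # unimodal birth rate

    def q(T, q0=0.02, k=0.05, Tref=15.0):
        return q0 * np.exp(k * (T - Tref))           # monotonic self-limitation

    def rhs(t, y, T, a=0.5, h=0.3, e=0.6, d=0.4):
        R, C = y
        uptake = a * R * C / (1.0 + a * h * R)       # saturating functional response
        dR = b(T) * R * (1.0 - q(T) * R) - uptake
        dC = e * uptake - d * C
        return [dR, dC]

    for T in (20.0, 28.0):   # compare dynamics below and above the thermal optimum
        sol = solve_ivp(rhs, (0.0, 200.0), [5.0, 1.0], args=(T,))
        R_late = sol.y[0][sol.t > 150]
        print(f"T={T:.0f}C  late-run resource range: "
              f"{R_late.min():.2f}-{R_late.max():.2f}  (wide range => oscillations)")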
Selecting an Informative/Discriminating Multivariate Response for Inverse Prediction
Thomas, Edward V.; Lewis, John R.; Anderson-Cook, Christine M.; ...
2017-11-21
Inverse prediction is important in a wide variety of scientific and engineering contexts. One might use inverse prediction to predict fundamental properties/characteristics of an object using measurements obtained from it. This can be accomplished by "inverting" parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are science based; but often, forward models are empirically based, using the results of experimentation. For empirically based forward models, it is important that the experiments provide a sound basis to develop accurate forward models in terms of the properties/characteristics (factors). While nature dictates the causal relationship between factors and responses, experimenters can influence the type, accuracy, and precision of forward models that can be constructed via selection of factors, factor levels, and the set of trials that are performed. Whether the forward models are based on science, experiments or both, researchers can influence the ability to perform inverse prediction by selecting informative response variables. By using an errors-in-variables framework for inverse prediction, this paper shows via simple analysis and examples how the capability of a multivariate response (with respect to being informative and discriminating) can vary depending on how well the various responses complement one another over the range of the factor space of interest. Insights derived from this analysis could be useful for selecting a set of response variables among candidates in cases where the number of response variables that can be acquired is limited by difficulty, expense, and/or availability of material.
Daniel A. Yaussy
2000-01-01
Two individual-tree growth simulators are used to predict the growth and mortality on a 30-year-old forest site and an 80-year-old forest site in eastern Kentucky. The empirical growth and yield model (NE-TWIGS) was developed to simulate short-term (
Kinetic rate constant prediction supports the conformational selection mechanism of protein binding.
Moal, Iain H; Bates, Paul A
2012-01-01
The prediction of protein-protein kinetic rate constants provides a fundamental test of our understanding of molecular recognition and will play an important role in the modeling of complex biological systems. In this paper, a feature selection and regression algorithm is applied to mine a large set of molecular descriptors and construct simple models for association and dissociation rate constants using empirical data. Using separate test data for validation, the predicted rate constants can be combined to calculate binding affinity with accuracy matching that of state-of-the-art empirical free energy functions. The models show that the rate of association is linearly related to the proportion of unbound proteins in the bound conformational ensemble relative to the unbound conformational ensemble, indicating that the binding partners must adopt a geometry close to that of the bound complex prior to binding. Mirroring the conformational selection and population shift mechanism of protein binding, the models provide a strong separate line of evidence for the preponderance of this mechanism in protein-protein binding, complementing structural and theoretical studies.
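The way separately predicted rate constants combine into an affinity is standard thermodynamics: Kd = koff/kon and ΔG = RT ln Kd. A short sketch (Python, with illustrative rate values):

    import numpy as np

    R = 1.987e-3        # gas constant, kcal/(mol K)
    T = 298.15          # temperature, K

    k_on = 1.0e6        # predicted association rate, 1/(M s)   (example value)
    k_off = 1.0e-2      # predicted dissociation rate, 1/s      (example value)

    Kd = k_off / k_on               # dissociation constant, M
    dG = R * T * np.log(Kd)         # binding free energy, kcal/mol
    print(f"Kd = {Kd:.2e} M, dG = {dG:.2f} kcal/mol")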
Covariations in ecological scaling laws fostered by community dynamics.
Zaoli, Silvia; Giometto, Andrea; Maritan, Amos; Rinaldo, Andrea
2017-10-03
Scaling laws in ecology, intended both as functional relationships among ecologically relevant quantities and as the probability distributions that characterize their occurrence, have long attracted the interest of empiricists and theoreticians. Empirical evidence exists of power laws associated with the number of species inhabiting an ecosystem, their abundances, and traits. Although their functional form appears to be ubiquitous, empirical scaling exponents vary with ecosystem type and resource supply rate. The idea that ecological scaling laws are linked has been entertained before, but the full extent of macroecological pattern covariations, the role of the constraints imposed by finite resource supply, and a comprehensive empirical verification are still unexplored. Here, we propose a theoretical scaling framework that predicts the linkages of several macroecological patterns related to species' abundances and body sizes. We show that such a framework is consistent with the stationary-state statistics of a broad class of resource-limited community dynamics models, regardless of parameterization and model assumptions. We verify the predicted theoretical covariations by contrasting them with empirical data and provide testable hypotheses for yet unexplored patterns. We thus place the observed variability of ecological scaling exponents into a coherent statistical framework where patterns in ecology embed constrained fluctuations.
A physical-based gas-surface interaction model for rarefied gas flow simulation
NASA Astrophysics Data System (ADS)
Liang, Tengfei; Li, Qi; Ye, Wenjing
2018-01-01
Empirical gas-surface interaction models, such as the Maxwell model and the Cercignani-Lampis model, are widely used as boundary conditions in rarefied gas flow simulations. The accuracy of these models in predicting the macroscopic behavior of rarefied gas flows is less satisfactory in some cases, especially highly non-equilibrium ones. Molecular dynamics (MD) simulations can accurately resolve the gas-surface interaction process at the atomic scale and hence can predict macroscopic behavior accurately; they are, however, too computationally expensive to be applied to real problems. In this work, a statistical physical-based gas-surface interaction model, which complies with the basic relations of a boundary condition, is developed within the framework of the washboard model. By virtue of its physical basis, the new model is capable of capturing some important relations/trends that the classic empirical models fail to model correctly. As such, the new model is much more accurate than the classic models while being more efficient than MD simulations. It can therefore serve as a more accurate and efficient boundary condition for rarefied gas flow simulations.
Creasy, Arch; Reck, Jason; Pabst, Timothy; Hunter, Alan; Barker, Gregory; Carta, Giorgio
2018-05-29
A previously developed empirical interpolation (EI) method is extended to predict highly overloaded multicomponent elution behavior on a cation exchange (CEX) column based on batch isotherm data. Instead of a fully mechanistic model, the EI method employs an empirically modified multicomponent Langmuir equation to correlate two-component adsorption isotherm data at different salt concentrations. Piecewise cubic interpolating polynomials are then used to predict competitive binding at intermediate salt concentrations. The approach is tested for the separation of monoclonal antibody monomer and dimer mixtures by gradient elution on the cation exchange resin Nuvia HR-S. Adsorption isotherms are obtained over a range of salt concentrations with varying monomer and dimer concentrations. Coupled with a lumped kinetic model, the interpolated isotherms predict the column behavior under highly overloaded conditions. Predictions based on the EI method showed good agreement with experimental elution curves for protein loads up to 40 mg/mL of column, or about 50% of the column binding capacity. The approach can be extended to other chromatographic modalities and to more than two components.
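A hedged sketch of the interpolation step follows (Python): fit an isotherm at each measured salt level, then interpolate the fitted parameters with shape-preserving piecewise cubics (PCHIP) to predict binding at intermediate salt concentrations. For brevity the sketch uses a plain one-component Langmuir form and made-up parameter values, standing in for the paper's empirically modified multicomponent equation.

    import numpy as np
    from scipy.interpolate import PchipInterpolator

    salt = np.array([20.0, 50.0, 100.0, 200.0])   # mM, measured levels (assumed)
    qmax = np.array([80.0, 65.0, 40.0, 15.0])     # fitted capacities, mg/mL (assumed)
    K = np.array([2.0, 1.2, 0.5, 0.1])            # fitted affinities, mL/mg (assumed)

    qmax_of_salt = PchipInterpolator(salt, qmax)  # monotone, shape-preserving
    K_of_salt = PchipInterpolator(salt, K)

    def q_bound(c_protein, c_salt):
        """Langmuir uptake at an intermediate salt concentration."""
        qm, k = qmax_of_salt(c_salt), K_of_salt(c_salt)
        return qm * k * c_protein / (1.0 + k * c_protein)

    print(q_bound(c_protein=5.0, c_salt=75.0))    # prediction between measured levels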
NASA Astrophysics Data System (ADS)
Shevade, Abhijit V.; Ryan, Margaret A.; Homer, Margie L.; Zhou, Hanying; Manfreda, Allison M.; Lara, Liana M.; Yen, Shiao-Pin S.; Jewell, April D.; Manatt, Kenneth S.; Kisor, Adam K.
We have developed a Quantitative Structure-Activity Relationships (QSAR) based approach to correlate the response of chemical sensors in an array with molecular descriptors. A novel molecular descriptor set has been developed; this set combines descriptors of sensing film-analyte interactions, representing sensor response, with a basic analyte descriptor set commonly used in QSAR studies. The descriptors are obtained using a combination of molecular modeling tools and empirical and semi-empirical Quantitative Structure-Property Relationships (QSPR) methods. The sensors under investigation are polymer-carbon sensing films which have been exposed to analyte vapors at parts-per-million (ppm) concentrations; response is measured as the change in film resistance. Statistically validated QSAR models have been developed using Genetic Function Approximation (GFA) for a sensor array and a given training data set. The applicability of the sensor response models has been tested by using them to predict the sensor activities for test analytes not included in the training set used for model development. The validated QSAR sensor response models show good predictive ability. The QSAR approach is a promising computational tool for sensing materials evaluation and selection. It can also be used to predict the response of an existing sensing film to new target analytes.
Rein, David B
2005-01-01
Objective To stratify traditional risk-adjustment models by health severity classes in a way that is empirically based, is accessible to policy makers, and improves predictions of inpatient costs. Data Sources Secondary data created from the administrative claims from all 829,356 children aged 21 years and under enrolled in Georgia Medicaid in 1999. Study Design A finite mixture model was used to assign child Medicaid patients to health severity classes. These class assignments were then used to stratify both portions of a traditional two-part risk-adjustment model predicting inpatient Medicaid expenditures. Traditional model results were compared with the stratified model using actuarial statistics. Principal Findings The finite mixture model identified four classes of children: a majority healthy class and three illness classes with increasing levels of severity. Stratifying the traditional two-part risk-adjustment model by health severity classes improved its R2 from 0.17 to 0.25. The majority of additional predictive power resulted from stratifying the second part of the two-part model. Further, the preference for the stratified model was unaffected by months of patient enrollment time. Conclusions Stratifying health care populations based on measures of health severity is a powerful method to achieve more accurate cost predictions. Insurers who ignore the predictive advances of sample stratification in setting risk-adjusted premiums may create strong financial incentives for adverse selection. Finite mixture models provide an empirically based, replicable methodology for stratification that should be accessible to most health care financial managers. PMID:16033501
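A minimal sketch of a class-stratified two-part cost model is shown below (Python): part one predicts any inpatient use with a logistic regression, part two predicts cost among users with a regression on log cost, and the pair is fitted separately within each severity class. The simulated data and the four hard class labels are stand-ins for the claims data and the finite-mixture assignments.

    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(1)
    n = 5000
    X = rng.normal(size=(n, 3))                   # demographic/risk covariates
    cls = rng.integers(0, 4, size=n)              # mixture-model class labels
    use = rng.random(n) < (0.05 + 0.08 * cls)     # P(inpatient use) rises with class
    cost = np.where(use, np.exp(7 + 0.5 * cls + rng.normal(0, 1, n)), 0.0)

    pred = np.zeros(n)
    for c in range(4):                            # stratified two-part model
        m = cls == c
        p_use = LogisticRegression().fit(X[m], use[m]).predict_proba(X[m])[:, 1]
        users = m & use
        logc = LinearRegression().fit(X[users], np.log(cost[users]))
        # E[cost] = P(use) * E[cost | use]; smearing-type retransformation omitted
        pred[m] = p_use * np.exp(logc.predict(X[m]))

    print("mean predicted vs actual:", pred.mean().round(0), cost.mean().round(0))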
D. M. Jimenez; B. W. Butler; J. Reardon
2003-01-01
Current methods for predicting fire-induced plant mortality in shrubs and trees are largely empirical. These methods are not readily linked to duff burning, soil heating, and surface fire behavior models. In response to the need for a physics-based model of this process, a detailed model for predicting the temperature distribution through a tree stem as a function of...
Computer programs to predict induced effects of jets exhausting into a crossflow
NASA Technical Reports Server (NTRS)
Perkins, S. C., Jr.; Mendenhall, M. R.
1984-01-01
A user's manual is presented for two computer programs developed to predict the induced effects of jets exhausting into a crossflow. Program JETPLT predicts pressures induced on an infinite flat plate by a jet exhausting at angles to the plate, and Program JETBOD, in conjunction with a panel code, predicts pressures induced on a body of revolution by a jet exhausting normal to the surface. Both codes use a potential model of the jet and adjacent surface with empirical corrections for viscous or nonpotential effects. The manual contains a description of the use of both programs, instructions for the preparation of input, descriptions of the output, limitations of the codes, and sample cases. In addition, procedures to extend both codes to include additional empirical correlations are described.
Towards a universal model for carbon dioxide uptake by plants
Wang, Han; Prentice, I. Colin; Keenan, Trevor F.; ...
2017-09-04
Gross primary production (GPP) - the uptake of carbon dioxide (CO2) by leaves, and its conversion to sugars by photosynthesis - is the basis for life on land. Earth System Models (ESMs) incorporating the interactions of land ecosystems and climate are used to predict the future of the terrestrial sink for anthropogenic CO2. ESMs require accurate representation of GPP. However, current ESMs disagree on how GPP responds to environmental variations, suggesting a need for a more robust theoretical framework for modelling. Here we focus on a key quantity for GPP, the ratio of leaf-internal to external CO2 (χ). χ is tightly regulated and depends on environmental conditions, but is represented empirically and incompletely in today's models. We show that a simple evolutionary optimality hypothesis predicts specific quantitative dependencies of χ on temperature, vapour pressure deficit and elevation, and that these same dependencies emerge from an independent analysis of empirical χ values derived from a worldwide dataset of >3,500 leaf stable carbon isotope measurements. A single global equation embodying these relationships then unifies the empirical light-use efficiency model with the standard model of C3 photosynthesis, and successfully predicts GPP measured at eddy-covariance flux sites. This success is notable given the equation's simplicity and broad applicability across biomes and plant functional types. Finally, it provides a theoretical underpinning for the analysis of plant functional coordination across species and emergent properties of ecosystems, and a potential basis for the reformulation of the controls of GPP in next-generation ESMs.
Lamers, L M
1999-01-01
OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness funds provide compulsory health insurance coverage for the 60 percent of the population in the lowest income brackets. STUDY DESIGN: A demographic model and DCG capitation models are estimated by means of ordinary least squares, with an individual's annual healthcare expenditures in 1994 as the dependent variable. For subgroups based on health survey information, costs predicted by the models are compared with actual costs. Using stepwise regression procedures a subset of relevant survey variables that could improve the predictive accuracy of the three-year DCG model was identified. Capitation models were extended with these variables. DATA COLLECTION/EXTRACTION METHODS: For the empirical analysis, panel data of sickness fund members were used that contained demographic information, annual healthcare expenditures, and diagnostic information from hospitalizations for each member. In 1993, a mailed health survey was conducted among a random sample of 15,000 persons in the panel data set, with a 70 percent response rate. PRINCIPAL FINDINGS: The predictive accuracy of the demographic model improves when it is extended with diagnostic information from prior hospitalizations (DCGs). A subset of survey variables further improves the predictive accuracy of the DCG capitation models. The predictable profits and losses based on survey information for the DCG models are smaller than for the demographic model. Most persons with predictable losses based on health survey information were not hospitalized in the preceding year. CONCLUSIONS: The use of diagnostic information from prior hospitalizations is a promising option for improving the demographic capitation payment formula. This study suggests that diagnostic information from outpatient utilization is complementary to DCGs in predicting future costs. PMID:10029506
A probabilistic process model for pelagic marine ecosystems informed by Bayesian inverse analysis
Marine ecosystems are complex systems with multiple pathways that produce feedback cycles, which may lead to unanticipated effects. Models abstract this complexity and allow us to predict, understand, and hypothesize. In ecological models, however, the paucity of empirical data...
Bayesian model reduction and empirical Bayes for group (DCM) studies
Friston, Karl J.; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E.; van Wijk, Bernadette C.M.; Ziegler, Gabriel; Zeidman, Peter
2016-01-01
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level – e.g., dynamic causal models – and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. PMID:26569570
MERGANSER- Predicting Mercury Levels in Fish and Loons in New England Lakes
MERGANSER (MERcury Geo-spatial AssessmentS for the New England Region) is an empirical least squares multiple regression model using atmospheric deposition of mercury (Hg) and readily obtainable lake and watershed features to predict fish and common loon Hg (as methyl mercury) in ...
DOT National Transportation Integrated Search
2010-08-01
This study was intended to recommend future directions for the development of TxDOT's Mechanistic-Empirical (TexME) design system. For stress predictions, a multi-layer linear elastic system was evaluated and its validity was verified by compar...
An empirical model to forecast solar wind velocity through statistical modeling
NASA Astrophysics Data System (ADS)
Gao, Y.; Ridley, A. J.
2013-12-01
The accurate prediction of the solar wind velocity has been a major challenge in the space weather community. Previous studies have proposed many empirical and semi-empirical models to forecast the solar wind velocity, based either on historical observations (e.g. the persistence model) or on instantaneous observations of the Sun (e.g. the Wang-Sheeley-Arge model). In this study, we use the one-minute WIND data from January 1995 to August 2012 to investigate and compare the performance of four models often used in the literature, here referred to as the null model, the persistence model, the one-solar-rotation-ago model, and the Wang-Sheeley-Arge model. It is found that, measured by root-mean-square error, the persistence model gives the most accurate predictions within two days. Beyond two days, the Wang-Sheeley-Arge model serves as the best model, though it only slightly outperforms the null model and the one-solar-rotation-ago model. Finally, we apply least-squares regression to linearly combine the null model, the persistence model, and the one-solar-rotation-ago model into a 'general persistence model'. Comparing its performance against the four aforementioned models shows that the general persistence model outperforms all of them within five days. Due to its great simplicity and superb performance, we believe that the general persistence model can serve as a benchmark in the forecasting of solar wind velocity and has the potential to be modified into better models.
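The 'general persistence model' amounts to an ordinary least-squares combination of three simple forecasts. A sketch with a synthetic hourly series standing in for the WIND data (Python; the 27-day lag and two-day horizon follow the description above, everything else is illustrative):

    import numpy as np

    rng = np.random.default_rng(2)
    n, lag27 = 5000, 27 * 24        # hourly series; ~27-day solar rotation lag
    v = 400 + 80 * np.sin(2 * np.pi * np.arange(n) / lag27) + rng.normal(0, 30, n)

    lead = 48                       # forecast horizon: two days ahead (hours)
    t = np.arange(lag27, n - lead)
    null = np.full_like(t, v.mean(), dtype=float)   # null model: climatological mean
    pers = v[t]                                     # persistence: current value
    rot = v[t - lag27 + lead]                       # value one rotation before target
    target = v[t + lead]

    A = np.column_stack([null, pers, rot])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)   # fitted combination weights
    rmse = np.sqrt(np.mean((A @ coef - target) ** 2))
    print("weights:", coef.round(3), " RMSE:", rmse.round(1))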
NASA Astrophysics Data System (ADS)
Bora, S. S.; Scherbaum, F.; Kuehn, N. M.; Stafford, P.; Edwards, B.
2014-12-01
In a probabilistic seismic hazard assessment (PSHA) framework, it remains a challenge to adjust ground motion prediction equations (GMPEs) for application in different seismological environments. In this context, this study presents a complete framework for the development of a response spectral GMPE that is easily adjustable to different seismological conditions and does not suffer from the technical problems associated with adjustment in the response spectral domain. Essentially, the approach consists of an empirical FAS (Fourier Amplitude Spectrum) model and a ground-motion duration model, which are combined within the random vibration theory (RVT) framework to obtain the full response spectral ordinates. Additionally, the FAS corresponding to individual acceleration records are extrapolated beyond the frequency range defined by the data using the stochastic FAS model obtained by inversion, as described in Edwards & Faeh (2013). To that end, an empirical duration model is derived, tuned at each oscillator frequency to optimize the fit between RVT-based and observed response spectral ordinates. Although the main motivation of the presented approach was to address the adjustability issues of response spectral GMPEs, comparison of median predicted response spectra with other regional models indicates that the approach can also be used as a stand-alone model. Moreover, a significantly lower aleatory variability (σ < 0.5 in log units) at shorter periods, in comparison to other regional models, makes it a potentially viable alternative to classical regression-based GMPEs (fitted to response spectral ordinates) for seismic hazard studies in the near future. The dataset used for the presented analysis is a subset of the recently compiled RESORCE-2012 database covering Europe, the Middle East and the Mediterranean region.
NASA Astrophysics Data System (ADS)
Ishtiaq, K. S.; Abdul-Aziz, O. I.
2014-12-01
We developed a scaling-based, simple empirical model for spatio-temporally robust prediction of the diurnal cycles of wetland net ecosystem exchange (NEE) using an extended stochastic harmonic algorithm (ESHA). A reference-time observation from each diurnal cycle was utilized as the scaling parameter to normalize and collapse the hourly observed NEE of different days into a single, dimensionless diurnal curve. The modeling concept was tested by parameterizing the unique diurnal curve and predicting hourly NEE for May to October (summer growing and fall seasons) between 2002 and 2012 for diverse wetland ecosystems, as available in the U.S. AmeriFLUX network. As an example, the Taylor Slough short-hydroperiod marsh site in the Florida Everglades had data for four consecutive growing seasons from 2009 to 2012; results showed impressive modeling efficiency (coefficient of determination, R2 = 0.66) and accuracy (ratio of root-mean-square error to the standard deviation of observations, RSR = 0.58). Model validation was performed with an independent year of NEE data, indicating equally impressive performance (R2 = 0.68, RSR = 0.57). The model included a parsimonious set of estimated parameters, which exhibited spatio-temporal robustness by collapsing onto narrow ranges. Model robustness was further investigated by analytically deriving and quantifying parameter sensitivity coefficients and a first-order uncertainty measure. The relatively robust, empirical NEE model can be applied to simulate continuous (e.g., hourly) NEE time series from a single reference observation (or a set of limited observations) at different wetland sites of comparable hydro-climatology, biogeochemistry, and ecology. The method can also be used for robust gap-filling of missing data in observed time series of periodic ecohydrological variables for wetland or other ecosystems.
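A hedged sketch of the scaling idea follows (Python): divide each day's hourly NEE by its observation at a fixed reference hour so that days collapse onto one dimensionless diurnal curve, then fit that curve with a small harmonic series and rescale by a single reference observation to reconstruct a full day. The synthetic data, noon reference hour, and two harmonics are illustrative choices, not the paper's exact ESHA formulation.

    import numpy as np

    rng = np.random.default_rng(3)
    days, hours = 60, np.arange(24)
    amp = 8 + 2 * rng.random(days)                     # day-to-day amplitude
    nee = -amp[:, None] * np.clip(np.sin(np.pi * (hours - 6) / 12), 0, None) \
          + rng.normal(0, 0.3, (days, 24))             # daytime uptake is negative

    ref_hour = 12                                      # reference-time observation
    scaled = nee / nee[:, [ref_hour]]                  # collapse onto one curve
    mean_curve = scaled.mean(axis=0)

    # least-squares fit of a 2-harmonic series to the dimensionless curve
    w = 2 * np.pi * hours / 24
    B = np.column_stack([np.ones(24), np.cos(w), np.sin(w),
                         np.cos(2 * w), np.sin(2 * w)])
    coef, *_ = np.linalg.lstsq(B, mean_curve, rcond=None)

    # predict a full diurnal cycle of a day from its single noon observation
    nee_hat = (B @ coef) * nee[-1, ref_hour]
    rmse = np.sqrt(np.mean((nee_hat - nee[-1]) ** 2))
    print("RMSE of reconstructed day:", rmse.round(2))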
True Density Prediction of Garlic Slices Dehydrated by Convection.
López-Ortiz, Anabel; Rodríguez-Ramírez, Juan; Méndez-Lagunas, Lilia
2016-01-01
Physicochemical parameters with constant values are commonly employed in mass- and heat-transfer modeling of the air drying process. However, structural properties are not constant under drying conditions. Empirical, semi-theoretical, and theoretical models have been proposed to describe true density (ρp). These models consider only ideal behavior and assume a linear relationship between ρp and moisture content (X); nevertheless, some materials exhibit nonlinear behavior of ρp as a function of X, with a tendency toward being concave-down. This behavior, which can be observed in garlic and carrots, has been difficult to model mathematically. This work proposes a semi-theoretical model for predicting ρp values that takes into account the concave-down behavior occurring at the end of the drying process. The model includes the dependency of the dry solid density (ρs) on external conditions (air drying temperature, Ta), the inside temperature of the garlic slices (Ti), and the moisture content (X) obtained from experimental data on the drying process. Calculations show that ρs is not a linear function of Ta, X, and Ti. An empirical correlation for ρs is proposed as a function of Ti and X, and an adjustment equation for Ti is proposed as a function of Ta and X. The proposed model for ρp was validated using experimental data on the sliced garlic and compared with theoretical and empirical models available in the scientific literature. Deviation between the experimental and predicted data was determined. An explanation of the nonlinear behavior of ρs and ρp as functions of X, taking into account second-order phase changes, is then presented. © 2015 Institute of Food Technologists®
Karimi, Leila; Ghassemi, Abbas
2016-07-01
Among the different technologies developed for desalination, the electrodialysis/electrodialysis reversal (ED/EDR) process is one of the most promising for treating brackish water with low salinity when there is a high risk of scaling. Multiple researchers have investigated ED/EDR to optimize the process, determine the effects of operating parameters, and develop theoretical/empirical models. Previously published empirical/theoretical models have evaluated the effect of the hydraulic conditions of ED/EDR on the limiting current density using dimensionless numbers. The reason for previous studies' emphasis on limiting current density is twofold: 1) to maximize ion removal, most ED/EDR systems are operated close to limiting current conditions if there is no scaling potential in the concentrate chamber due to a high concentration of less-soluble salts; and 2) for modeling the ED/EDR system with dimensionless numbers, it is more accurate and convenient to use the limiting current density, where the boundary layer's characteristics are known at constant electrical conditions. To improve knowledge of ED/EDR systems, ED/EDR models should also be developed for the Ohmic region, where operation reduces energy consumption, facilitates targeted ion removal, and prolongs membrane life compared to limiting current conditions. In this paper, theoretical/empirical models were developed for ED/EDR performance over a wide range of operating conditions. The presented ion removal and selectivity models were developed for the removal of monovalent and divalent ions utilizing the dominant dimensionless numbers obtained from laboratory-scale electrodialysis experiments. At any system scale, these models can predict ED/EDR performance in terms of monovalent and divalent ion removal. Copyright © 2016 Elsevier Ltd. All rights reserved.
Prediction of pressure drop in fluid tuned mounts using analytical and computational techniques
NASA Technical Reports Server (NTRS)
Lasher, William C.; Khalilollahi, Amir; Mischler, John; Uhric, Tom
1993-01-01
A simplified model for predicting pressure drop in fluid tuned isolator mounts was developed. The model is based on an exact solution to the Navier-Stokes equations and was made more general through the use of empirical coefficients. The values of these coefficients were determined by numerical simulation of the flow using the commercial computational fluid dynamics (CFD) package FIDAP.
Modeling of ESD events from polymeric surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pfeifer, Kent Bryant
2014-03-01
Transient electrostatic discharge (ESD) events are studied to assemble a predictive model of discharge from polymer surfaces. An analog circuit simulation is produced and its response is compared to various literature sources to explore its capabilities and limitations. Results suggest that polymer ESD events can be predicted to within an order of magnitude. These results compare well to empirical findings from other sources having similar reproducibility.
The evolution of cooperative breeding in the African cichlid fish, Neolamprologus pulcher.
Wong, Marian; Balshine, Sigal
2011-05-01
The conundrum of why subordinate individuals assist dominants at the expense of their own direct reproduction has received much theoretical and empirical attention over the last 50 years. During this time, birds and mammals have taken centre stage as model vertebrate systems for exploring why helpers help. However, fish have great potential for enhancing our understanding of the generality and adaptiveness of helping behaviour because of the ease with which they can be experimentally manipulated under controlled laboratory and field conditions. In particular, the freshwater African cichlid, Neolamprologus pulcher, has emerged as a promising model species for investigating the evolution of cooperative breeding, with 64 papers published on this species over the past 27 years. Here we clarify current knowledge pertaining to the costs and benefits of helping in N. pulcher by critically assessing the existing empirical evidence. We then provide a comprehensive examination of the evidence pertaining to four key hypotheses for why helpers might help: (1) kin selection; (2) pay-to-stay; (3) signals of prestige; and (4) group augmentation. For each hypothesis, we outline the underlying theory, address the appropriateness of N. pulcher as a model species and describe the key predictions and associated empirical tests. For N. pulcher, we demonstrate that the kin selection and group augmentation hypotheses have received partial support. One of the key predictions of the pay-to-stay hypothesis has failed to receive any support despite numerous laboratory and field studies; thus as it stands, the evidence for this hypothesis is weak. There have been no empirical investigations addressing the key predictions of the signals of prestige hypothesis. By outlining the key predictions of the various hypotheses, and highlighting how many of these remain to be tested explicitly, our review can be regarded as a roadmap in which potential paths for future empirical research into the evolution of cooperative breeding are proposed. Overall, we clarify what is currently known about cooperative breeding in N. pulcher, address discrepancies among studies, caution against incorrect inferences that have been drawn over the years and suggest promising avenues for future research in fishes and other taxonomic groups. © 2010 The Authors. Biological Reviews © 2010 Cambridge Philosophical Society.
NASA Astrophysics Data System (ADS)
Khademian, Amir; Abdollahipour, Hamed; Bagherpour, Raheb; Faramarzi, Lohrasb
2017-10-01
In addition to the numerous planning and executive challenges, underground excavation in urban areas is always accompanied by certain destructive effects, especially at the ground surface; ground settlement is the most important of these effects, and different empirical, analytical and numerical methods exist for its estimation. Since geotechnical models are associated with considerable model uncertainty, this study characterized the model uncertainty of settlement estimation models through a systematic comparison between model predictions and past performance data derived from instrumentation. To do so, the amount of surface settlement induced by excavation of the Qom subway tunnel was estimated via empirical (Peck), analytical (Loganathan and Poulos) and numerical (FDM) methods; the resulting maximum settlement values of the models were 1.86, 2.02 and 1.52 cm, respectively. The comparison of these predicted amounts with the actual data from instrumentation was employed to specify the uncertainty of each model. The numerical model outcomes, with a relative error of 3.8%, best matched reality, while the analytical method, with a relative error of 27.8%, yielded the highest level of model uncertainty.
NASA Technical Reports Server (NTRS)
Townsend, Lawrence W.; Tripathi, Ram K.; Khan, Ferdous
1993-01-01
Cross-section predictions with semi-empirical nuclear fragmentation models from the Langley Research Center and the Naval Research Laboratory are compared with experimental data for the breakup of relativistic iron and argon projectile nuclei in various targets. Both these models are commonly used to provide fragmentation cross-section inputs into galactic cosmic ray transport codes for shielding and exposure analyses. Overall, the Langley model appears to yield better agreement with the experimental data.
NASA Astrophysics Data System (ADS)
Livan, Giacomo; Alfarano, Simone; Scalas, Enrico
2011-07-01
We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
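As a minimal sketch of the kind of comparison involved (not the authors' filtering procedure), one can set the eigenvalues of an empirical correlation matrix against the Marchenko-Pastur bulk that random matrix theory predicts for purely noisy returns; `T`, `N` and the synthetic data below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 2000, 100                       # observations, stocks
returns = rng.standard_normal((T, N))  # stand-in for standardized log-returns

C = np.corrcoef(returns, rowvar=False)
eigvals = np.linalg.eigvalsh(C)

q = N / T
lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2  # MP edges
outside = eigvals[(eigvals < lam_min) | (eigvals > lam_max)]
print(f"MP bulk: [{lam_min:.3f}, {lam_max:.3f}]; "
      f"{outside.size} of {N} eigenvalues outside the RMT bulk")
```

Eigenvalues falling outside the Marchenko-Pastur bulk indicate structure (such as cross correlations between stocks) that cannot be attributed to noise alone.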
Hindcasting of Equatorial Spread F Using Seasonal Empirical Models
NASA Astrophysics Data System (ADS)
Aswathy, R. P.; Manju, G.
2018-02-01
The role of gravity waves in modulating equatorial spread F (ESF) day-to-day variability is investigated using ionosonde data at Trivandrum (geographic coordinates 8.5°N, 77°E; mean geomagnetic latitude -0.3°N), a magnetic equatorial location. A novel empirical model that incorporates the combined effects of electrodynamics and gravity waves in modulating ESF occurrence during the autumnal equinox season was presented by Aswathy and Manju (2017). In the present study, the height variations of the requisite gravity wave seed perturbations for ESF are examined for the vernal equinoxes, summer solstices, and winter solstices of different years. Subsequently, an empirical model incorporating the electrodynamical effects and the gravity wave modulation is developed for each of the seasons. Accordingly, for each season, the threshold curve can be demarcated provided the solar flux index (F10.7) is known. The empirical models are validated using data for high, moderate, and low solar activity years corresponding to each season. In the next stage, this model is to be fine-tuned to facilitate the prediction of ESF well before its onset.
Mathematical and computational modeling simulation of solar drying Systems
USDA-ARS?s Scientific Manuscript database
Mathematical modeling of solar drying systems has the primary aim of predicting the required drying time for a given commodity, dryer type, and environment. Both fundamental (Fickian diffusion) and semi-empirical drying models have been applied to the solar drying of a variety of agricultural commo...
Key Questions in Building Defect Prediction Models in Practice
NASA Astrophysics Data System (ADS)
Ramler, Rudolf; Wolfmaier, Klaus; Stauder, Erwin; Kossak, Felix; Natschläger, Thomas
The information about which modules of a future version of a software system are defect-prone is a valuable planning aid for quality managers and testers. Defect prediction promises to indicate these defect-prone modules. However, constructing effective defect prediction models in an industrial setting involves a number of key questions. In this paper we discuss ten key questions identified in the context of establishing defect prediction in a large software development project. Seven consecutive versions of the software system have been used to construct and validate defect prediction models for system test planning. Furthermore, the paper presents initial empirical results from the studied project and, by this means, contributes answers to the identified questions.
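As an illustration of the cross-version setup described above (a hedged sketch, not the authors' models or data), one can train a classifier on module metrics from one version and validate it on the next; the features and the label-generating rule below are hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical module metrics (e.g., size, complexity, churn) and defect
# labels for two consecutive versions; the linear rule is an assumption.
rng = np.random.default_rng(1)
w = np.array([1.5, 2.0, 1.0])
X_train = rng.random((300, 3))                   # modules of version n
y_train = X_train @ w + rng.normal(0, 0.5, 300) > 2.2
X_next = rng.random((250, 3))                    # modules of version n+1
y_next = X_next @ w + rng.normal(0, 0.5, 250) > 2.2

model = LogisticRegression().fit(X_train, y_train)
scores = model.predict_proba(X_next)[:, 1]       # defect-proneness ranking
print(f"cross-version AUC: {roc_auc_score(y_next, scores):.2f}")
```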
Empirical source strength correlations for RANS-based acoustic analogy methods
NASA Astrophysics Data System (ADS)
Kube-McDowell, Matthew Tyndall
JeNo is a jet noise prediction code based on an acoustic analogy method developed by Mani, Gliebe, Balsa, and Khavaran. Using the flow predictions from a standard Reynolds-averaged Navier-Stokes computational fluid dynamics solver, JeNo predicts the overall sound pressure level and angular spectra for high-speed hot jets over a range of observer angles, with a processing time suitable for rapid design purposes. JeNo models the noise from hot jets as a combination of two types of noise sources: quadrupole sources dependent on velocity fluctuations, which represent the major noise of turbulent mixing, and dipole sources dependent on enthalpy fluctuations, which represent the effects of thermal variation. These two sources are modeled by JeNo as propagating independently into the far field, with no cross-correlation at the observer location. However, high-fidelity computational fluid dynamics solutions demonstrate that this assumption is false. In this thesis, the theory, assumptions, and limitations of the JeNo code are briefly discussed, and a modification to the acoustic analogy method is proposed in which the cross-correlation of the two primary noise sources is allowed to vary with the speed of the jet and the observer location. As a proof-of-concept implementation, an empirical correlation correction function is derived from comparisons between JeNo's noise predictions and a set of experimental measurements taken for the Air Force Aero-Propulsion Laboratory. The empirical correlation correction is then applied to JeNo's predictions for a separate data set of hot jets tested at NASA's Glenn Research Center. Metrics are derived to measure the qualitative and quantitative performance of JeNo's acoustic predictions, and the empirical correction is shown to provide a quantitative improvement in the noise prediction at low observer angles with no freestream flow, and a qualitative improvement in the presence of freestream flow. However, the results also demonstrate that there are underlying flaws in JeNo's ability to predict the behavior of a hot jet's acoustic signature at certain rear observer angles, and that this correlation correction is not able to correct these flaws.
Efforts are underway to transform regulatory toxicology and chemical safety assessment from a largely empirical science based on direct observation of apical toxicity outcomes in whole organism toxicity tests to a predictive one in which outcomes and risk are inferred from accumu...
Modeling NAPL dissolution from pendular rings in idealized porous media
NASA Astrophysics Data System (ADS)
Huang, Junqi; Christ, John A.; Goltz, Mark N.; Demond, Avery H.
2015-10-01
The dissolution rate of nonaqueous phase liquid (NAPL) often governs the remediation time frame at subsurface hazardous waste sites. Most formulations for estimating this rate are empirical and assume that the NAPL is the nonwetting fluid. However, field evidence suggests that some waste sites might be organic wet. Thus, formulations that assume the NAPL is nonwetting may be inappropriate for estimating the rates of NAPL dissolution. An exact solution to the Young-Laplace equation, assuming NAPL resides as pendular rings around the contact points of porous media idealized as spherical particles in a hexagonal close packing arrangement, is presented in this work to provide a theoretical prediction for NAPL-water interfacial area. This analytic expression for interfacial area is then coupled with an exact solution to the advection-diffusion equation in a capillary tube assuming Hagen-Poiseuille flow to provide a theoretical means of calculating the mass transfer rate coefficient for dissolution at the NAPL-water interface in an organic-wet system. A comparison of the predictions from this theoretical model with predictions from empirically derived formulations from the literature for water-wet systems showed a consistent range of values for the mass transfer rate coefficient, despite the significant differences in model foundations (water wetting versus NAPL wetting, theoretical versus empirical). This finding implies that, under these system conditions, the important parameter is interfacial area, with a lesser role played by NAPL configuration.
NASA Astrophysics Data System (ADS)
Chung, Jen-Kuang
2013-09-01
A stochastic method called random vibration theory (Boore, 1983) has been used to estimate the peak ground motions caused by shallow moderate-to-large earthquakes in the Taiwan area. Adopting Brune's ω-square source spectrum, attenuation models for PGA and PGV were derived from path-dependent parameters, which were empirically modeled from about one thousand accelerograms recorded at reference sites mostly located in mountain areas and recognized as rock sites without soil amplification. Consequently, the predicted horizontal peak ground motions at the reference sites are generally comparable to those observed. A total of 11,915 accelerograms recorded at 735 free-field stations of the Taiwan Strong Motion Network (TSMN) were used to estimate the site factors by taking the motions from the predictive models as references. Results from soil sites reveal site amplification factors of approximately 2.0 ~ 3.5 for PGA and about 1.3 ~ 2.6 for PGV. Finally, after amplitude corrections with these empirical site factors, about 75% of the analyzed earthquakes are well constrained in ground motion predictions, with average misfits ranging from 0.30 to 0.50. In addition, two simple indices, R0.57 and R0.38, are proposed in this study to evaluate the validity of intensity map prediction for public information reports. The average percentages of qualified stations with peak acceleration residuals less than R0.57 and R0.38 can reach 75% and 54%, respectively, for most earthquakes. Such performance would be good enough to produce a faithful intensity map for a moderate scenario event in the Taiwan region.
Coastal geomorphology through the looking glass
NASA Astrophysics Data System (ADS)
Sherman, Douglas J.; Bauer, Bernard O.
1993-07-01
Coastal geomorphology will gain future prominence as environmentally sound coastal zone management strategies, requiring scientific information, begin to supplant engineered shoreline stabilization schemes for amelioration of coastal hazards. We anticipate substantial change and progress over the next two decades, but we do not predict revolutionary advances in theoretical understanding of coastal geomorphic systems. Paradigm shifts will not occur; knowledge will advance incrementally. We offer predictions for specific coastal systems delineated according to scale. For the surf zone, we predict advances in wave shoaling theory, but not for wave breaking. We also predict greater understanding of turbulent processes, and substantive improvements in surf-zone circulation and radiation stress models. Very few of these improvements are expected to be incorporated in geomorphic models of coastal processes. We do not envision improvements in the theory of sediment transport, although some new and exciting empirical observations are probable. At the beach and nearshore scale, we predict the development of theoretically-based, two- and three-dimensional morphodynamical models that account for non-linear, time-dependent feedback processes using empirically calibrated modules. Most of the geomorphic research effort, however, will be concentrated at the scale of littoral cells. This scale is appropriate for coastal zone management because processes at this scale are manageable using traditional geomorphic techniques. At the largest scale, little advance will occur in our understanding of how coastlines evolve. Any empirical knowledge that is gained will accrue indirectly. Finally, we contend that anthropogenic influences, directly and indirectly, will be powerful forces in steering the future of Coastal Geomorphology. "If you should suddenly feel the need for a lesson in humility, try forecasting the future…" (Kleppner, 1991, p. 10).
Sorption and reemission of formaldehyde by gypsum wallboard. Report for June 1990-August 1992
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, J.C.S.
1993-01-01
The paper gives results of an analysis of the sorption and desorption of formaldehyde by unpainted wallboard, using a mass transfer model based on the Langmuir sorption isotherm. The sorption and desorption rate constants are determined by short-term experimental data. Long-term sorption and desorption curves are developed by the mass transfer model without any adjustable parameters. Compared with other empirically developed models, the mass transfer model has more extensive applicability and provides an elucidation of the sorption and desorption mechanism that empirical models cannot. The mass transfer model is also more feasible and accurate than empirical models for applications such as scale-up and exposure assessment. For a typical indoor environment, the model predicts that gypsum wallboard is a much stronger sink for formaldehyde than for other indoor air pollutants such as tetrachloroethylene and ethylbenzene. The strong sink effects are reflected by the high equilibrium capacity and slow decay of the desorption curve.
Overview of physical models of liquid entrainment in annular gas-liquid flow
NASA Astrophysics Data System (ADS)
Cherdantsev, Andrey V.
2018-03-01
A number of recent papers devoted to the development of physically-based models for prediction of liquid entrainment in the annular regime of two-phase flow are analyzed. In these models, shearing-off of the crests of disturbance waves by the gas drag force is supposed to be the physical mechanism of the entrainment phenomenon. The models are based on a number of assumptions about the wavy structure, including inception of disturbance waves due to Kelvin-Helmholtz instability, a linear velocity profile inside the liquid film and a high degree of three-dimensionality of the disturbance waves. The validity of these assumptions is analyzed by comparison to modern experimental observations. It is shown that nearly every assumption is in strong qualitative and quantitative disagreement with experiments, which leads to massive discrepancies between the modeled and real properties of the disturbance waves. As a result, such models over-predict the entrained fraction by several orders of magnitude. The discrepancy is usually reduced using various kinds of empirical corrections. This, combined with the empiricism already included in the models, turns the models into another kind of empirical correlation rather than physically-based models.
Regional Models for Sediment Toxicity Assessment
This paper investigates the use of empirical models to predict the toxicity of sediment samples within a region to laboratory test organisms based on sediment chemistry. In earlier work, we used a large nationwide database of matching sediment chemistry and marine amphipod sedim...
Jet Aeroacoustics: Noise Generation Mechanism and Prediction
NASA Technical Reports Server (NTRS)
Tam, Christopher
1998-01-01
This report covers the third-year research effort of the project. The research focussed on the fine-scale mixing noise of both subsonic and supersonic jets and the effects of nozzle geometry and tabs on subsonic jet noise. In publication 1, a new semi-empirical theory of jet mixing noise from fine-scale turbulence is developed. By an analogy to gas kinetic theory, it is shown that the source of noise is related to the time fluctuations of the turbulence kinetic energy. Starting with the Reynolds-averaged Navier-Stokes equations, a formula for the radiated noise is derived. An empirical model of the space-time correlation function of the turbulence kinetic energy is adopted. The form of the model is in good agreement with the space-time two-point velocity correlation function measured by Davies and coworkers. The parameters of the correlation are related to the parameters of the k-epsilon turbulence model; thus the theory is self-contained. Extensive comparisons between the computed noise spectra of the theory and experimental measurements have been carried out. The parameters include jet Mach numbers from 0.3 to 2.0 and temperature ratios from 1.0 to 4.8. Excellent agreement is found in spectrum shape, noise intensity and directivity. It is envisaged that the theory would supersede all semi-empirical and totally empirical jet noise prediction methods in current use.
Zerara, Mohamed; Brickmann, Jürgen; Kretschmer, Robert; Exner, Thomas E
2009-02-01
Quantitative information on solvation and transfer free energies is often needed for the understanding of many physicochemical processes, e.g., molecular recognition phenomena, transport and diffusion processes through biological membranes, and the tertiary structure of proteins. Recently, a concept for the localization and quantification of hydrophobicity has been introduced (Jäger et al. J Chem Inf Comput Sci 43:237-247, 2003). This model is based on the assumption that the overall hydrophobicity can be obtained as a superposition of fragment contributions. To date, all predictive models for logP have been parameterized for the n-octanol/water solvent system (logP(oct)), while very few models, with poor predictive abilities, are available for other solvents. In this work, we propose a parameterization of an empirical model for the n-octanol/water, alkane/water (logP(alk)) and cyclohexane/water (logP(cyc)) systems. Comparison of both logP(alk) and logP(cyc) with the logarithms of brain/blood ratios (logBB) for a set of structurally diverse compounds revealed a high correlation, showing their superiority over the logP(oct) measure in this context.
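A minimal sketch of the fragment-superposition assumption underlying such models; the fragment set and contribution values below are hypothetical placeholders, not the fitted parameters of the cited work:

```python
# Hypothetical per-fragment logP contributions (illustrative values only).
FRAGMENT_LOGP = {"CH3": 0.55, "CH2": 0.49, "OH": -1.12, "C6H5": 1.90}

def estimate_logp(fragment_counts):
    """Overall logP as a superposition of fragment contributions."""
    return sum(FRAGMENT_LOGP[f] * n for f, n in fragment_counts.items())

# e.g. a crude decomposition of 1-propanol: CH3-CH2-CH2-OH
print(estimate_logp({"CH3": 1, "CH2": 2, "OH": 1}))  # -> 0.41 (illustrative)
```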
NASA Astrophysics Data System (ADS)
Gastis, P.; Perdikakis, G.; Robertson, D.; Almus, R.; Anderson, T.; Bauder, W.; Collon, P.; Lu, W.; Ostdiek, K.; Skulski, M.
2016-04-01
Equilibrium charge state distributions of stable 60Ni, 59Co, and 63Cu beams passing through a 1 μm thick Mo foil were measured at beam energies of 1.84 MeV/u, 2.09 MeV/u, and 2.11 MeV/u, respectively. A 1-D position-sensitive Parallel Grid Avalanche Counter (PGAC) detector was used at the exit of a spectrograph magnet, enabling us to measure the intensity of several charge states simultaneously. The number of charge states measured for each beam constituted more than 99% of the total equilibrium charge state distribution for that element. Currently, little experimental data exist for equilibrium charge state distributions of heavy ions with 19 ≲ Zp, Zt ≲ 54 (Zp and Zt are the projectile's and target's atomic numbers, respectively). Hence the success of semi-empirical models in predicting typical characteristics of equilibrium CSDs (mean charge states and distribution widths) has not been thoroughly tested in the energy region of interest. A number of semi-empirical models from the literature were evaluated in this study regarding their ability to reproduce the characteristics of the measured charge state distributions. The evaluated models were selected from the literature based on whether they are suitable for the given range of atomic numbers and on their frequent use by the nuclear physics community. Finally, an attempt was made to combine model predictions for the mean charge state, the distribution width and the distribution shape to arrive at a more reliable model. We discuss this new "combinatorial" prescription and compare its results with our experimental data and with calculations using the other semi-empirical models studied in this work.
Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Clifford W.; Martin, Curtis E.
2015-08-01
We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
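The sampling scheme described above can be sketched generically (assumed interfaces and toy stage models, not the report's code): each stage's prediction is perturbed by a residual drawn from that model's empirical error distribution, and the chain is run many times:

```python
import numpy as np

rng = np.random.default_rng(42)

def propagate(x, models, residuals, n_samples=1000):
    """models: list of callables; residuals: list of arrays of past errors."""
    outputs = []
    for _ in range(n_samples):
        value = x
        for model, res in zip(models, residuals):
            value = model(value) + rng.choice(res)  # empirical bootstrap
        outputs.append(value)
    return np.asarray(outputs)

# Two-stage toy chain (irradiance -> DC power -> AC power); the gains and
# residual spreads are placeholders, not fitted values.
models = [lambda g: 0.18 * g, lambda p_dc: 0.96 * p_dc]
residuals = [rng.normal(0, 2.0, 500), rng.normal(0, 1.0, 500)]
samples = propagate(1000.0, models, residuals)
print(f"AC power: {samples.mean():.1f} +/- {samples.std():.1f}")
```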
NASA Astrophysics Data System (ADS)
Paouris, Evangelos; Mavromichalaki, Helen
2017-12-01
In a previous work (Paouris and Mavromichalaki in Solar Phys. 292, 30, 2017), we presented a total of 266 interplanetary coronal mass ejections (ICMEs) with as much information as possible. From this analysis we developed a new empirical model for estimating the acceleration of these events in the interplanetary medium. In this work, we present a new approach to the effective acceleration model (EAM) for predicting the arrival time of the shock that precedes a CME, using data for a total of 214 ICMEs. For the first time, the projection effects of the linear speed of CMEs are taken into account in this empirical model, which significantly improves the prediction of the arrival time of the shock. In particular, the mean value of the time difference between the observed time of the shock and the predicted time was equal to +3.03 hours with a mean absolute error (MAE) of 18.58 hours and a root mean squared error (RMSE) of 22.47 hours. After the improvement of this model, the mean value of the time difference decreased to -0.28 hours with an MAE of 17.65 hours and an RMSE of 21.55 hours. This improved version was applied to a set of three recent Earth-directed CMEs reported in May, June, and July of 2017, and we compare our results with the values predicted by other related models.
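The quoted skill metrics are straightforward to reproduce for any set of arrival-time differences. A short sketch with synthetic stand-in values (not the paper's 214-event sample):

```python
import numpy as np

# Arrival-time differences (observed minus predicted, hours); synthetic.
dt = np.array([-5.2, 12.1, -20.4, 3.3, 31.0, -8.7])

bias = dt.mean()                  # signed mean time difference
mae = np.abs(dt).mean()           # mean absolute error
rmse = np.sqrt((dt ** 2).mean())  # root mean squared error
print(f"bias {bias:+.2f} h, MAE {mae:.2f} h, RMSE {rmse:.2f} h")
```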
Consumer-mediated recycling and cascading trophic interactions.
Leroux, Shawn J; Loreau, Michel
2010-07-01
Cascading trophic interactions mediated by consumers are complex phenomena, which encompass many direct and indirect effects. Nonetheless, most experiments and theory on the topic focus uniquely on the indirect, positive effects of predators on producers via regulation of herbivores. Empirical research in aquatic ecosystems, however, demonstrates that the indirect, positive effects of consumer-mediated recycling on primary producer stocks may be larger than the effects of herbivore regulation, particularly when predators have access to alternative prey. We derive an ecosystem model with both recipient- and donor-controlled trophic relationships to test the conditions of four hypotheses generated from recent empirical work on the role of consumer-mediated recycling in cascading trophic interactions. Our model predicts that predator regulation of herbivores will have larger, positive effects on producers than consumer-mediated recycling in most cases but that consumer-mediated recycling does generally have a positive effect on producer stocks. We demonstrate that herbivore recycling will have larger effects on producer biomass than predator recycling when turnover rates and recycling efficiencies are high and predators prefer local prey. In addition, predictions suggest that consumer-mediated recycling has the largest effects on primary producers when predators prefer allochthonous prey and predator attack rates are high. Finally, our model predicts that consumer-mediated recycling effects may not be largest when external nutrient loading is low. Our model predictions highlight predator and prey feeding relationships, turnover rates, and external nutrient loading rates as key determinants of the strength of cascading trophic interactions. We show that existing hypotheses from specific empirical systems do not occur under all conditions, which further exacerbates the need to consider a broad suite of mechanisms when investigating trophic cascades.
Tredennick, Andrew T.; Bentley, Lisa Patrick; Hanan, Niall P.
2013-01-01
Theoretical models of allometric scaling provide frameworks for understanding and predicting how and why the morphology and function of organisms vary with scale. It remains unclear, however, if the predictions of ‘universal’ scaling models for vascular plants hold across diverse species in variable environments. Phenomena such as competition and disturbance may drive allometric scaling relationships away from theoretical predictions based on an optimized tree. Here, we use a hierarchical Bayesian approach to calculate tree-specific, species-specific, and ‘global’ (i.e. interspecific) scaling exponents for several allometric relationships using tree- and branch-level data harvested from three savanna sites across a rainfall gradient in Mali, West Africa. We use these exponents to provide a rigorous test of three plant scaling models (Metabolic Scaling Theory (MST), Geometric Similarity, and Stress Similarity) in savanna systems. For the allometric relationships we evaluated (diameter vs. length, aboveground mass, stem mass, and leaf mass) the empirically calculated exponents broadly overlapped among species from diverse environments, except for the scaling exponents for length, which increased with tree cover and density. When we compare empirical scaling exponents to the theoretical predictions from the three models we find MST predictions are most consistent with our observed allometries. In those situations where observations are inconsistent with MST we find that departure from theory corresponds with expected tradeoffs related to disturbance and competitive interactions. We hypothesize savanna trees have greater length-scaling exponents than predicted by MST due to an evolutionary tradeoff between fire escape and optimization of mechanical stability and internal resource transport. Future research on the drivers of systematic allometric variation could reconcile the differences between observed scaling relationships in variable ecosystems and those predicted by ideal models such as MST. PMID:23484003
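For a single relationship, the basic estimation step reduces to a log-log regression; a deliberately simplified sketch (the study itself used a hierarchical Bayesian model, and the data below are synthetic):

```python
import numpy as np

# Synthetic diameter-mass pairs following an assumed power law with noise.
rng = np.random.default_rng(7)
diameter = rng.uniform(1.0, 30.0, 200)                        # cm
mass = 0.1 * diameter ** (8 / 3) * rng.lognormal(0, 0.2, 200)

# Ordinary least squares in log-log space: the slope is the scaling exponent.
b, log_a = np.polyfit(np.log(diameter), np.log(mass), 1)
print(f"estimated exponent b = {b:.2f}; MST predicts 8/3 for mass vs diameter")
```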
A study of a diffusive model of asset returns and an empirical analysis of financial markets
NASA Astrophysics Data System (ADS)
Alejandro Quinones, Angel Luis
A diffusive model for market dynamics is studied and the predictions of the model are compared to real financial markets. The model has a non-constant diffusion coefficient which depends both on the asset value and the time. A general solution for the distribution of returns is obtained and shown to match the results of computer simulations for two simple cases, piecewise linear and quadratic diffusion. The effects of discreteness in the market dynamics on the model are also studied. For the quadratic diffusion case, a type of phase transition leading to fat tails is observed as the discrete distribution approaches the continuum limit. It is also found that the model captures some of the empirical stylized facts observed in real markets, including fat-tails and scaling behavior in the distribution of returns. An analysis of empirical data for the EUR/USD currency exchange rate and the S&P 500 index is performed. Both markets show time scaling behavior consistent with a value of 1/2 for the Hurst exponent. Finally, the results show that the distribution of returns for the two markets is well fitted by the model, and the corresponding empirical diffusion coefficients are determined.
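A hedged sketch of such a simulation (the thesis' exact functional form and parameters are not reproduced): an Euler-Maruyama integration of a diffusion whose coefficient grows quadratically with the asset value, which tends to produce fat-tailed returns:

```python
import numpy as np

# Integrate dx = sqrt(2*D(x)) dW with quadratic diffusion D(x) = d0 + d2*x**2
# (the coefficients are placeholders, not the thesis' values).
rng = np.random.default_rng(3)

def simulate(x0=0.0, dt=1e-3, n=10_000, d0=0.05, d2=0.1):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        sigma = np.sqrt(2.0 * (d0 + d2 * x[i - 1] ** 2))
        x[i] = x[i - 1] + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

r = np.diff(simulate())
kurt = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3.0
print(f"excess kurtosis of simulated returns: {kurt:.2f}")  # > 0: fat tails
```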
Comparison of ground motions from hybrid simulations to nga prediction equations
Star, L.M.; Stewart, J.P.; Graves, R.W.
2011-01-01
We compare simulated motions for a Mw 7.8 rupture scenario on the San Andreas Fault known as the ShakeOut event, two permutations with different hypocenter locations, and a Mw 7.15 Puente Hills blind thrust scenario, to median and dispersion predictions from empirical NGA ground motion prediction equations. We find the simulated motions attenuate faster with distance than is predicted by the NGA models for periods less than about 5.0 s. After removing this distance attenuation bias, the average residuals of the simulated events (i.e., event terms) are generally within the scatter of empirical event terms, although the ShakeOut simulation appears to be a high static stress drop event. The intraevent dispersion in the simulations is lower than NGA values at short periods and abruptly increases at 1.0 s due to different simulation procedures at short and long periods. The simulated motions have a depth-dependent basin response similar to the NGA models, and also show complex effects in which stronger basin response occurs when the fault rupture transmits energy into a basin at low angle, which is not predicted by the NGA models. Rupture directivity effects are found to scale with the isochrone parameter. © 2011, Earthquake Engineering Research Institute.
Left-right leaf asymmetry in decussate and distichous phyllotactic systems.
Martinez, Ciera C; Chitwood, Daniel H; Smith, Richard S; Sinha, Neelima R
2016-12-19
Leaves in plants with spiral phyllotaxy exhibit directional asymmetries, such that all the leaves originating from a meristem of a particular chirality are similarly asymmetric relative to each other. Models of auxin flux capable of recapitulating spiral phyllotaxis predict handed auxin asymmetries in initiating leaf primordia with empirically verifiable effects on superficially bilaterally symmetric leaves. Here, we extend a similar analysis of leaf asymmetry to decussate and distichous phyllotaxy. We found that our simulation models of these two patterns predicted mirrored asymmetries in auxin distribution in leaf primordia pairs. To empirically verify the morphological consequences of asymmetric auxin distribution, we analysed the morphology of a tomato sister-of-pin-formed1a (sopin1a) mutant, entire-2, in which spiral phyllotaxy consistently transitions to a decussate state. Shifts in the displacement of leaflets on the left and right sides of entire-2 leaf pairs mirror each other, corroborating predicted model results. We then analyse the shape of more than 800 common ivy (Hedera helix) and more than 3000 grapevine (Vitis and Ampelopsis spp.) leaf pairs and find statistical enrichment of predicted mirrored asymmetries. Our results demonstrate that left-right auxin asymmetries in models of decussate and distichous phyllotaxy successfully predict mirrored asymmetric leaf morphologies in superficially symmetric leaves. This article is part of the themed issue 'Provocative questions in left-right asymmetry'. © 2016 The Author(s).
A comprehensive mechanistic model for upward two-phase flow in wellbores
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sylvester, N.D.; Sarica, C.; Shoham, O.
1994-05-01
A comprehensive model is formulated to predict the flow behavior for upward two-phase flow. This model is composed of a model for flow-pattern prediction and a set of independent mechanistic models for predicting such flow characteristics as holdup and pressure drop in bubble, slug, and annular flow. The comprehensive model is evaluated by using a well data bank made up of 1,712 well cases covering a wide variety of field data. Model performance is also compared with six commonly used empirical correlations and the Hasan-Kabir mechanistic model. Overall model performance is in good agreement with the data. In comparison with other methods, the comprehensive model performed the best.
Assessing uncertainty in mechanistic models
Edwin J. Green; David W. MacFarlane; Harry T. Valentine
2000-01-01
Concern over potential global change has led to increased interest in the use of mechanistic models for predicting forest growth. The rationale for this interest is that empirical models may be of limited usefulness if environmental conditions change. Intuitively, we expect that mechanistic models, grounded as far as possible in an understanding of the biology of tree...
The HEXACO and Five-Factor Models of Personality in Relation to RIASEC Vocational Interests
ERIC Educational Resources Information Center
McKay, Derek A.; Tokar, David M.
2012-01-01
The current study extended the empirical research on the overlap of vocational interests and personality by (a) testing hypothesized relations between RIASEC interests and the personality dimensions of the HEXACO model, and (b) exploring the HEXACO personality model's predictive advantage over the five-factor model (FFM) in capturing RIASEC…
NASA Astrophysics Data System (ADS)
Abbod, M. F.; Sellars, C. M.; Cizek, P.; Linkens, D. A.; Mahfouf, M.
2007-10-01
The present work describes a hybrid modeling approach developed for predicting the flow behavior, recrystallization characteristics, and crystallographic texture evolution in a Fe-30 wt pct Ni austenitic model alloy subjected to hot plane strain compression. A series of compression tests were performed at temperatures between 850 °C and 1050 °C and strain rates between 0.1 and 10 s⁻¹. The evolution of grain structure, crystallographic texture, and dislocation substructure was characterized in detail for a deformation temperature of 950 °C and strain rates of 0.1 and 10 s⁻¹, using electron backscatter diffraction and transmission electron microscopy. The hybrid modeling method utilizes a combination of empirical, physically-based, and neuro-fuzzy models. The flow stress is described as a function of the applied variables of strain rate and temperature using an empirical model. The recrystallization behavior is predicted from the measured microstructural state variables of internal dislocation density, subgrain size, and misorientation between subgrains using a physically-based model. The texture evolution is modeled using artificial neural networks.
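For the empirical flow-stress component, a widely used form for hot deformation is the Zener-Hollomon/Sellars-Tegart relation; the sketch below uses placeholder constants, not the values fitted for the Fe-30 wt pct Ni alloy in this study:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def flow_stress(strain_rate, T_kelvin, Q=400e3, A=1e14, alpha=0.012, n=5.0):
    """Sellars-Tegart form: sigma = (1/alpha)*asinh((Z/A)**(1/n)), in MPa.
    Q, A, alpha, n are placeholder material constants."""
    Z = strain_rate * np.exp(Q / (R * T_kelvin))  # Zener-Hollomon parameter
    return (1.0 / alpha) * np.arcsinh((Z / A) ** (1.0 / n))

for rate in (0.1, 10.0):  # the two strain rates examined in the study
    print(f"strain rate {rate} 1/s at 950 C: {flow_stress(rate, 1223.15):.0f} MPa")
```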
The 2 × 2 Standpoints Model of Achievement Goals
Korn, Rachel M.; Elliot, Andrew J.
2016-01-01
In the present research, we proposed and tested a 2 × 2 standpoints model of achievement goals grounded in the development-demonstration and approach-avoidance distinctions. Three empirical studies are presented. Study 1 provided evidence supporting the structure and psychometric properties of a newly developed measure of the goals of the 2 × 2 standpoints model. Study 2 documented the predictive utility of these goal constructs for intrinsic motivation: development-approach and development-avoidance goals were positive predictors, and demonstration-avoidance goals were a negative predictor of intrinsic motivation. Study 3 documented the predictive utility of these goal constructs for performance attainment: Demonstration-approach goals were a positive predictor and demonstration-avoidance goals were a negative predictor of exam performance. The conceptual and empirical contributions of the present research were discussed within the broader context of existing achievement goal theory and research. PMID:27242641
Modelled drift patterns of fish larvae link coastal morphology to seabird colony distribution.
Sandvik, Hanno; Barrett, Robert T; Erikstad, Kjell Einar; Myksvoll, Mari S; Vikebø, Frode; Yoccoz, Nigel G; Anker-Nilssen, Tycho; Lorentsen, Svein-Håkon; Reiertsen, Tone K; Skarðhamar, Jofrid; Skern-Mauritzen, Mette; Systad, Geir Helge
2016-05-13
Colonial breeding is an evolutionary puzzle, as the benefits of breeding in high densities are still not fully explained. Although the dynamics of existing colonies are increasingly understood, few studies have addressed the initial formation of colonies, and empirical tests are rare. Using a high-resolution larval drift model, we here document that the distribution of seabird colonies along the Norwegian coast can be explained by variations in the availability and predictability of fish larvae. The modelled variability in concentration of fish larvae is, in turn, predicted by the topography of the continental shelf and coastline. The advection of fish larvae along the coast translates small-scale topographic characteristics into a macroecological pattern, viz. the spatial distribution of top-predator breeding sites. Our findings provide empirical corroboration of the hypothesis that seabird colonies are founded in locations that minimize travel distances between breeding and foraging locations, thereby enabling optimal foraging by central-place foragers.
Drugs and Crime: An Empirically Based, Interdisciplinary Model
ERIC Educational Resources Information Center
Quinn, James F.; Sneed, Zach
2008-01-01
This article synthesizes neuroscience findings with long-standing criminological models and data into a comprehensive explanation of the relationship between drug use and crime. The innate factors that make some people vulnerable to drug use are conceptually similar to those that predict criminality, supporting a spurious reciprocal model of the…
NASA Astrophysics Data System (ADS)
Reyer, D.; Philipp, S. L.
2014-09-01
Information about geomechanical and physical rock properties, particularly uniaxial compressive strength (UCS), is needed for geomechanical model development and updating with logging-while-drilling methods to minimise the costs and risks of the drilling process. The following parameters, of importance at different stages of geothermal exploitation and drilling, are presented for typical sedimentary and volcanic rocks of the Northwest German Basin (NWGB): physical parameters (P wave velocities, porosity, and bulk and grain density) and geomechanical parameters (UCS, static Young's modulus, destruction work and indirect tensile strength, both perpendicular and parallel to bedding) for 35 rock samples from quarries and 14 core samples of sandstones and carbonate rocks. With regression analyses (linear and non-linear), empirical relations are developed to predict UCS values from all other parameters. Analyses focus on sedimentary rocks and were repeated separately for clastic or carbonate rock samples as well as for outcrop or core samples. The empirical relations have high statistical significance for Young's modulus, tensile strength and destruction work; for the physical properties, there is a wider scatter of data and prediction of UCS is less precise. For most relations, properties of core samples plot within the scatter of outcrop samples and lie within the 90% prediction bands of the developed regression functions. The results indicate the applicability of empirical relations based on outcrop data to questions related to drilling operations when the database contains a sufficient number of samples with varying rock properties. The presented equations may help to predict UCS values for sedimentary rocks at depth, and thus to develop suitable geomechanical models for adapting the drilling strategy to rock mechanical conditions in the NWGB.
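A minimal sketch of one regression from the class described above (synthetic data; the paper's fitted coefficients for NWGB rocks are not reproduced): a non-linear power-law fit of UCS against P-wave velocity:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic velocity/UCS data for 49 samples (35 outcrop + 14 core, matching
# the study's sample count); the generating coefficients are assumptions.
rng = np.random.default_rng(5)
vp = rng.uniform(2.0, 5.5, 49)                        # km/s
ucs = 8.0 * vp ** 2.1 * rng.lognormal(0.0, 0.15, 49)  # MPa

def power_law(v, a, b):
    return a * v ** b

(a, b), _ = curve_fit(power_law, vp, ucs, p0=(10.0, 2.0))
print(f"UCS = {a:.1f} * Vp^{b:.2f} (UCS in MPa, Vp in km/s)")
```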
Empirical testing of an analytical model predicting electrical isolation of photovoltaic modules
NASA Astrophysics Data System (ADS)
Garcia, A., III; Minning, C. P.; Cuddihy, E. F.
A major design requirement for photovoltaic modules is that the encapsulation system be capable of withstanding large DC potentials without electrical breakdown. Presented is a simple analytical model which can be used to estimate material thickness to meet this requirement for a candidate encapsulation system or to predict the breakdown voltage of an existing module design. A series of electrical tests to verify the model are described in detail. The results of these verification tests confirmed the utility of the analytical model for preliminary design of photovoltaic modules.
Are relationships between pollen-ovule ratio and pollen and seed size explained by sex allocation?
Burd, Martin
2011-10-01
Positive correlations between pollen-ovule ratio and seed size, and negative correlations between pollen-ovule ratio and pollen grain size have been noted frequently in a wide variety of angiosperm taxa. These relationships are commonly explained as a consequence of sex allocation on the basis of a simple model proposed by Charnov. Indeed, the theoretical expectation from the model has been the basis for interest in the empirical pattern. However, the predicted relationship is a necessary consequence of the mathematics of the model, which therefore has little explanatory power, even though its predictions are consistent with empirical results. The evolution of pollen-ovule ratios is likely to depend on selective factors affecting mating system, pollen presentation and dispensing, patterns of pollen receipt, pollen tube competition, female mate choice through embryo abortion, as well as genetic covariances among pollen, ovule, and seed size and other reproductive traits. To the extent the empirical correlations involving pollen-ovule ratios are interesting, they will need explanation in terms of a suite of selective factors. They are not explained simply by sex allocation trade-offs. © 2011 The Author(s). Evolution © 2011 The Society for the Study of Evolution.
Sensitivity analysis for simulating pesticide impacts on honey bee colonies
Regulatory agencies assess risks to honey bees from pesticides through a tiered process that includes predictive modeling with empirical toxicity and chemical data of pesticides as a line of evidence. We evaluate the Varroapop colony model, proposed by...
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values, that is, values of the physical and chemical constants that govern reactivity. Although empirical structure activity relationships have been developed t...
NASA Astrophysics Data System (ADS)
Xu, M., III; Liu, X.
2017-12-01
In the past 60 years, both the runoff and the sediment load in the Yellow River Basin have shown significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g. precipitation, sediment-trapping dams, pasture, terraces) on runoff and sediment load is among the key issues for guiding the implementation of water and soil conservation measures and for predicting future trends. Hundreds of methods have been developed for studying runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically-based models. The empirical methods, including the hydrological method, the soil and water conservation method, etc., are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method, used extensively in hydrological research, can also be classified as empirical, since it can be deduced mathematically to be equivalent to the hydrological method. Physically-based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g. the SYMHD model) and can be regarded as a transition between empirical and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the simulated runoff and sediment loads from distributed models (e.g. the Digital Yellow Integrated Model, the Geomorphology-Based Hydrological Model) have usually been less satisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical methods. In addition, we put forward an assessment framework for methods of studying runoff and sediment load variations in the Yellow River Basin, covering input data, model structure and output; this framework was then applied to the Huangfuchuan River.
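As an illustration of the elasticity method mentioned above (a common empirical attribution approach; the numbers below are assumed, not basin data):

```python
# Elasticity e = (dQ/Q)/(dP/P); the runoff change attributed to climate is
# dQ_climate = e*(dP/P)*Q1, and the remainder is attributed to human activity.
P1, P2 = 480.0, 440.0  # mean annual precipitation (mm), two periods (assumed)
Q1, Q2 = 60.0, 38.0    # mean annual runoff depth (mm), two periods (assumed)
e = 2.0                # assumed precipitation elasticity of runoff

dQ_climate = e * ((P2 - P1) / P1) * Q1
dQ_total = Q2 - Q1
dQ_human = dQ_total - dQ_climate
print(f"climate: {dQ_climate:+.1f} mm, human: {dQ_human:+.1f} mm "
      f"of {dQ_total:+.1f} mm total change")
```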
Empirical algorithms for ocean optics parameters
NASA Astrophysics Data System (ADS)
Smart, Jeffrey H.
2007-06-01
As part of the Worldwide Ocean Optics Database (WOOD) Project, The Johns Hopkins University Applied Physics Laboratory has developed and evaluated a variety of empirical models that can predict ocean optical properties, such as profiles of the beam attenuation coefficient computed from profiles of the diffuse attenuation coefficient. In this paper, we briefly summarize published empirical optical algorithms and assess their accuracy for estimating derived profiles. We also provide new algorithms and discuss their applicability for deriving optical profiles based on data collected from a variety of locations, including the Yellow Sea, the Sea of Japan, and the North Atlantic Ocean. We show that the scattering coefficient (b) can be computed from the beam attenuation coefficient (c) to about 10% accuracy. The availability of such relatively accurate predictions is important in the many situations where the set of data is incomplete.
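A hedged sketch of the kind of empirical algorithm evaluated here: a linear fit predicting the scattering coefficient b from the beam attenuation coefficient c, checked against a relative-error criterion like the ~10% figure quoted above; the profile data are synthetic stand-ins:

```python
import numpy as np

# Synthetic paired profiles of beam attenuation c and scattering b (1/m).
rng = np.random.default_rng(11)
c = rng.uniform(0.1, 1.0, 100)
b = 0.8 * c - 0.02 + rng.normal(0, 0.01, 100)

# Fit the empirical relation b = slope*c + intercept and check its accuracy.
slope, intercept = np.polyfit(c, b, 1)
rel_err = np.abs(slope * c + intercept - b) / b
print(f"b = {slope:.2f}*c + ({intercept:.3f}); "
      f"median relative error {100 * np.median(rel_err):.1f}%")
```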
Data analysis unveils a new stylized fact in foreign currency markets
NASA Astrophysics Data System (ADS)
Nacher, J. C.; Ochiai, T.
2012-09-01
The search for stylized facts (i.e., simplified empirical facts) is of capital importance in econophysics because the stylized facts constitute the experimental empirical body on which theories and models should be tested. At the moment they are too few, and this is an important limitation to progress in the field. In this work, we unveil a new stylized fact, consisting of a resistance effect and a breaking-acceleration effect, which implicitly requires a long-memory feature in price movement. By analyzing a vast amount of historical data, we demonstrate that the financial market tends to exceed a past (historical) extreme price less often than expected from a classic short-memory model (e.g., the Black-Scholes model). We call this the resistance effect. However, when the market does exceed such an extreme, we predict that the average volatility at that point will be much higher: on average, volatility accelerates when the price breaks its highest (lowest) past value. We refer to this as the breaking-acceleration effect. These observed empirical facts may arise from technical trading and psychological effects. Taken together, these results indicate that, beyond the predictive capability of this newly unveiled stylized fact, traditional short-memory models do not faithfully capture the market dynamics.
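A sketch of how the resistance effect could be tested (the authors' exact statistic is not reproduced): count the rate of new running maxima in a price series and compare it with a short-memory Gaussian random walk benchmark:

```python
import numpy as np

rng = np.random.default_rng(9)

def record_rate(x):
    """Fraction of steps at which x exceeds all previous values."""
    running_max = np.maximum.accumulate(x)
    return np.mean(x[1:] > running_max[:-1])

walk = np.cumsum(rng.standard_normal(100_000))  # short-memory benchmark
print(f"record rate, random walk: {record_rate(walk):.4f}")
# The resistance effect would show a lower record rate in empirical prices
# than in the benchmark; the breaking-acceleration effect would show higher
# volatility immediately after each record-breaking step.
```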
Validation of a 20-year forecast of US childhood lead poisoning: Updated prospects for 2010.
Jacobs, David E; Nevin, Rick
2006-11-01
We forecast childhood lead poisoning and residential lead paint hazard prevalence for 1990-2010, based on a previously unvalidated model that combines national blood lead data with three different housing data sets. The housing data sets, which describe trends in housing demolition, rehabilitation, window replacement, and lead paint, are the American Housing Survey, the Residential Energy Consumption Survey, and the National Lead Paint Survey. Blood lead data are principally from the National Health and Nutrition Examination Survey. New data now make it possible to validate the midpoint of the forecast time period. For the year 2000, the model predicted 23.3 million pre-1960 housing units with lead paint hazards, compared to an empirical HUD estimate of 20.6 million units. Further, the model predicted 498,000 children with elevated blood lead levels (EBL) in 2000, compared to a CDC empirical estimate of 434,000. The model predictions were well within 95% confidence intervals of empirical estimates for both residential lead paint hazard and blood lead outcome measures. The model shows that window replacement explains a large part of the dramatic reduction in lead poisoning that occurred from 1990 to 2000. Here, the construction of the model is described and updated through 2010 using new data. Further declines in childhood lead poisoning are achievable, but the goal of eliminating children's blood lead levels ≥10 µg/dL by 2010 is unlikely to be achieved without additional action. A window replacement policy will yield multiple benefits of lead poisoning prevention, increased home energy efficiency, decreased power plant emissions, improved housing affordability, and other previously unrecognized benefits. Finally, combining housing and health data could be applied to forecasting other housing-related diseases and injuries.
NASA Astrophysics Data System (ADS)
Welling, D. T.; Manchester, W.; Savani, N.; Sokolov, I.; van der Holst, B.; Jin, M.; Toth, G.; Liemohn, M. W.; Gombosi, T. I.
2017-12-01
The future of space weather prediction depends on the community's ability to predict L1 values from observations of the solar atmosphere, which can yield hours of lead time. While both empirical and physics-based L1 forecast methods exist, it is not yet known if this nascent capability can translate to skilled dB/dt forecasts at the Earth's surface. This paper shows results for the first forecast-quality, solar-atmosphere-to-Earth's-surface dB/dt predictions. Two methods are used to predict solar wind and IMF conditions at L1 for several real-world coronal mass ejection events. The first method is an empirical and observationally based system to estimate the plasma characteristics. The magnetic field predictions are based on the Bz4Cast system, which assumes that the CME has a cylindrical flux rope geometry locally around Earth's trajectory. The remaining plasma parameters of density, temperature and velocity are estimated from white-light coronagraphs via a variety of triangulation methods and forward-based modelling. The second is a first-principles-based approach that combines the Eruptive Event Generator using Gibson-Low configuration (EEGGL) model with the Alfvén Wave Solar Model (AWSoM). EEGGL specifies parameters for the Gibson-Low flux rope such that it erupts, driving a CME in the coronal model that reproduces coronagraph observations and propagates to 1 AU. The resulting solar wind predictions are used to drive the operational Space Weather Modeling Framework (SWMF) for geospace. Following the configuration used by NOAA's Space Weather Prediction Center, this setup couples the BATS-R-US global magnetohydrodynamic model to the Rice Convection Model (RCM) ring current model and a height-integrated ionosphere electrodynamics model. The long lead time predictions of dB/dt are compared to model results that are driven by L1 solar wind observations. Both are compared to real-world observations from surface magnetometers at a variety of geomagnetic latitudes. Metrics are calculated to examine how the simulated solar wind drivers impact forecast skill. These results illustrate the current state of long-lead-time forecasting and the promise of this technology for operational use.
An Empirical Non-TNT Approach to Launch Vehicle Explosion Modeling
NASA Technical Reports Server (NTRS)
Blackwood, James M.; Skinner, Troy; Richardson, Erin H.; Bangham, Michal E.
2015-01-01
In an effort to increase crew survivability from catastrophic explosions of Launch Vehicles (LV), a study was conducted to determine the best method for predicting LV explosion environments in the near field. After reviewing such methods as TNT equivalence, Vapor Cloud Explosion (VCE) theory, and Computational Fluid Dynamics (CFD), it was determined that the best approach for this study was to assemble all available empirical data from full scale launch vehicle explosion tests and accidents. Approximately 25 accidents or full-scale tests were found that had some amount of measured blast wave, thermal, or fragment explosion environment characteristics. Blast wave overpressure was found to be much lower in the near field than predicted by most TNT equivalence methods. Additionally, fragments tended to be larger, fewer, and slower than expected if the driving force was from a high explosive type event. In light of these discoveries, a simple model for cryogenic rocket explosions is presented. Predictions from this model encompass all known applicable full scale launch vehicle explosion data. Finally, a brief description of on-going analysis and testing to further refine the launch vehicle explosion environment is discussed.
An empirical approach to improving tidal predictions using recent real-time tide gauge data
NASA Astrophysics Data System (ADS)
Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry
2014-05-01
Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12 m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors, using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) the use of Artificial Neural Network (ANN) models, (2) the Species Concordance technique and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from an additional 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and, in particular, the accurate estimation of HW extremes.
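A minimal sketch of the idea behind such an empirical correction (the operational formula is not reproduced here): shift the next harmonic high-water prediction by the mean of the most recent prediction residuals; the Avonmouth numbers below are hypothetical:

```python
import numpy as np

def corrected_hw(harmonic_pred, recent_obs, recent_pred, n_recent=3):
    """harmonic_pred: next HW height from harmonic analysis (m);
    recent_obs/recent_pred: last few observed and predicted HW heights."""
    residuals = np.asarray(recent_obs) - np.asarray(recent_pred)
    return harmonic_pred + residuals[-n_recent:].mean()

obs = [12.10, 11.95, 12.30]    # hypothetical recent HW observations (m)
pred = [11.90, 11.80, 12.05]   # matching harmonic predictions (m)
print(f"corrected HW: {corrected_hw(12.20, obs, pred):.2f} m")  # 12.40 m
```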
Accuracy test for link prediction in terms of similarity index: The case of WS and BA models
NASA Astrophysics Data System (ADS)
Ahn, Min-Woo; Jung, Woo-Sung
2015-07-01
Link prediction is a technique that uses the topological information in a given network to infer the missing links in it. Since past research on link prediction has primarily focused on enhancing performance for given empirical systems, negligible attention has been devoted to link prediction with regard to network models. In this paper, we thus apply link prediction to two network models: The Watts-Strogatz (WS) model and Barabási-Albert (BA) model. We attempt to gain a better understanding of the relation between accuracy and each network parameter (mean degree, the number of nodes and the rewiring probability in the WS model) through network models. Six similarity indices are used, with precision and area under the ROC curve (AUC) value as the accuracy metrics. We observe a positive correlation between mean degree and accuracy, and size independence of the AUC value.
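A sketch of this evaluation protocol on a WS network (illustrative parameters, using the common-neighbors index as one representative similarity measure): hide a fraction of edges, score candidate pairs, and estimate AUC by pairwise comparison:

```python
import random
import networkx as nx

random.seed(0)
G = nx.watts_strogatz_graph(n=500, k=8, p=0.1)  # mean degree 8, rewiring 0.1

edges = list(G.edges())
random.shuffle(edges)
probe = edges[: len(edges) // 10]               # 10% hidden "missing" links
G.remove_edges_from(probe)

def score(u, v):
    """Common-neighbors similarity index."""
    return len(list(nx.common_neighbors(G, u, v)))

nodes = list(G)
hits, trials = 0.0, 5000
for _ in range(trials):
    u1, v1 = random.choice(probe)               # a truly missing link
    while True:                                 # a genuinely absent link
        u2, v2 = random.sample(nodes, 2)
        if (not G.has_edge(u2, v2) and (u2, v2) not in probe
                and (v2, u2) not in probe):
            break
    s1, s2 = score(u1, v1), score(u2, v2)
    hits += 1.0 if s1 > s2 else 0.5 if s1 == s2 else 0.0
print(f"common-neighbors AUC on WS(500, 8, 0.1): {hits / trials:.2f}")
```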
NASA Astrophysics Data System (ADS)
Kuriyama, M.; Kumamoto, T.; Fujita, M.
2005-12-01
The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started on the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-Net. This difference is considered to reflect the directivity effect associated with the direction of rupture propagation. Moreover, the horizontal velocities obtained by assuming the cascade model fell more than one standard deviation below the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started along the southeastern edge of the Umehara Fault at observation point GIF020. This difference is considerably larger than the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., seismic moment) related to the construction of scenario earthquakes influence strong motion prediction more than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140 to 150 km long Itoigawa-Shizuoka Tectonic Line.
Use of model-predicted “transference ratios” is currently under consideration by the US EPA in the formulation of a Secondary National Ambient Air Quality Standard for oxidized nitrogen and oxidized sulfur. This term is an empirical parameter defined for oxidized sulfur (TS) as th...
Endorsing Achievement Goals Exacerbates the Big-Fish-Little-Pond Effect on Academic Self-Concept
ERIC Educational Resources Information Center
Wouters, Sofie; Colpin, Hilde; Van Damme, Jan; Verschueren, Karine
2015-01-01
The big-fish-little-pond effect (BFLPE) model posits that students' academic self-concept is negatively predicted by the achievement level of their reference group, controlling for individual achievement. Despite an abundance of empirical evidence supporting the BFLPE, there have been relatively few studies searching for possible moderators.…
On Predictability of System Anomalies in Real World
2011-08-01
distributed system SETI@home [44]. Different from the above work, this work focuses on quantifying the predictability of real-world system anomalies. …J.-M. Vincent, and D. Anderson, “Mining for statistical models of availability in large-scale distributed systems: An empirical study of SETI@home,” in Proc. of MASCOTS, Sept. 2009.
Modelling complex phenomena in optical fibres
NASA Astrophysics Data System (ADS)
Allington-Smith, Jeremy; Murray, Graham; Lemke, Ulrike
2012-09-01
We present a new model for predicting the performance of fibre systems in the multimode limit. This is based on ray tracing but includes a semi-empirical description of Focal Ratio Degradation (FRD). We show how FRD is simulated by the model. With this ability, it can be used to investigate a wide variety of phenomena including scrambling and the loss of light close to the limiting numerical aperture. It can also be used to predict the performance of non-round and asymmetric fibres.
DFT Performance Prediction in FFTW
NASA Astrophysics Data System (ADS)
Gu, Liang; Li, Xiaoming
Fastest Fourier Transform in the West (FFTW) is an adaptive FFT library that generates highly efficient Discrete Fourier Transform (DFT) implementations. It is one of the fastest FFT libraries available and it outperforms many adaptive or hand-tuned DFT libraries. Its success largely relies on the huge search space spanned by several FFT algorithms and a set of compiler generated C code (called codelets) for small size DFTs. FFTW empirically finds the best algorithm by measuring the performance of different algorithm combinations. Although the empirical search works very well for FFTW, the search process does not explain why the best plan found performs best, and the search overhead grows polynomially as the DFT size increases. The opposite of empirical search is model-driven optimization. However, it is widely believed that model-driven optimization is inferior to empirical search and is particularly powerless to solve problems as complex as the optimization of DFT.
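FFTW's empirical search can be illustrated in miniature: time each candidate implementation on the actual input and keep the fastest. The sketch below uses NumPy FFT variants as stand-ins for FFTW's codelets and plans; it shows the search strategy, not FFTW's API:

```python
import timeit
import numpy as np

def best_fft_plan(x, candidates):
    """FFTW-style empirical search in miniature: benchmark each candidate
    on the actual input and return the fastest plan name plus all timings.
    `candidates` maps plan names to callables computing the DFT of x."""
    timings = {name: min(timeit.repeat(lambda f=f: f(x), number=20, repeat=3))
               for name, f in candidates.items()}
    return min(timings, key=timings.get), timings

x = np.random.rand(4096)
plans = {
    "full_complex_fft": np.fft.fft,          # generic complex transform
    "real_input_rfft": lambda v: np.fft.rfft(v),  # exploits real input
}
print(best_fft_plan(x, plans))
```

As in FFTW, the winner is chosen purely by measurement; nothing in the search explains *why* it wins, which is exactly the limitation the abstract contrasts with model-driven optimization.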
Wickens, Christopher D; Sebok, Angelia; Li, Huiyang; Sarter, Nadine; Gacy, Andrew M
2015-09-01
The aim of this study was to develop and validate a computational model of the automation complacency effect as operators work on a robotic arm task supported by three different degrees of automation. Some computational models of complacency in human-automation interaction exist, but those were formed and validated within the context of fairly simplified monitoring failures. This research extends model validation to a much more complex task, so that system designers can establish, without need for human-in-the-loop (HITL) experimentation, the merits and shortcomings of different automation degrees. We developed a realistic simulation of a space-based robotic arm task that could be carried out with three different levels of trajectory visualization and execution automation support. Using this simulation, we performed HITL testing. Complacency was induced via several trials of correctly performing automation and then was assessed on trials when automation failed. Following a cognitive task analysis of the robotic arm operation, we developed a multicomponent model of the robotic operator and his or her reliance on automation, based in part on visual scanning. The comparison of model predictions with empirical results revealed that the model accurately predicted routine performance as well as the responses to automation failures after complacency had developed. However, the scanning models do not account for all of the attention allocation effects of complacency. Complacency modeling can provide a useful tool for predicting the effects of different types of imperfect automation. The results from this research suggest that focus should be given to supporting situation awareness in automation development. © 2015, Human Factors and Ergonomics Society.
Pragmatic hydraulic theory predicts stomatal responses to climatic water deficits.
Sperry, John S; Wang, Yujie; Wolfe, Brett T; Mackay, D Scott; Anderegg, William R L; McDowell, Nate G; Pockman, William T
2016-11-01
Ecosystem models have difficulty predicting plant drought responses, partially from uncertainty in the stomatal response to water deficits in soil and atmosphere. We evaluate a 'supply-demand' theory for water-limited stomatal behavior that avoids the typical scaffold of empirical response functions. The premise is that canopy water demand is regulated in proportion to threat to supply posed by xylem cavitation and soil drying. The theory was implemented in a trait-based soil-plant-atmosphere model. The model predicted canopy transpiration (E), canopy diffusive conductance (G), and canopy xylem pressure (P_canopy) from soil water potential (P_soil) and vapor pressure deficit (D). Modeled responses to D and P_soil were consistent with empirical response functions, but controlling parameters were hydraulic traits rather than coefficients. Maximum hydraulic and diffusive conductances and vulnerability to loss in hydraulic conductance dictated stomatal sensitivity and hence the iso- to anisohydric spectrum of regulation. The model matched wide fluctuations in G and P_canopy across nine data sets from seasonally dry tropical forest and piñon-juniper woodland with < 26% mean error. Promising initial performance suggests the theory could be useful in improving ecosystem models. Better understanding of the variation in hydraulic properties along the root-stem-leaf continuum will simplify parameterization. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
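A sketch of the supply-demand idea in Python, assuming a Weibull vulnerability curve and a fixed safety margin below the critical transpiration rate E_crit; the functional forms and all constants (kmax, b, c, the 0.8 margin) are illustrative assumptions, not the published model's control law or parameters:

```python
import numpy as np

def supply_curve(P_soil, kmax=10.0, b=2.0, c=3.0, n=2000):
    """Steady-state hydraulic supply: E(P_canopy) is the integral of an
    assumed Weibull vulnerability curve k(P) from P_soil down to P_canopy.
    Pressures are in -MPa (larger value = more negative potential)."""
    P = np.linspace(P_soil, P_soil + 10.0, n)        # increasingly negative
    k = kmax * np.exp(-((P / b) ** c))               # conductance declines
    E = np.concatenate(([0.0],
        np.cumsum(0.5 * (k[1:] + k[:-1]) * np.diff(P))))  # trapezoid integral
    return P, E

P, E = supply_curve(P_soil=0.5)
E_crit = E[-1]          # supply saturates here: the hydraulic failure limit

def regulated_transpiration(E_demand):
    """Demand is capped at a margin below E_crit; canopy pressure follows
    from inverting the supply curve (a sketch, not the paper's exact rule)."""
    E_reg = min(E_demand, 0.8 * E_crit)
    return E_reg, np.interp(E_reg, E, P)

E_reg, P_can = regulated_transpiration(E_demand=0.9 * E_crit)
print(f"regulated E = {E_reg:.2f}, canopy potential = -{P_can:.2f} MPa")
```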
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1994-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the Bend Stress Relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model, but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model tensile creep predictions based on the BSR test results with the literature data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
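The abstract specifies only that creep is modeled as independent (separable) functions of time, temperature, and stress. A common parameterization of that structure is a power-law/Arrhenius product; the form and every constant below are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

R_GAS = 8.314  # gas constant, J/(mol K)

def primary_creep_strain(t, T, sigma, A=1e-9, n=1.5, p=0.4, Q=500e3):
    """Primary-stage creep strain as a separable product of time t [s],
    temperature T [K], and applied stress sigma [MPa]:
        eps = A * sigma**n * t**p * exp(-Q / (R T))
    (assumed illustrative form and constants)."""
    return A * sigma**n * t**p * np.exp(-Q / (R_GAS * T))

# Separability means log-strain is linear in log(t), log(sigma), and 1/T,
# which is the kind of dependence a bend stress relaxation test can check:
for T in (1273.0, 1373.0, 1473.0):
    eps = primary_creep_strain(t=3600.0, T=T, sigma=500.0)
    print(f"T = {T:.0f} K -> 1 h creep strain {eps:.3e}")
```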
NASA Astrophysics Data System (ADS)
Koga-Vicente, A.; Friedel, M. J.
2010-12-01
Every year thousands of people are affected by flood and landslide hazards caused by rainstorms. The problem is more serious in tropical developing countries because of the susceptibility resulting from the high amount of available energy to form storms, and the high vulnerability due to poor economic and social conditions. Predictive models of hazards are important tools to manage this kind of risk. In this study, a comparison of two different modeling approaches was made for predicting hydrometeorological hazards in 12 cities on the coast of São Paulo, Brazil, from 1994 to 2003. In the first approach, an empirical multiple linear regression (MLR) model was developed and used; the second approach used a type of unsupervised nonlinear artificial neural network called a self-organizing map (SOM). By using twenty-three independent variables of susceptibility (precipitation, soil type, slope, elevation, and regional atmospheric system scale) and vulnerability (distribution and total population, income and educational characteristics, poverty intensity, human development index), binary hazard responses were obtained. Model performance by cross-validation indicated that the respective MLR and SOM model accuracies were about 67% and 80%. Prediction accuracy can be improved by the addition of information, but the SOM approach is preferred because of sparse data and highly nonlinear relations among the independent variables.
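A sketch of the cross-validated empirical-model workflow with synthetic stand-in data; logistic regression stands in for the linear step (the response is binary), and a SOM implementation such as the minisom package could be substituted for the nonlinear approach:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins: 23 susceptibility/vulnerability predictors and a
# binary hazard response per record (values are not the study's data).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 23))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=500) > 0.5).astype(int)

# Linear model scored by cross-validation, analogous to the MLR approach;
# the quadratic term in y is nonlinear structure a linear model will miss,
# which is the kind of gap the SOM is reported to close.
clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2%}")
```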
Creep and stress relaxation modeling of polycrystalline ceramic fibers
NASA Technical Reports Server (NTRS)
Dicarlo, James A.; Morscher, Gregory N.
1991-01-01
A variety of high performance polycrystalline ceramic fibers are currently being considered as reinforcement for high temperature ceramic matrix composites. However, under mechanical loading above 800 C, these fibers display creep-related instabilities which can result in detrimental changes in composite dimensions, strength, and internal stress distributions. As a first step toward understanding these effects, this study examines the validity of a mechanism-based empirical model which describes primary stage tensile creep and stress relaxation of polycrystalline ceramic fibers as independent functions of time, temperature, and applied stress or strain. To verify these functional dependencies, a simple bend test is used to measure stress relaxation for four types of commercial ceramic fibers for which direct tensile creep data are available. These fibers include both nonoxide (SCS-6, Nicalon) and oxide (PRD-166, FP) compositions. The results of the bend stress relaxation (BSR) test not only confirm the stress, time, and temperature dependencies predicted by the model but also allow measurement of model empirical parameters for the four fiber types. In addition, comparison of model predictions and BSR test results with the literature tensile creep data shows good agreement, supporting both the predictive capability of the model and the use of the BSR test as a simple method for parameter determination for other fibers.
Empirical equation for predicting the surface tension of some liquid metals at their melting point
NASA Astrophysics Data System (ADS)
Ceotto, D.
2014-07-01
A new empirical equation is proposed for predicting the surface tension of some pure metals at their melting point. The investigation has been conducted adopting a statistical approach using some of the most accredited data available in the literature. It is found that for Ag, Al, Au, Co, Cu, Fe, Ni, and Pb the surface tension can be conveniently expressed as a function of the latent heat of fusion and the geometrical parameters of an ideal liquid spherical drop. The proposed equation has also been compared with the model proposed by Lu and Jiang, giving satisfactory agreement for the metals considered.
Recent solar extreme ultraviolet irradiance observations and modeling: A review
NASA Technical Reports Server (NTRS)
Tobiska, W. Kent
1993-01-01
For more than 90 years, solar extreme ultraviolet (EUV) irradiance modeling has progressed from empirical blackbody radiation formulations, through fudge factors, to typically measured irradiances and reference spectra as well as time-dependent empirical models representing continua and line emissions. A summary of recent EUV measurements by five rockets and three satellites during the 1980s is presented along with the major modeling efforts. The most significant reference spectra are reviewed and three independently derived empirical models are described. These include Hinteregger's 1981 SERF1, Nusinov's 1984 two-component, and Tobiska's 1990/1991/SERF2/EUV91 flux models. They each provide daily full-disk broad spectrum flux values from 2 to 105 nm at 1 AU. All the models depend to one degree or another on the long time series of the Atmosphere Explorer E (AE-E) EUV database. Each model uses ground- and/or space-based proxies to create emissions from solar atmospheric regions. Future challenges in EUV modeling are summarized, including the basic requirements of models, the task of incorporating new observations and theory into the models, the task of comparing models with solar-terrestrial data sets, and long-term goals and modeling objectives. By the late 1990s, empirical models will potentially be improved through the use of proposed solar EUV irradiance measurements and images at selected wavelengths that will greatly enhance modeling and predictive capabilities.
Stall flutter analysis of propfans
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.
1988-01-01
Three semi-empirical aerodynamic stall models are compared with respect to their lift and moment hysteresis loop prediction, limit cycle behavior, easy implementation, and feasibility in developing the parameters required for stall flutter prediction of advanced turbines. For the comparison of aeroelastic response prediction including stall, a typical section model and a plate structural model are considered. The response analysis includes both plunging and pitching motions of the blades. In model A, a correction of the angle of attack is applied when the angle of attack exceeds the static stall angle. In model B, a synthesis procedure is used for angles of attack above static stall angles, and the time history effects are accounted for through the Wagner function.
Mixing and unmixedness in plasma jets 1: Near-field analysis
NASA Technical Reports Server (NTRS)
Ilegbusi, Olusegun J.
1993-01-01
The flow characteristics in the near-field of a plasma jet are simulated with a two-fluid model. This model accounts for both gradient-diffusion mixing and uni-directional sifting motion resulting from pressure-gradient-body-force imbalance. This latter mechanism is believed to be responsible for the unmixedness observed in plasma jets. The unmixedness is considered to be essentially a Rayleigh-Taylor type instability. Transport equations are solved for the individual plasma and ambient gas velocities, temperatures and volume fractions. Empirical relations are employed for the interface transfers of mass, momentum and heat. The empirical coefficients are first established by comparison of predictions with available experimental data for shear flows. The model is then applied to an argon plasma jet ejecting into stagnant air. The predicted results show a significant build-up of unmixed air within the plasma gas, even relatively far downstream of the torch. By adjusting the inlet condition, the model adequately reproduces the experimental data.
Prediction of Particle Concentration using Traffic Emission Model
NASA Astrophysics Data System (ADS)
He, Hong-di; Lu, Jane Wei-zhen
2010-05-01
Vehicle emission is regarded as one of the major sources of air pollution in urban areas, and much attention has been paid to it, especially at traffic intersections. At an intersection, vehicles frequently stop with idling engines during the red phase and speed up rapidly during the green phase, which results in high velocity fluctuations and extra pollutant emissions to the surrounding air. To better understand this process, a semi-empirical model for predicting the effect of changing traffic flow patterns on particulate concentrations is proposed. The performance of the model is evaluated using the correlation coefficient and other parameters. The correlation coefficients for the morning and afternoon data were found to be 0.86 and 0.73 respectively, which implies that the semi-empirical models for the morning and afternoon data are 86% and 73% error free. Because it is less affected by factors such as traffic volume and pedestrian movement, the dispersion of particulate matter in the morning is smaller, contributing to the higher performance compared with the afternoon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wosnik, Martin; Bachant, Pete; Neary, Vincent Sinclair
CACTUS, developed by Sandia National Laboratories, is an open-source code for the design and analysis of wind and hydrokinetic turbines. While it has undergone extensive validation for both vertical axis and horizontal axis wind turbines, and it has been demonstrated to accurately predict the performance of horizontal (axial-flow) hydrokinetic turbines, its ability to predict the performance of crossflow hydrokinetic turbines has yet to be tested. The present study addresses this problem by comparing the predicted performance curves derived from CACTUS simulations of the U.S. Department of Energy's 1:6 scale reference model crossflow turbine to those derived by experimental measurements in a tow tank using the same model turbine at the University of New Hampshire. It shows that CACTUS cannot accurately predict the performance of this crossflow turbine, raising concerns about its application to crossflow hydrokinetic turbines generally. The lack of quality data on NACA 0021 foil aerodynamic (hydrodynamic) characteristics over the wide range of angles of attack (AoA) and Reynolds numbers is identified as the main cause of poor model prediction. A comparison of several different NACA 0021 foil data sources, derived using both physical and numerical modeling experiments, indicates significant discrepancies at the high AoA experienced by foils on crossflow turbines. Users of CACTUS for crossflow hydrokinetic turbines are, therefore, advised to limit its application to higher tip speed ratios (lower AoA), and to carefully verify the reliability and accuracy of their foil data. Accurate empirical data on the aerodynamic characteristics of the foil is the greatest limitation to predicting performance for crossflow turbines with semi-empirical models like CACTUS. Future improvements of CACTUS for crossflow turbine performance prediction will require the development of accurate foil aerodynamic characteristic data sets within the appropriate ranges of Reynolds numbers and AoA.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lillibridge, Sean; Navarro, Moses
2011-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested, namely MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
Six-Tube Freezable Radiator Testing and Model Correlation
NASA Technical Reports Server (NTRS)
Lilibridge, Sean T.; Navarro, Moses
2012-01-01
Freezable radiators offer an attractive solution to the issue of thermal control system scalability. As thermal environments change, a freezable radiator will effectively scale the total heat rejection it is capable of as a function of the thermal environment and flow rate through the radiator. Scalable thermal control systems are a critical technology for spacecraft that will endure missions with widely varying thermal requirements. These changing requirements are a result of the spacecraft's surroundings and because of different thermal loads rejected during different mission phases. However, freezing and thawing (recovering) a freezable radiator is a process that has historically proven very difficult to predict through modeling, resulting in highly inaccurate predictions of recovery time. These predictions are a critical step in gaining the capability to quickly design and produce optimized freezable radiators for a range of mission requirements. This paper builds upon previous efforts made to correlate a Thermal Desktop(TM) model with empirical testing data from two test articles, with additional model modifications and empirical data from a sub-component radiator for a full scale design. Two working fluids were tested: MultiTherm WB-58 and a 50-50 mixture of DI water and Amsoil ANT.
Simple estimate of entrainment rate of pollutants from a coastal discharge into the surf zone.
Wong, Simon H C; Monismith, Stephen G; Boehm, Alexandria B
2013-10-15
Microbial pollutants from coastal discharges can increase illness risks for swimmers and cause beach advisories. There is presently no predictive model for estimating the entrainment of pollution from coastal discharges into the surf zone. We present a novel, quantitative framework for estimating surf zone entrainment of pollution at a wave-dominated open beach. Using physical arguments, we identify a dimensionless parameter equal to the quotient of the surf zone width l_sz and the cross-flow length scale of the discharge, l_a = M_j^(1/2)/U_sz, where M_j is the discharge's momentum flux and U_sz is a representative alongshore velocity in the surf zone. We conducted numerical modeling of a nonbuoyant discharge at an alongshore-uniform beach with constant slope using a wave-resolving hydrodynamic model. Using results from 144 numerical experiments we develop an empirical relationship between the surf zone entrainment rate α and l_sz/l_a. The empirical relationship can reasonably explain seven measurements of surf zone entrainment at three diverse coastal discharges. This predictive relationship can be a useful tool in coastal water quality management and can be used to develop predictive beach water quality models.
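The dimensionless parameter itself is straightforward to compute; a short sketch with hypothetical numbers (the fitted α relationship from the paper's 144 experiments is not reproduced here):

```python
import numpy as np

def crossflow_length_scale(M_j, U_sz):
    """l_a = sqrt(M_j) / U_sz, the cross-flow length scale of the discharge.
    M_j: discharge momentum flux [m^4/s^2]; U_sz: alongshore velocity [m/s]."""
    return np.sqrt(M_j) / U_sz

# Hypothetical example: a small discharge into a 100 m wide surf zone.
l_sz = 100.0                                  # surf zone width [m]
l_a = crossflow_length_scale(M_j=2.0, U_sz=0.3)
print(f"l_sz/l_a = {l_sz / l_a:.1f}")         # input to the empirical alpha curve
```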
Gyenge, Christina C; Bowen, Bruce D; Reed, Rolf K; Bert, Joel L
2003-02-01
This study is concerned with the formulation of a 'kidney module' linked to the plasma compartment of a larger mathematical model previously developed. Combined, these models can be used to predict, amongst other things, fluid and small ion excretion rates by the kidney; information that should prove useful in evaluating values and trends related to whole-body fluid balance for different clinical conditions, in establishing fluid administration protocols, and for educational purposes. The renal module assumes first-order, negative-feedback responses of the kidney to changes in plasma volume and/or plasma sodium content from their normal physiological set points. Direct hormonal influences are not explicitly formulated in this empiric model. The model also considers that the renal excretion rates of small ions other than sodium are proportional to the excretion rate of sodium. As part of the model development, two aspects are emphasized: (1) the estimation of parameters related to the renal elimination of fluid and small ions, and (2) model validation via comparisons between the model predictions and selected experimental data. For validation, model predictions of the renal dynamics are compared with new experimental data for two cases: plasma overload resulting from external fluid infusion (e.g. infusions of iso-osmolar solutions and/or hypertonic/hyperoncotic saline solutions), and untreated hypovolemic conditions that result from the external loss of blood. The present study demonstrates that the empiric kidney module can provide good short-term predictions with respect to all renal outputs considered here. Physiological implications of the model are also presented. Copyright Acta Anaesthesiologica Scandinavica 47 (2003)
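A minimal sketch of the first-order negative-feedback structure described above; the linear form, gains, and set points are illustrative assumptions, not the fitted parameters of the kidney module:

```python
def renal_fluid_excretion(V_pl, Na_pl, V_set=3.2, Na_set=140.0 * 3.2,
                          J_basal=1.0, k_V=0.5, k_Na=0.004):
    """Urine production responds proportionally to deviations of plasma
    volume [L] and plasma sodium content [mmol] from their set points.
    All constants are assumed for illustration. Returns mL/min."""
    J_urine = J_basal + k_V * (V_pl - V_set) + k_Na * (Na_pl - Na_set)
    return max(J_urine, 0.0)   # excretion cannot be negative

def small_ion_excretion(J_Na, ratio):
    """Other small ions are excreted in proportion to sodium, as the
    module assumes."""
    return ratio * J_Na

# Plasma overload example: volume expanded ~12% above set point.
print(renal_fluid_excretion(V_pl=3.6, Na_pl=140.0 * 3.6))
```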
Bayesian model reduction and empirical Bayes for group (DCM) studies.
Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter
2016-03-01
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Is Directivity Still Effective in a PSHA Framework?
NASA Astrophysics Data System (ADS)
Spagnuolo, E.; Herrero, A.; Cultrera, G.
2008-12-01
Source rupture parameters, like directivity, modulate the energy release, causing variations in the radiated signal amplitude. They thus affect the empirical predictive equations and, as a consequence, the seismic hazard assessment. Classical probabilistic hazard evaluations, e.g. Cornell (1968), use very simple predictive equations based only on magnitude and distance, which do not account for variables concerning the rupture process. Nowadays, however, a few predictive equations (e.g. Somerville 1997, Spudich and Chiou 2008) take rupture directivity into account, and a few implementations have been made in a PSHA framework (e.g. Convertito et al. 2006, Rowshandel 2006). In practice, these new empirical predictive models incorporate the rupture propagation effects quantitatively through the introduction of variables like rake, azimuth, rupture velocity and laterality. The contribution of all these variables is summarized in corrective factors derived from measuring differences between the real data and the predicted ones. Therefore, it is possible to keep the older computation, making use of a simple predictive model, and to incorporate the directivity effect through the corrective factors. Each supplementary variable implies a new integral over the parametric space; the difficulty, however, lies in constraining the parameter distribution functions. We present preliminary results for ad hoc distributions (Gaussian, uniform) in order to test the impact of incorporating directivity into PSHA models. We demonstrate that incorporating directivity in PSHA by means of the new predictive equations may lead to strong percentage variations in the hazard assessment.
Wake Vortex Prediction Models for Decay and Transport Within Stratified Environments
NASA Astrophysics Data System (ADS)
Switzer, George F.; Proctor, Fred H.
2002-01-01
This paper proposes two simple models to predict vortex transport and decay. The models are determined empirically from results of three-dimensional large eddy simulations, and are applicable to wake vortices out of ground effect and not subjected to environmental winds. The large eddy simulations assume a range of ambient turbulence and stratification levels. The models and the results from the large eddy simulations support the hypothesis that the decay of the vortex hazard is decoupled from its change in descent rate.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.
Here, neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement.
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
Couture, Aaron Joseph; Casten, Richard F.; Cakirli, R. B.
2017-12-20
Here, neutron capture cross sections are essential to understanding the astrophysical s and r processes, the modeling of nuclear reactor design and performance, and for a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly differing by an order of magnitude a few nucleons from the last measurement.
Chopp-Hurley, Jaclyn N; Brookham, Rebecca L; Dickerson, Clark R
2016-12-01
Biomechanical models are often used to estimate the muscular demands of various activities. However, specific muscle dysfunctions typical of unique clinical populations are rarely considered. Due to iatrogenic tissue damage, pectoralis major capability is markedly reduced in breast cancer survivors, which could influence arm internal and external rotation muscular strategies. Accordingly, an optimization-based muscle force prediction model was systematically modified to emulate breast cancer survivors by adjusting pectoralis capability and enforcing an empirical muscular co-activation relationship. Model permutations were evaluated through comparisons between predicted muscle forces and empirically measured muscle activations in survivors. Similarities between empirical data and model outputs were influenced by muscle type, hand force, pectoralis major capability and co-activation constraints. Differences in magnitude were lower when the co-activation constraint was enforced (-18.4% [31.9]) than unenforced (-23.5% [27.6]) (p<0.0001). This research demonstrates that muscle dysfunction in breast cancer survivors can be reflected by including a capability constraint for pectoralis major. Further refinement of the co-activation constraint could improve its generalizability across this population and its activities. Improving biomechanical models to more accurately represent clinical populations can provide novel information to help in the development of optimal treatment programs for breast cancer survivors. Copyright © 2016 Elsevier Ltd. All rights reserved.
AAA gunner model based on observer theory [predicting a gunner's tracking response]
NASA Technical Reports Server (NTRS)
Kou, R. S.; Glass, B. C.; Day, C. N.; Vikmanis, M. M.
1978-01-01
The Luenberger observer theory is used to develop a predictive model of a gunner's tracking response in antiaircraft artillery systems. This model is composed of an observer, a feedback controller and a remnant element. An important feature of the model is that the structure is simple, hence a computer simulation requires only a short execution time. A parameter identification program based on the least-squares curve fitting method and the Gauss-Newton gradient algorithm is developed to determine the parameter values of the gunner model. Thus, a systematic procedure exists for identifying model parameters for a given antiaircraft tracking task. Model predictions of tracking errors are compared with human tracking data obtained from manned simulation experiments. Model predictions are in excellent agreement with the empirical data for several flyby and maneuvering target trajectories.
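The Luenberger observer itself is standard and easy to state. The sketch below uses placeholder system matrices rather than the identified gunner model, and omits the feedback controller and remnant element:

```python
import numpy as np

# Minimal Luenberger observer for a linear model x' = A x + B u, y = C x.
# Matrices are illustrative placeholders, not the paper's identified model.
A = np.array([[0.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[8.0], [15.0]])   # observer gain; eigenvalues of A - LC are stable

def observer_step(x_hat, u, y, dt=0.01):
    """x_hat' = A x_hat + B u + L (y - C x_hat): the state estimate is driven
    by the model and corrected by the measured output error (Euler step)."""
    innovation = y - C @ x_hat
    return x_hat + dt * (A @ x_hat + B @ u + L @ innovation)

x_hat = np.zeros((2, 1))
for _ in range(100):
    x_hat = observer_step(x_hat, u=np.array([[0.5]]), y=np.array([[1.0]]))
print(x_hat.ravel())   # estimate converges toward the measured output
```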
Language acquisition is model-based rather than model-free.
Wang, Felix Hao; Mintz, Toben H
2016-01-01
Christiansen & Chater (C&C) propose that learning language is learning to process language. However, we believe that the general-purpose prediction mechanism they propose is insufficient to account for many phenomena in language acquisition. We argue from theoretical considerations and empirical evidence that many acquisition tasks are model-based, and that different acquisition tasks require different, specialized models.
Empirical Investigation of Critical Transitions in Paleoclimate
NASA Astrophysics Data System (ADS)
Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A.
2016-12-01
In this work we apply a new empirical method for the analysis of complex spatially distributed systems to the analysis of paleoclimate data. The method consists of two general parts: (i) revealing the optimal phase-space variables and (ii) constructing an empirical prognostic model from observed time series. The method of phase-space variable construction is based on decomposing the data into nonlinear dynamical modes; it was successfully applied to the global SST field, where it clearly separated time scales and revealed a climate shift in the observed data interval [1]. The second part, a Bayesian approach to optimal reconstruction of the evolution operator from time series, is based on representing the evolution operator as a nonlinear stochastic function modeled by artificial neural networks [2,3]. In this work we focus on the investigation of critical transitions - the abrupt changes in climate dynamics - in much longer time scale processes. It is well known that a number of critical transitions occurred on different time scales in the past. Here we demonstrate the first results of applying our empirical methods to the analysis of paleoclimate variability. In particular, we discuss the possibility of detecting, identifying and predicting such critical transitions by means of nonlinear empirical modeling using paleoclimate record time series. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). References: 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510 2. Molkov, Ya. I., Mukhin, D. N., Loskutov, E. M., & Feigin, A. M. (2012). Random dynamical models from time series. Phys. Rev. E, 85(3). 3. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting critical transitions in ENSO models. Part II: Spatially dependent models. Journal of Climate, 28(5), 1962-1976. http://doi.org/10.1175/JCLI-D-14-00240.1
A Global Model for Bankruptcy Prediction
Alaminos, David; del Castillo, Agustín; Fernández, Manuel Ángel
2016-01-01
The recent world financial crisis has increased the number of bankruptcies in numerous countries and has resulted in a new area of research which responds to the need to predict this phenomenon, not only at the level of individual countries, but also at a global level, offering explanations of the common characteristics shared by the affected companies. Nevertheless, few studies focus on the prediction of bankruptcies globally. In order to compensate for this lack of empirical literature, this study has used a methodological framework of logistic regression to construct predictive bankruptcy models for Asia, Europe and America, and other global models for the whole world. The objective is to construct a global model with a high capacity for predicting bankruptcy in any region of the world. The results obtained have allowed us to confirm the superiority of the global model in comparison to regional models over periods of up to three years prior to bankruptcy. PMID:27880810
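The methodological core, logistic regression on firm-level predictors with held-out evaluation, can be sketched with synthetic data (the study's actual financial ratios and samples are not reproduced here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-ins for firm-level financial ratios
# (e.g. liquidity, leverage, profitability); not the study's data.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))
y = (X @ rng.normal(size=6) + rng.normal(size=1000) > 0).astype(int)  # 1 = bankrupt

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.2%}")
```

Fitting one such model per region and one on the pooled global sample, then comparing held-out accuracy, mirrors the paper's regional-versus-global comparison.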
Developing an Adequately Specified Model of State Level Student Achievement with Multilevel Data.
ERIC Educational Resources Information Center
Bernstein, Lawrence
Limitations of using linear, unilevel regression procedures in modeling student achievement are discussed. This study is a part of a broader study that is developing an empirically-based predictive model of variables associated with academic achievement from a multilevel perspective and examining the differences by which parameters are estimated…
Social Capital, Social Control, and Changes in Victimization Rates
ERIC Educational Resources Information Center
Hawdon, James; Ryan, John
2009-01-01
A neighborhood-level model of crime that connects the central dimensions of social capital with specific forms of social control is developed. The proposed model is tested using a structural equation model that predicts changes in empirical Bayes log odds of neighborhood victimization rates between 2000 and 2001 in 41 neighborhoods in South…
Empirical models for use in designing decompression procedures for space operations
NASA Technical Reports Server (NTRS)
Conkin, Johnny; Edwards, Benjamin F.; Waligora, James M.; Horrigan, David J., Jr.
1987-01-01
Empirical models for predicting the incidence of Type 1 altitude decompression sickness (DCS) and venous gas emboli (VGE) during space extravehicular activity (EVA), and for use in designing safe denitrogenation decompression procedures are developed. The models are parameterized using DCS and VGE incidence data from NASA and USAF manned altitude chamber decompression tests using 607 male and female subject tests. These models, and procedures for their use, consist of: (1) an exponential relaxation model and procedure for computing tissue nitrogen partial pressure resulting from a specified prebreathing and stepped decompression sequence; (2) a formula for calculating Tissue Ratio (TR), a tissue decompression stress index; (3) linear and Hill equation models for predicting the total incidence of VGE and DCS attendant with a particular TR; (4) graphs of cumulative DCS and VGE incidence (risk) versus EVA exposure time at any specified TR; and (5) two equations for calculating the average delay period for the initial detection of VGE or indication of Type 1 DCS in a group after a specific denitrogenation decompression procedure. Several examples of realistic EVA preparations are provided.
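Items (1)-(3) of the procedure can be sketched directly: exponential relaxation of tissue nitrogen toward the breathed N2 pressure, followed by the Tissue Ratio. The 360 min half-time and the example pressures are assumptions for illustration, not the paper's fitted values:

```python
import numpy as np

def tissue_n2_pressure(P0, P_ambient_n2, t_min, half_time_min=360.0):
    """Exponential relaxation of tissue N2 partial pressure toward the
    ambient (breathed) N2 pressure during prebreathe or a decompression
    step. The 360 min half-time is an assumed illustrative value."""
    k = np.log(2.0) / half_time_min
    return P_ambient_n2 + (P0 - P_ambient_n2) * np.exp(-k * t_min)

def tissue_ratio(P_tissue_n2, P_suit):
    """TR: tissue N2 tension over ambient (suit) pressure, the decompression
    stress index fed into the DCS/VGE incidence curves."""
    return P_tissue_n2 / P_suit

# Illustrative example: 4 h O2 prebreathe starting from sea-level air
# (N2 ~ 11.6 psia), followed by a 4.3 psia suit exposure.
P_t = tissue_n2_pressure(P0=11.6, P_ambient_n2=0.0, t_min=240.0)
print(f"tissue N2 = {P_t:.2f} psia, TR = {tissue_ratio(P_t, 4.3):.2f}")
```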
Livestock Helminths in a Changing Climate: Approaches and Restrictions to Meaningful Predictions
Fox, Naomi J.; Marion, Glenn; Davidson, Ross S.; White, Piran C. L.; Hutchings, Michael R.
2012-01-01
Simple Summary: Parasitic helminths represent one of the most pervasive challenges to livestock, and their intensity and distribution will be influenced by climate change. There is a need for long-term predictions to identify potential risks and highlight opportunities for control. We explore the approaches to modelling future helminth risk to livestock under climate change. One of the limitations to model creation is the lack of purpose driven data collection. We also conclude that models need to include a broad view of the livestock system to generate meaningful predictions. Abstract: Climate change is a driving force for livestock parasite risk. This is especially true for helminths including the nematodes Haemonchus contortus, Teladorsagia circumcincta, Nematodirus battus, and the trematode Fasciola hepatica, since survival and development of free-living stages is chiefly affected by temperature and moisture. The paucity of long term predictions of helminth risk under climate change has driven us to explore optimal modelling approaches and identify current bottlenecks to generating meaningful predictions. We classify approaches as correlative or mechanistic, exploring their strengths and limitations. Climate is one aspect of a complex system and, at the farm level, husbandry has a dominant influence on helminth transmission. Continuing environmental change will necessitate the adoption of mitigation and adaptation strategies in husbandry. Long term predictive models need to have the architecture to incorporate these changes. Ultimately, an optimal modelling approach is likely to combine mechanistic processes and physiological thresholds with correlative bioclimatic modelling, incorporating changes in livestock husbandry and disease control. Irrespective of approach, the principal limitation to parasite predictions is the availability of active surveillance data and empirical data on physiological responses to climate variables. By combining improved empirical data and refined models with a broad view of the livestock system, robust projections of helminth risk can be developed. PMID:26486780
“Feature Detection” vs. “Predictive Coding” Models of Plant Behavior
Calvo, Paco; Baluška, František; Sims, Andrew
2016-01-01
In this article we consider the possibility that plants exhibit anticipatory behavior, a mark of intelligence. If plants are able to anticipate and respond accordingly to varying states of their surroundings, as opposed to merely responding online to environmental contingencies, then such capacity may be in principle testable, and subject to empirical scrutiny. Our main thesis is that adaptive behavior can only take place by way of a mechanism that predicts the environmental sources of sensory stimulation. We propose to test for anticipation in plants experimentally by contrasting two empirical hypotheses: “feature detection” and “predictive coding.” We spell out what these contrasting hypotheses consist of by way of illustration from the animal literature, and consider how to transfer the rationale involved to the plant literature. PMID:27757094
Peak-summer East Asian rainfall predictability and prediction part II: extratropical East Asia
NASA Astrophysics Data System (ADS)
Yim, So-Young; Wang, Bin; Xing, Wen
2016-07-01
The part II of the present study focuses on northern East Asia (NEA: 26°N-50°N, 100°-140°E), exploring the source and limit of the predictability of the peak summer (July-August) rainfall. Prediction of NEA peak summer rainfall is extremely challenging because of the exposure of the NEA to midlatitude influence. By examining four coupled climate models' multi-model ensemble (MME) hindcast during 1979-2010, we found that the domain-averaged MME temporal correlation coefficient (TCC) skill is only 0.13. It is unclear whether the dynamical models' poor skills are due to limited predictability of the peak-summer NEA rainfall. In the present study we attempted to address this issue by applying the predictable mode analysis method using 35-year observations (1979-2013). Four empirical orthogonal modes of variability and associated major potential sources of variability are identified: (a) an equatorial western Pacific (EWP)-NEA teleconnection driven by EWP sea surface temperature (SST) anomalies, (b) a western Pacific subtropical high and Indo-Pacific dipole SST feedback mode, (c) a central Pacific-El Niño-Southern Oscillation mode, and (d) a Eurasian wave train pattern. Physically meaningful predictors for each principal component (PC) were selected based on analysis of the lead-lag correlations with the persistence and tendency fields of SST and sea-level pressure from March to June. A suite of physical-empirical (P-E) models is established to predict the four leading PCs. The peak summer rainfall anomaly pattern is then objectively predicted by using the predicted PCs and the corresponding observed spatial patterns. A 35-year cross-validated hindcast over the NEA yields a domain-averaged TCC skill of 0.36, which is significantly higher than the MME dynamical hindcast (0.13). The estimated maximum potential attainable TCC skill averaged over the entire domain is around 0.61, suggesting that the current dynamical prediction models may have considerable room for improvement. Limitations and future work are also discussed.
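The predictable-mode workflow, EOF decomposition, one physical-empirical regression per principal component, and reconstruction of the anomaly field, can be sketched with synthetic data; the predictors and the in-sample skill shown here are placeholders, not the paper's cross-validated result:

```python
import numpy as np

# Synthetic stand-in for 35 years of July-August rainfall anomalies on a grid.
rng = np.random.default_rng(2)
years, gridpts, n_modes = 35, 400, 4
rain = rng.normal(size=(years, gridpts))

U, s, Vt = np.linalg.svd(rain, full_matrices=False)     # EOF analysis
pcs, eofs = U[:, :n_modes] * s[:n_modes], Vt[:n_modes]  # PCs and patterns

predictors = rng.normal(size=(years, 3))    # e.g. spring SST/SLP indices (assumed)
X = np.column_stack([predictors, np.ones(years)])
coefs = np.linalg.lstsq(X, pcs, rcond=None)[0]          # one P-E model per PC

rain_pred = (X @ coefs) @ eofs                          # reconstructed anomaly field
tcc = [np.corrcoef(rain[:, g], rain_pred[:, g])[0, 1] for g in range(gridpts)]
print(f"domain-averaged TCC (in-sample): {np.mean(tcc):.2f}")
```

A faithful reproduction would replace the in-sample correlation with the leave-one-year-out cross-validation the paper uses.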
Sun, Baozhou; Lam, Dao; Yang, Deshan; Grantham, Kevin; Zhang, Tiezhi; Mutic, Sasa; Zhao, Tianyu
2018-05-01
Clinical treatment planning systems for proton therapy currently do not calculate monitor units (MUs) in passive scatter proton therapy due to the complexity of the beam delivery systems. Physical phantom measurements are commonly employed to determine the field-specific output factors (OFs) but are often subject to limited machine time, measurement uncertainties and intensive labor. In this study, a machine learning-based approach was developed to predict output (cGy/MU) and derive MUs, incorporating the dependencies on gantry angle and field size for a single-room proton therapy system. The goal of this study was to develop a secondary check tool for OF measurements and eventually eliminate patient-specific OF measurements. The OFs of 1754 fields previously measured in a water phantom with calibrated ionization chambers and electrometers for patient-specific fields with various range and modulation width combinations for 23 options were included in this study. The training data sets for machine learning models in three different methods (Random Forest, XGBoost and Cubist) included 1431 (~81%) OFs. Ten-fold cross-validation was used to prevent "overfitting" and to validate each model. The remaining 323 (~19%) OFs were used to test the trained models. The difference between the measured and predicted values from machine learning models was analyzed. Model prediction accuracy was also compared with that of the semi-empirical model developed by Kooy (Phys. Med. Biol. 50, 2005). Additionally, gantry angle dependence of OFs was measured for three groups of options categorized by the selection of the second scatterers. Field size dependence of OFs was investigated for the measurements with and without patient-specific apertures. All three machine learning methods showed higher accuracy than the semi-empirical model, which shows considerably large discrepancies of up to 7.7% for the treatment fields with full range and full modulation width. The Cubist-based solution outperformed all other models (P < 0.001) with a mean absolute discrepancy of 0.62% and maximum discrepancy of 3.17% between the measured and predicted OFs. The OFs showed a small dependence on gantry angle for small and deep options while they were constant for large options. The OF decreased by 3%-4% as the field radius was reduced to 2.5 cm. Machine learning methods can be used to predict OF for double-scatter proton machines with greater prediction accuracy than the most popular semi-empirical prediction model. By incorporating the gantry angle dependence and field size dependence, the machine learning-based methods can be used for a sanity check of OF measurements and bear the potential to eliminate the time-consuming patient-specific OF measurements. © 2018 American Association of Physicists in Medicine.
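One of the three methods can be sketched with scikit-learn; the features follow the dependencies the paper identifies (range, modulation width, gantry angle, field size), but the data below are synthetic, not the 1754 measured output factors:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for field parameters and output factors (cGy/MU);
# the feature ranges and the linear generating rule are assumptions.
rng = np.random.default_rng(3)
n = 1754
X = np.column_stack([
    rng.uniform(4.6, 28.4, n),     # range [g/cm^2]
    rng.uniform(1.0, 20.0, n),     # modulation width [g/cm^2]
    rng.uniform(0.0, 360.0, n),    # gantry angle [deg]
    rng.uniform(2.5, 12.5, n),     # equivalent field radius [cm]
])
of = 1.0 + 0.01 * X[:, 0] - 0.005 * X[:, 1] + rng.normal(0, 0.005, n)

model = RandomForestRegressor(n_estimators=300, random_state=3)
r2 = cross_val_score(model, X, of, cv=10)   # ten-fold cross-validation, as in the paper
print(f"mean CV R^2: {r2.mean():.3f}")
```

Swapping in gradient-boosted trees (XGBoost) or a rule-based regressor (Cubist) follows the same fit/validate pattern.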
Prediction of an Apparent Flame Length in a Co-Axial Jet Diffusion Flame Combustor.
1983-04-01
This report comprises two parts. In Part I, a predictive model for an apparent flame length in a co-axial jet diffusion flame combustor is… An overall mass transfer coefficient, evaluated from an empirically developed correlation, is employed to predict total flame length. Comparison of the experimental and predicted data on total flame length shows reasonable agreement, within sixteen percent, over the investigated air and fuel flow rates.
Absolute, SI-traceable lunar irradiance tie-points for the USGS Lunar Model
NASA Astrophysics Data System (ADS)
Brown, Steven W.; Eplee, Robert E.; Xiong, Xiaoxiong J.
2017-10-01
The United States Geological Survey (USGS) has developed an empirical model, known as the Robotic Lunar Observatory (ROLO) Model, that predicts the reflectance of the Moon for any Sun-sensor-Moon configuration over the spectral range from 350 nm to 2500 nm. The lunar irradiance can be predicted from the modeled lunar reflectance using a spectrum of the incident solar irradiance. While extremely successful as a relative exo-atmospheric calibration target, the ROLO Model is not SI-traceable and has estimated uncertainties too large for the Moon to be used as an absolute celestial calibration target. In this work, two recent absolute, low uncertainty, SI-traceable top-of-the-atmosphere (TOA) lunar irradiances, measured over the spectral range from 380 nm to 1040 nm at lunar phase angles of 6.6° and 16.9°, are used as tie-points to the output of the ROLO Model. Combined with empirically derived phase and libration corrections to the output of the ROLO Model and uncertainty estimates in those corrections, the measurements enable development of a corrected TOA lunar irradiance model and its uncertainty budget for phase angles between ±80° and libration angles from 7° to 51°. The uncertainties in the empirically corrected output from the ROLO Model are approximately 1% from 440 nm to 865 nm and increase to almost 3% at 412 nm. The dominant components in the uncertainty budget are the uncertainty in the absolute TOA lunar irradiance and the uncertainty in the fit to the phase correction from the output of the ROLO Model.
Risky forward interest rates and swaptions: Quantum finance model and empirical results
NASA Astrophysics Data System (ADS)
Baaquie, Belal Ehsan; Yu, Miao; Bhanap, Jitendra
2018-02-01
Risk-free forward interest rates (Diebold and Li, 2006 [1]; Jamshidian, 1991 [2]) - and their realization by US Treasury bonds as the leading exemplar - have been studied extensively. In Baaquie (2010), models of risk-free bonds and their forward interest rates based on the quantum field theoretic formulation of the risk-free forward interest rates have been discussed, including the empirical evidence supporting these models. The quantum finance formulation of risk-free forward interest rates is extended to the case of risky forward interest rates. The examples of the Singapore and Malaysian forward interest rates are used as specific cases. The main feature of the quantum finance model is that the risky forward interest rates are modeled both a) as a stand-alone case and b) as being driven by the US forward interest rates plus a spread, having its own term structure, above the US forward interest rates. Both the US forward interest rates and the term structure for the spread are modeled by a two-dimensional Euclidean quantum field. As a precursor to the evaluation of a put option on the Singapore coupon bond, the quantum finance model for swaptions is tested using an empirical study of swaptions for the US Dollar, showing that the model is quite accurate. A prediction for the market price of the put option for the Singapore coupon bonds is obtained. The quantum finance model is generalized to study the Malaysian case, and the Malaysian forward interest rates are shown to have anomalies absent in the US and Singapore cases. The model's prediction for a Malaysian interest rate swap is obtained.
Mathematical models for predicting the transport and fate of pollutants in the environment require reactivity parameter values, that is, values of the physical and chemical constants that govern reactivity. Although empirical structure-activity relationships have been developed th...
ERIC Educational Resources Information Center
Trostel, Philip; Walker, Ian
2006-01-01
This paper examines the relationship between the incentives to work and to invest in human capital through education in a lifecycle optimizing model. These incentives are shown to be mutually reinforcing in a simple stylized model. This theoretical prediction is investigated empirically using three large micro datasets covering a broad range of…
Artifact interactions retard technological improvement: An empirical study
Magee, Christopher L.
2017-01-01
Empirical research has shown that performance improvement in many different technological domains occurs exponentially but with widely varying improvement rates. What causes some technologies to improve faster than others? Previous quantitative modeling research has identified artifact interactions, where a design change in one component influences others, as an important determinant of improvement rates. The models predict that the improvement rate for a domain is proportional to the inverse of the domain's interaction parameter. However, no empirical research has previously tested the dependence of improvement rates on artifact interactions. A challenge to testing the dependence is that any method for measuring interactions has to be applicable to a wide variety of technologies. Here we propose a novel patent-based method that is both technology-domain-agnostic and less costly than alternative methods. We use textual content from patent sets in 27 domains to find the influence of interactions on improvement rates. Qualitative analysis identified six specific keywords that signal artifact interactions. Patent sets from each domain were then examined to determine the total count of these six keywords in each domain, giving an estimate of artifact interactions in each domain. We find that improvement rates are positively correlated with the inverse of the total keyword count, with a Pearson correlation coefficient of +0.56 and a p-value of 0.002. The results agree with model predictions and provide, for the first time, empirical evidence that artifact interactions have a retarding effect on the improvement rates of technological domains. PMID:28777798
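A minimal sketch of the headline statistic, using made-up keyword counts and improvement rates; as in the study, the improvement rate is correlated with the inverse of the total keyword count.

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative data only: annual improvement rate k and total
# interaction-keyword count for a handful of domains.
keyword_counts = np.array([120, 85, 40, 200, 60, 150])
improvement_rate = np.array([0.08, 0.12, 0.25, 0.05, 0.18, 0.07])

# The models predict k proportional to the inverse of the interaction
# parameter, so correlate k with 1/count rather than with the raw count.
r, p = pearsonr(1.0 / keyword_counts, improvement_rate)
print(f"Pearson r = {r:+.2f}, p = {p:.3f}")
```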
Integrating animal movement with habitat suitability for estimating dynamic landscape connectivity
van Toor, Mariëlle L.; Kranstauber, Bart; Newman, Scott H.; Prosser, Diann J.; Takekawa, John Y.; Technitis, Georgios; Weibel, Robert; Wikelski, Martin; Safi, Kamran
2018-01-01
Context: High-resolution animal movement data are becoming increasingly available, yet having a multitude of empirical trajectories alone does not allow us to easily predict animal movement. To answer ecological and evolutionary questions at a population level, quantitative estimates of a species' potential to link patches or populations are of importance. Objectives: We introduce an approach that combines movement-informed simulated trajectories with an environment-informed estimate of the trajectories' plausibility to derive connectivity. Using the example of bar-headed geese we estimated migratory connectivity at a landscape level throughout the annual cycle in their native range. Methods: We used tracking data of bar-headed geese to develop a multi-state movement model and to estimate temporally explicit habitat suitability within the species' range. We simulated migratory movements between range fragments and calculated a measure we called route viability. The results are compared to expectations derived from published literature. Results: Simulated migrations matched empirical trajectories in key characteristics such as stopover duration. The viability of the simulated trajectories was similar to that of the empirical trajectories. We found that, overall, migratory connectivity was higher within the breeding areas than within the wintering areas, corroborating previous findings for this species. Conclusions: We show how empirical tracking data and environmental information can be fused for meaningful predictions of animal movements throughout the year and even outside the spatial range of the available data. Beyond predicting migratory connectivity, our framework will prove useful for modelling ecological processes facilitated by animal movement, such as seed dispersal or disease ecology.
How predictable is the anomaly pattern of the Indian summer rainfall?
NASA Astrophysics Data System (ADS)
Li, Juan; Wang, Bin
2016-05-01
Century-long efforts have been devoted to the seasonal forecasting of Indian summer monsoon rainfall (ISMR). Most seasonal forecast studies so far have focused on predicting the total amount of summer rainfall averaged over all of India (i.e., the all-Indian rainfall index, AIRI). However, it is practically more useful to forecast the anomalous seasonal rainfall distribution (anomaly pattern) across India. The open scientific question is to what extent the anomalous rainfall pattern is predictable, and this study attempts to address it. Assessment of the 46-year (1960-2005) multi-model ensemble (MME) hindcast made by five state-of-the-art ENSEMBLES coupled dynamical models reveals that the temporal correlation coefficient (TCC) skill for prediction of the AIRI is 0.43, while the area-averaged TCC skill for prediction of the anomalous rainfall pattern is only 0.16. The present study aims to estimate the predictability of ISMR on regional scales by using the Predictable Mode Analysis method and to develop a set of physics-based empirical (P-E) models for prediction of the ISMR anomaly pattern. We show that the first three observed empirical orthogonal function (EOF) patterns of the ISMR have distinct dynamical origins rooted in an eastern Pacific-type La Nina, a central Pacific-type La Nina, and a cooling center near the dateline, respectively. These equatorial Pacific sea surface temperature anomalies, while located at different longitudes, can all set up a specific teleconnection pattern that affects the Indian monsoon and results in a different rainfall EOF pattern. Furthermore, the dynamical models' skill for predicting the ISMR distribution comes primarily from these three modes. Therefore, these modes can be regarded as potentially predictable modes. If these modes were perfectly predicted, about 51 % of the total observed variability would be potentially predictable. Based on the lead-lag relationships between lower-boundary anomalies and the predictable modes, a set of P-E models is established to predict the principal component of each predictable mode, so that the ISMR anomaly pattern can be predicted as the sum of the predictable modes. Three validation schemes are used to assess the performance of the P-E models' hindcasts and independent forecasts. The validated TCC skills of the P-E models are more than double those of the dynamical models' MME hindcast, suggesting considerable room for improvement in current dynamical predictions. The methodology proposed here can be applied to a wide range of climate prediction and predictability studies. Limitations and future improvements are also discussed.
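The predictable-mode reconstruction described above can be sketched with a generic EOF decomposition; the synthetic data, the choice of three modes, and the stand-in for the P-E regression step are assumptions for illustration.

```python
import numpy as np

# rain: (years, gridpoints) matrix of seasonal ISMR anomalies (synthetic).
rng = np.random.default_rng(0)
rain = rng.standard_normal((46, 300))

# EOF analysis via SVD of the anomaly matrix: U*S gives the principal
# components (PCs) in time, rows of Vt give the spatial EOF patterns.
anom = rain - rain.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
pcs, eofs = U * S, Vt

# Predictable-mode reconstruction: if the first three PCs can be predicted
# (e.g., by physics-based empirical regressions on SST precursors), the
# forecast anomaly pattern is the sum of predicted PCs times their EOFs.
pcs_pred = pcs[:, :3]              # stand-in for the P-E model output
rain_pred = pcs_pred @ eofs[:3, :]
print(rain_pred.shape)             # (46, 300)
```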
NASA Astrophysics Data System (ADS)
Naif, Samer
2018-01-01
Electrical conductivity soundings provide important constraints on the thermal and hydration state of the mantle. Recent seafloor magnetotelluric surveys have imaged the electrical conductivity structure of the oceanic upper mantle over a variety of plate ages. All regions show high conductivity (0.02 to 0.2 S/m) at 50 to 150 km depths that cannot be explained with a sub-solidus dry mantle regime without unrealistic temperature gradients. Instead, the conductivity observations require either a small amount of water stored in nominally anhydrous minerals or the presence of interconnected partial melts. This ambiguity leads to dramatically different interpretations on the origin of the asthenosphere. Here, I apply the damp peridotite solidus together with plate cooling models to determine the amount of H2O needed to induce dehydration melting as a function of depth and plate age. Then, I use the temperature and water content estimates to calculate the electrical conductivity of the oceanic mantle with a two-phase mixture of olivine and pyroxene from several competing empirical conductivity models. This represents the maximum potential conductivity of sub-solidus oceanic mantle at the limit of hydration. The results show that partial melt is required to explain the subset of the high conductivity observations beneath young seafloor, irrespective of which empirical model is applied. In contrast, the end-member empirical models predict either nearly dry (<20 wt ppm H2O) or slightly damp (<200 wt ppm H2O) asthenosphere for observations of mature seafloor. Since the former estimate is too dry compared with geochemical constraints from mid-ocean ridge basalts, this suggests the effect of water on mantle conductivity is less pronounced than currently predicted by the conductive end-member empirical model.
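The two-phase mixing step could, for example, use a Hashin-Shtrikman bound; the abstract does not specify the mixing scheme or calibrations, so the functional form and numbers below are illustrative assumptions only.

```python
def hashin_shtrikman(sigma1, sigma2, f2):
    """Hashin-Shtrikman bound for a two-phase conductivity mixture.

    sigma1: conductivity of the host phase (S/m), sigma2: second phase,
    f2: volume fraction of phase 2. With sigma1 < sigma2 this gives the
    lower bound; swapping the phases gives the upper bound.
    """
    f1 = 1.0 - f2
    return sigma1 + f2 / (1.0 / (sigma2 - sigma1) + f1 / (3.0 * sigma1))

# Illustrative olivine/pyroxene mix (placeholder conductivities, not the
# paper's calibrated empirical models): 60% olivine host, 40% pyroxene.
print(hashin_shtrikman(0.01, 0.05, 0.40))  # effective sigma in S/m
```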
NASA Astrophysics Data System (ADS)
He, Hong-di; Lu, Wei-Zhen; Xue, Yu
2009-12-01
At urban traffic intersections, vehicles frequently stop with idling engines during the red-light period and speed up rapidly during the green-light period. The changes of driving patterns (i.e., idle, acceleration, deceleration and cruising) produce uncertain emissions, and the movement of pedestrians and the influence of wind further randomize pollutant dispersion, making such dynamics too complex to simulate with conventional deterministic causal models. For this reason, a modified semi-empirical box model for predicting roadside PM10 concentrations is proposed in this paper. The model consists of three parts: traffic, emission and dispersion components. The traffic component is developed using a generalized force traffic model to obtain the instantaneous velocity and acceleration of vehicles moving through intersections; from these, the distribution of vehicle emissions in the street canyon during the green-light period is calculated. The dispersion component is a semi-empirical box model combining average wind speed, box height and background concentrations. With these considerations, the proposed model is applied and evaluated using data measured at a busy traffic intersection in Mong Kok, Hong Kong. To test the model, two situations were examined: data sets within a single sunny day and between two sunny days. The predicted values generally agree well with the observed data across time slots, except for several values that are over- or underestimated. Moreover, two types of vehicles, buses and petrol cars, are considered separately. Buses are found to contribute most to the emissions in street canyons, which may be useful in evaluating the impact of vehicle emissions on ambient air quality when there is a significant change in a specific vehicular population.
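A minimal sketch of the dispersion component as a steady-state box balance; the parameter values are placeholders, and the paper's traffic and emission components (which would supply Q) are not reproduced.

```python
def box_model_pm10(Q, u, H, W, C_bg):
    """Steady-state box estimate of roadside concentration.

    Q    : in-box emission rate (g/s), from the traffic/emission components
    u    : average wind speed through the canyon (m/s)
    H, W : box height and crosswind width (m)
    C_bg : background concentration (g/m^3)
    """
    return C_bg + Q / (u * H * W)   # ventilation flux u*H*W dilutes Q

# Illustrative numbers only; the paper's calibrated parameters differ.
c = box_model_pm10(Q=0.02, u=1.5, H=20.0, W=30.0, C_bg=5e-5)
print(f"{c * 1e6:.0f} ug/m^3")      # roughly 72 ug/m^3 here
```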
Extending Theory-Based Quantitative Predictions to New Health Behaviors.
Brick, Leslie Ann D; Velicer, Wayne F; Redding, Colleen A; Rossi, Joseph S; Prochaska, James O
2016-04-01
Traditional null hypothesis significance testing suffers many limitations and is poorly adapted to theory testing. A proposed alternative approach, called Testing Theory-based Quantitative Predictions, uses effect size estimates and confidence intervals to directly test predictions based on theory. This paper replicates findings from previous smoking studies and extends the approach to diet and sun protection behaviors using baseline data from a Transtheoretical Model behavioral intervention (N = 5407). Effect size predictions were developed using two methods: (1) applying refined effect size estimates from previous smoking research or (2) using predictions developed by an expert panel. Thirteen of 15 predictions were confirmed for smoking. For diet, 7 of 14 predictions were confirmed using smoking predictions and 6 of 16 using expert panel predictions. For sun protection, 3 of 11 predictions were confirmed using smoking predictions and 5 of 19 using expert panel predictions. Expert panel predictions and smoking-based predictions poorly predicted effect sizes for diet and sun protection constructs. Future studies should aim to use previous empirical data to generate predictions whenever possible. The best results occur when there have been several iterations of predictions for a behavior, such as with smoking, demonstrating that expected values begin to converge on the population effect size. Overall, the study supports the necessity of strengthening and revising theory with empirical data.
Bryant, Fred B
2016-12-01
This paper introduces a special section of the current issue of the Journal of Evaluation in Clinical Practice that includes a set of 6 empirical articles showcasing a versatile, new machine-learning statistical method, known as optimal data (or discriminant) analysis (ODA), specifically designed to produce statistical models that maximize predictive accuracy. As this set of papers clearly illustrates, ODA offers numerous important advantages over traditional statistical methods-advantages that enhance the validity and reproducibility of statistical conclusions in empirical research. This issue of the journal also includes a review of a recently published book that provides a comprehensive introduction to the logic, theory, and application of ODA in empirical research. It is argued that researchers have much to gain by using ODA to analyze their data. © 2016 John Wiley & Sons, Ltd.
Predicting Students' Homework Environment Management at the Secondary School Level
ERIC Educational Resources Information Center
Xu, Jianzhong
2012-01-01
The present study examined empirical models of variables posited to predict students' homework environment management at the secondary school level. The participants were 866 8th graders from 61 classes and 745 11th graders from 46 classes. Most of the variance in homework environment management occurred at the student level, with classmates'…
Predicting Career Choice in College Women: Empirical Test of a Theory-Based Model.
ERIC Educational Resources Information Center
Eisler, Terri A.; Iverson, Barbara
While investigations of the impact of parental factors on children's career choices have identified variables that appear predictive of career choice in males, variables which influence career choice in females are less well documented. This study used social learning theory as a framework for examining the impact of parental reinforcement,…
An empirical analysis of the corporate call decision
NASA Astrophysics Data System (ADS)
Carlson, Murray Dean
1998-12-01
In this thesis we provide insights into the behavior of financial managers of utility companies by studying their decisions to redeem callable preferred shares. In particular, we investigate whether an option-pricing-based model of the call decision, with managers who maximize shareholder value, does a better job of explaining callable preferred share prices and call decisions than do other models of the decision. To perform these tests, we extend an empirical technique introduced by Rust (1987) to include information from preferred share prices in addition to the call decisions. The model we develop to value the option embedded in a callable preferred share differs from standard models in two ways. First, as suggested in Kraus (1983), we explicitly account for transaction costs associated with a redemption. Second, we account for state variables that are observed by the decision makers but not by the preferred shareholders. We interpret these unobservable state variables as the benefits and costs associated with a change in capital structure that can accompany a call decision. When we add this variable, our empirical model changes from one that predicts exactly when a share should be called to one that predicts the probability of a call as a function of the observable state. These two modifications of the standard model result in predictions of calls, and therefore of callable preferred share prices, that are consistent with several previously unexplained features of the data; we show that the predictive power of the model is improved in a statistical sense by adding these features. The pricing and call probability functions from our model describe call decisions and preferred share prices well for several utilities. Using data from shares of the Pacific Gas and Electric Co. (PGE), we obtain reasonable estimates of the transaction costs associated with a call. Using a formal empirical test, we conclude that the managers of the Pacific Gas and Electric Company clearly take into account the value of the option to delay the call when making their call decisions. Overall, the model seems robust to tests of its specification and describes the data better than do simpler models of the decision-making process. Limitations in the data do not allow us to perform the same tests on a larger cross-section of utility companies. However, we are able to estimate transaction cost parameters for many firms, and these do not seem to vary significantly from those of PGE. This evidence does not cause us to reject our hypothesis that managerial behavior is consistent with a model in which managers maximize shareholder value.
Kershenbaum, Arik; Blank, Lior; Sinai, Iftach; Merilä, Juha; Blaustein, Leon; Templeton, Alan R
2014-06-01
When populations reside within a heterogeneous landscape, isolation by distance may not be a good predictor of genetic divergence if dispersal behaviour, and therefore gene flow, depends on landscape features. Commonly used approaches linking landscape features to gene flow include the least cost path (LCP), random walk (RW), and isolation by resistance (IBR) models. However, none of these models is likely to be the most appropriate for all species and in all environments. We compared the performance of LCP, RW and IBR models of dispersal with the aid of simulations conducted on artificially generated landscapes. We also applied each model to empirical data on the landscape genetics of the endangered fire salamander, Salamandra infraimmaculata, in northern Israel, where conservation planning requires an understanding of the dispersal corridors. Our simulations demonstrate that wide dispersal corridors of the low-cost environment facilitate dispersal in the IBR model, but inhibit dispersal in the RW model. In our empirical study, IBR explained the genetic divergence better than the LCP and RW models (partial Mantel correlation 0.413 for IBR, compared to 0.212 for LCP and 0.340 for RW). Overall dispersal cost in salamanders was also well predicted by the landscape features slope steepness (76%) and elevation (24%). We conclude that fire salamander dispersal is well characterised by IBR predictions. Together with our simulation findings, these results indicate that wide dispersal corridors facilitate, rather than hinder, salamander dispersal. Comparison of genetic data to dispersal model outputs can be a useful technique for inferring dispersal behaviour from population genetic data.
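For readers unfamiliar with IBR, the effective-resistance computation can be sketched from the graph Laplacian; the toy landscape below is an assumption chosen to show the parallel-corridor effect.

```python
import numpy as np

def resistance_distance(conductance, i, j):
    """Effective resistance between nodes i and j of a landscape graph.

    conductance: symmetric (n, n) matrix of edge conductances (inverse
    landscape resistances); zero means no direct link. This is the
    quantity IBR uses in place of a least-cost path length.
    """
    L = np.diag(conductance.sum(axis=1)) - conductance  # graph Laplacian
    Lp = np.linalg.pinv(L)                              # pseudoinverse
    return Lp[i, i] + Lp[j, j] - 2.0 * Lp[i, j]

# Toy 4-node landscape: two parallel corridors between nodes 0 and 3 halve
# the effective resistance, mirroring the wide-corridor effect in the text.
C = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
print(resistance_distance(C, 0, 3))  # 1.0, versus 2.0 for a single corridor
```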
A CFD Study on the Prediction of Cyclone Collection Efficiency
NASA Astrophysics Data System (ADS)
Gimbun, Jolius; Chuah, T. G.; Choong, Thomas S. Y.; Fakhru'L-Razi, A.
2005-09-01
This work presents Computational Fluid Dynamics (CFD) calculations to predict and evaluate the effects of temperature, operating pressure and inlet velocity on the collection efficiency of gas cyclones. The numerical solutions were carried out using a spreadsheet and the commercial CFD code FLUENT 6.0. This paper also reviews four empirical models for the prediction of cyclone collection efficiency, namely Lapple [1], Koch and Licht [2], Li and Wang [3], and Iozia and Leith [4]. All the predictions proved satisfactory when compared with the presented experimental data. The CFD simulations predict the cyclone cut-off size for all operating conditions with a deviation of 3.7% from the experimental data. Overall, the results of the computer modelling exercise demonstrate that the CFD model is the best method for modelling cyclone collection efficiency.
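As an example of the empirical models reviewed, a sketch of the Lapple model (reference [1] in the abstract), with illustrative operating parameters rather than the paper's test conditions.

```python
import numpy as np

def lapple_efficiency(dp, mu, W, Ne, vi, rho_p, rho_g=1.2):
    """Lapple empirical cyclone collection efficiency per particle size.

    dp: particle diameters (m); mu: gas viscosity (Pa s); W: inlet width
    (m); Ne: number of effective turns; vi: inlet velocity (m/s);
    rho_p, rho_g: particle and gas densities (kg/m^3).
    """
    # Cut-off diameter d50: the size collected with 50% efficiency.
    d50 = np.sqrt(9.0 * mu * W / (2.0 * np.pi * Ne * vi * (rho_p - rho_g)))
    return 1.0 / (1.0 + (d50 / dp) ** 2)

# Illustrative operating point (not the paper's conditions).
d = np.array([1e-6, 5e-6, 10e-6])
print(lapple_efficiency(d, mu=1.8e-5, W=0.1, Ne=6, vi=15.0, rho_p=2500.0))
```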
Characterizing attention with predictive network models
Rosenberg, M. D.; Finn, E. S.; Scheinost, D.; Constable, R. T.; Chun, M. M.
2017-01-01
Recent work shows that models based on functional connectivity in large-scale brain networks can predict individuals’ attentional abilities. Some of the first generalizable neuromarkers of cognitive function, these models also inform our basic understanding of attention, providing empirical evidence that (1) attention is a network property of brain computation, (2) the functional architecture that underlies attention can be measured while people are not engaged in any explicit task, and (3) this architecture supports a general attentional ability common to several lab-based tasks and impaired in attention deficit hyperactivity disorder. Looking ahead, connectivity-based predictive models of attention and other cognitive abilities and behaviors may potentially improve the assessment, diagnosis, and treatment of clinical dysfunction. PMID:28238605
A quantitative dynamic systems model of health-related quality of life among older adults
Roppolo, Mattia; Kunnen, E Saskia; van Geert, Paul L; Mulasso, Anna; Rabaglietti, Emanuela
2015-01-01
Health-related quality of life (HRQOL) is a person-centered concept. The analysis of HRQOL is highly relevant in the aged population, which is generally suffering from health decline. Starting from a conceptual dynamic systems model that describes the development of HRQOL in individuals over time, this study aims to develop and test a quantitative dynamic systems model, in order to reveal the possible dynamic trends of HRQOL among older adults. The model is tested in different ways: first, with a calibration procedure to test whether the model produces theoretically plausible results, and second, with a preliminary validation procedure using empirical data of 194 older adults. This first validation tested the prediction that given a particular starting point (first empirical data point), the model will generate dynamic trajectories that lead to the observed endpoint (second empirical data point). The analyses reveal that the quantitative model produces theoretically plausible trajectories, thus providing support for the calibration procedure. Furthermore, the analyses of validation show a good fit between empirical and simulated data. In fact, no differences were found in the comparison between empirical and simulated final data for the same subgroup of participants, whereas the comparison between different subgroups of people resulted in significant differences. These data provide an initial basis of evidence for the dynamic nature of HRQOL during the aging process. Therefore, these data may give new theoretical and applied insights into the study of HRQOL and its development with time in the aging population. PMID:26604722
Zhang, Zheng; Dai, Weimin; Song, Xiaoling; Qiang, Sheng
2014-05-01
Heavy infestations of weedy rice, which can lead to a total loss of the rice harvest, have not previously been predictable in China because of a lack of knowledge about the weedy rice seed bank. We studied the seed-bank dynamics of weedy rice for three consecutive years and analyzed the relationship between seed-bank density and population density in order to predict future weedy rice infestations of direct-seeded rice at six sites along the Yangtze River in Jiangsu Province, China. The seed-bank density of weedy rice at all six sites displayed an increasing trend with seasonal fluctuations. Weedy rice seeds in the 0-10 cm soil layer contributed most to seedling emergence. An exponential curve expressed the relationship between cultivated rice yield loss and adult weedy rice density. Based on data collected during the weedy rice life-cycle, a semi-empirical mathematical model was developed that fits the experimental data well and can be used to predict seed-bank dynamics. By integrating the semi-empirical model and the exponential curve, weedy rice infestation levels and crop losses can be predicted from the seed-bank dynamics, so that practical control measures can be adopted before rice planting. © 2013 Society of Chemical Industry.
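One plausible saturating-exponential form for the yield-loss curve, fitted here with made-up field numbers; the paper's fitted coefficients are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def yield_loss(density, a, b):
    """Saturating-exponential yield loss (%) vs adult weedy rice density."""
    return a * (1.0 - np.exp(-b * density))

# Illustrative observations (plants/m^2, % yield loss); real coefficients
# would come from the Jiangsu survey data described in the abstract.
d = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
loss = np.array([0.0, 18.0, 32.0, 52.0, 74.0, 88.0])

(a, b), _ = curve_fit(yield_loss, d, loss, p0=(100.0, 0.03))
print(f"asymptotic loss a = {a:.0f}%, rate b = {b:.3f} per plant/m^2")
```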
Variance-based selection may explain general mating patterns in social insects.
Rueppell, Olav; Johnson, Nels; Rychtár, Jan
2008-06-23
Female mating frequency is one of the key parameters of social insect evolution. Several hypotheses have been suggested to explain multiple mating and considerable empirical research has led to conflicting results. Building on several earlier analyses, we present a simple general model that links the number of queen matings to variance in colony performance and this variance to average colony fitness. The model predicts selection for multiple mating if the average colony succeeds in a focal task, and selection for single mating if the average colony fails, irrespective of the proximate mechanism that links genetic diversity to colony fitness. Empirical support comes from interspecific comparisons, e.g. between the bee genera Apis and Bombus, and from data on several ant species, but more comprehensive empirical tests are needed.
Predictive information processing in music cognition. A critical review.
Rohrmeier, Martin A; Koelsch, Stefan
2012-02-01
Expectation and prediction constitute central mechanisms in the perception and cognition of music, which have been explored in theoretical and empirical accounts. We review the scope and limits of theoretical accounts of musical prediction with respect to feature-based and temporal prediction. While the concept of prediction is unproblematic for basic single-stream features such as melody, it is not straightforward for polyphonic structures or higher-order features such as formal predictions. Behavioural results based on explicit and implicit (priming) paradigms provide evidence of priming in various domains that may reflect predictive behaviour. Computational learning models, including symbolic (fragment-based), probabilistic/graphical, and connectionist approaches, provide well-specified predictive models of specific features and feature combinations. While models match some experimental results, full-fledged music prediction cannot yet be modelled. Neuroscientific results regarding the early right-anterior negativity (ERAN) and mismatch negativity (MMN) reflect expectancy violations at different levels of processing complexity, and provide some neural evidence for different predictive mechanisms. At present, combinations of neural and computational modelling methodologies are at an early stage and require further research. Copyright © 2012 Elsevier B.V. All rights reserved.
Review of Nearshore Morphologic Prediction
NASA Astrophysics Data System (ADS)
Plant, N. G.; Dalyander, S.; Long, J.
2014-12-01
The evolution of the world's erodible coastlines will determine the balance between the benefits and costs associated with human and ecological utilization of shores, beaches, dunes, barrier islands, wetlands, and estuaries. We would therefore like to predict coastal evolution to guide management and planning of human and ecological responses to coastal changes. After decades of research investment in data collection, theoretical and statistical analysis, and model development, we have a number of empirical, statistical, and deterministic models that can predict the evolution of the shoreline, beaches, dunes, and wetlands over time scales of hours to decades, and even predict the evolution of geologic strata over the course of millennia. Comparisons of predictions to data have demonstrated that these models can have meaningful predictive skill. But these comparisons also highlight the deficiencies in fundamental understanding, formulations, or data that are responsible for prediction errors and uncertainty. Here, we review a subset of predictive models of the nearshore to illustrate tradeoffs in complexity, predictive skill, and sensitivity to input data and parameterization errors. We identify where future improvement in prediction skill will result from improved theoretical understanding, data collection, and model-data assimilation.
Information on human behavior and consumer product use is important for characterizing exposures to chemicals in consumer products and in indoor environments. Traditionally, exposure-assessors have relied on time-use surveys to obtain information on exposure-related behavior. In ...
Empirical Identification of the Major Facets of Conscientiousness
ERIC Educational Resources Information Center
MacCann, Carolyn; Duckworth, Angela Lee; Roberts, Richard D.
2009-01-01
Conscientiousness is often found to predict academic outcomes, but is defined differently by different models of personality. High school students (N = 291) completed a large number of Conscientiousness items from different models and the Big Five Inventory (BFI). Exploratory and confirmatory factor analysis of the items uncovered eight facets:…
USDA-ARS?s Scientific Manuscript database
Leaf area index (LAI) is a critical variable for predicting the growth and productivity of crops. Remote sensing estimates of LAI have relied upon empirical relationships between spectral vegetation indices and ground measurements that are costly to obtain. Radiative transfer model inversion based o...
NASA Astrophysics Data System (ADS)
Carozza, D. A.; Bianchi, D.; Galbraith, E. D.
2015-12-01
Environmental change and the exploitation of marine resources have had profound impacts on marine communities, with potential implications for ocean biogeochemistry and food security. In order to study such global-scale problems, it is helpful to have computationally efficient numerical models that predict the first-order features of fish biomass production as a function of the environment, based on empirical and mechanistic understandings of marine ecosystems. Here we describe the ecological module of the BiOeconomic mArine Trophic Size-spectrum (BOATS) model, which takes an Earth-system approach to modeling fish biomass at the global scale. The ecological model is designed to be used on an Earth System model grid, and determines size spectra of fish biomass by explicitly resolving life history as a function of local temperature and net primary production. Biomass production is limited by the availability of photosynthetic energy to upper trophic levels, following empirical trophic efficiency scalings, and by well-established empirical temperature-dependent growth rates. Natural mortality is calculated using an empirical size-based relationship, while reproduction and recruitment depend on both the food availability to larvae from net primary production and the production of eggs by mature adult fish. We describe predicted biomass spectra and compare them to observations, and conduct a sensitivity study to determine how they change as a function of net primary production and temperature. The model relies on a limited number of parameters compared to similar modeling efforts, while retaining realistic representations of biological and ecological processes, and is computationally efficient, allowing extensive parameter-space analyses even when implemented globally. As such, it enables the exploration of the linkages between ocean biogeochemistry, climate, and upper trophic levels at the global scale, as well as a representation of fish biomass for idealized studies of fisheries.
NASA Astrophysics Data System (ADS)
Carozza, David Anthony; Bianchi, Daniele; Galbraith, Eric Douglas
2016-04-01
Environmental change and the exploitation of marine resources have had profound impacts on marine communities, with potential implications for ocean biogeochemistry and food security. In order to study such global-scale problems, it is helpful to have computationally efficient numerical models that predict the first-order features of fish biomass production as a function of the environment, based on empirical and mechanistic understandings of marine ecosystems. Here we describe the ecological module of the BiOeconomic mArine Trophic Size-spectrum (BOATS) model, which takes an Earth-system approach to modelling fish biomass at the global scale. The ecological model is designed to be used on an Earth-system model grid, and determines size spectra of fish biomass by explicitly resolving life history as a function of local temperature and net primary production. Biomass production is limited by the availability of photosynthetic energy to upper trophic levels, following empirical trophic efficiency scalings, and by well-established empirical temperature-dependent growth rates. Natural mortality is calculated using an empirical size-based relationship, while reproduction and recruitment depend on both the food availability to larvae from net primary production and the production of eggs by mature adult fish. We describe predicted biomass spectra and compare them to observations, and conduct a sensitivity study to determine how they change as a function of net primary production and temperature. The model relies on a limited number of parameters compared to similar modelling efforts, while retaining reasonably realistic representations of biological and ecological processes, and is computationally efficient, allowing extensive parameter-space analyses even when implemented globally. As such, it enables the exploration of the linkages between ocean biogeochemistry, climate, and upper trophic levels at the global scale, as well as a representation of fish biomass for idealized studies of fisheries.
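Two of the empirical ingredients named in the abstract, trophic-efficiency scaling and temperature-dependent growth, can be sketched as follows; the fixed 10% transfer efficiency, the Arrhenius activation energy, and all numbers are textbook-style assumptions, not BOATS calibrations.

```python
import numpy as np

KB = 8.617e-5  # Boltzmann constant, eV/K

def production_at_trophic_level(npp, trophic_level, eff=0.1):
    """Energy reaching a trophic level under a fixed transfer efficiency.

    npp in g C m^-2 yr^-1; eff ~ 0.1 is the textbook transfer efficiency
    (BOATS uses empirically calibrated scalings, not a single constant).
    """
    return npp * eff ** (trophic_level - 1.0)

def arrhenius_growth(g_ref, T, T_ref=283.15, Ea=0.45):
    """Temperature scaling of growth rate, a common metabolic-theory form."""
    return g_ref * np.exp(-Ea / KB * (1.0 / T - 1.0 / T_ref))

print(production_at_trophic_level(200.0, trophic_level=4))  # g C m^-2 yr^-1
print(arrhenius_growth(0.5, T=293.15))                      # warmer -> faster
```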
Diseth, Age; Martinsen, Øyvind
2009-04-01
Theoretical and empirical relations between personality traits and motive dispositions were investigated by comparing the scores of 315 undergraduate psychology students on the NEO Personality Inventory-Revised and the Achievement Motives Scale. Analyses showed that all NEO Personality Inventory-Revised factors except Agreeableness were significantly correlated with the motive for success and the motive to avoid failure. A structural equation model showed that the motive for success was predicted by Extraversion, Openness, Conscientiousness, and Neuroticism (negative relation), and the motive to avoid failure was predicted by Neuroticism and Openness (negative relation). Although both achievement motives were predicted by several personality factors, the motive for success was most strongly predicted by Openness, and the motive to avoid failure was most strongly predicted by Neuroticism. These findings extend previous research on the relations of personality traits and achievement motives and provide a basis for the discussion of motive dispositions in personality. The results also add to the construct validity of the Achievement Motives Scale.
Kyogoku, Daisuke; Sota, Teiji
2017-05-17
Interspecific mating interactions, or reproductive interference, can affect population dynamics, species distribution and abundance. Previous population dynamics models have assumed that the impact of frequency-dependent reproductive interference depends on the relative abundances of species. However, this assumption could be an oversimplification inappropriate for making quantitative predictions. Therefore, a more general model to forecast population dynamics in the presence of reproductive interference is required. Here we developed a population dynamics model to describe the absolute density dependence of reproductive interference, which appears likely when encounter rate between individuals is important. Our model (i) can produce diverse shapes of isoclines depending on parameter values and (ii) predicts weaker reproductive interference when absolute density is low. These novel characteristics can create conditions where coexistence is stable and independent from the initial conditions. We assessed the utility of our model in an empirical study using an experimental pair of seed beetle species, Callosobruchus maculatus and Callosobruchus chinensis. Reproductive interference became stronger with increasing total beetle density even when the frequencies of the two species were kept constant. Our model described the effects of absolute density and showed a better fit to the empirical data than the existing model overall.
Multivariate Statistical Models for Predicting Sediment Yields from Southern California Watersheds
Gartner, Joseph E.; Cannon, Susan H.; Helsel, Dennis R.; Bandurraga, Mark
2009-01-01
Debris-retention basins in Southern California are frequently used to protect communities and infrastructure from the hazards of flooding and debris flow. Empirical models that predict sediment yields are used to determine the size of the basins. Such models have been developed using analyses of records of the amount of material removed from debris-retention basins, associated rainfall amounts, measures of watershed characteristics, and wildfire extent and history. In this study we used multiple linear regression methods to develop two updated empirical models to predict sediment yields for watersheds located in Southern California. The models are based on both new and existing measures of the volume of sediment removed from debris-retention basins, measures of watershed morphology, and characterization of burn severity distributions for watersheds located in Ventura, Los Angeles, and San Bernardino Counties. The first model presented reflects conditions in watersheds located throughout the Transverse Ranges of Southern California and is based on volumes of sediment measured following single storm events with known rainfall conditions. The second model presented is specific to conditions in Ventura County watersheds and was developed using volumes of sediment measured following multiple storm events. To relate sediment volumes to triggering storm rainfall, a rainfall threshold was developed to identify storms likely to have caused sediment deposition. A measured volume of sediment deposited by numerous storms was parsed among the threshold-exceeding storms based on relative storm rainfall totals. The predictive strength of the two models developed here, and of previously published models, was evaluated using a test dataset of 65 measured sediment yields from Southern California. The evaluation indicated that the model developed using information from single storm events in the Transverse Ranges best predicted sediment yields for watersheds in San Bernardino, Los Angeles, and Ventura Counties. This model predicts sediment yield as a function of the peak 1-hour rainfall, the watershed area burned by the most recent fire (at all severities), the time since the most recent fire, watershed area, average gradient, and relief ratio. The model that reflects conditions specific to Ventura County watersheds consistently under-predicted sediment yields and is not recommended for application. Some previously published models performed reasonably well, while others either under-predicted sediment yields or had a larger range of errors in the predicted sediment yields.
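A minimal sketch of the regression setup, with synthetic predictor values standing in for the measured basin data; the predictor list follows the abstract, but the coefficients below are invented.

```python
import numpy as np

# Columns (synthetic stand-ins): ln(peak 1-h rainfall), ln(burned area),
# ln(time since fire), ln(watershed area), average gradient, relief ratio.
rng = np.random.default_rng(1)
X = rng.standard_normal((65, 6))
ln_volume = X @ np.array([0.9, 0.6, -0.3, 0.7, 1.2, 0.8]) + 8.0

# Ordinary least squares with an intercept, i.e. multiple linear
# regression on the (log-transformed) predictors.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, ln_volume, rcond=None)
predicted_volume = np.exp(A @ coef)   # back-transform to m^3
print(coef.round(2))
```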
Implementation of model predictive control for resistive wall mode stabilization on EXTRAP T2R
NASA Astrophysics Data System (ADS)
Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.
2015-10-01
A model predictive control (MPC) method for stabilization of the resistive wall mode (RWM) in the EXTRAP T2R reversed-field pinch is presented. A system identification technique is used to obtain a linearized empirical model of EXTRAP T2R. MPC employs the model for prediction and computes optimal control inputs that satisfy a performance criterion. The use of a linearized form of the model allows for a compact formulation of MPC, implemented on a millisecond timescale, that can be used for real-time control. The design allows the user to arbitrarily suppress any selected Fourier mode. The experimental results from EXTRAP T2R show that the designed and implemented MPC successfully stabilizes the RWM.
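A minimal unconstrained MPC sketch for an identified linear model; the horizon, penalty, and toy one-mode system are assumptions, and the real controller's Fourier-mode selection and constraints are omitted.

```python
import numpy as np

def mpc_control(A, B, x0, N=10, r=1e-3):
    """Finite-horizon MPC for x+ = A x + B u (unconstrained sketch).

    Stacks the horizon into a least-squares problem that drives the
    predicted mode amplitudes to zero with control penalty r, then
    returns the first input of the optimal sequence (receding horizon).
    """
    n, m = B.shape
    # Prediction matrices: x_k = A^k x0 + sum_j A^(k-1-j) B u_j
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            G[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    # Minimize |F x0 + G u|^2 + r |u|^2 over the input sequence u.
    H = G.T @ G + r * np.eye(N * m)
    u = np.linalg.solve(H, -G.T @ (F @ x0))
    return u[:m]   # apply only the first move

# Toy unstable mode standing in for an identified RWM model.
A = np.array([[1.05]]); B = np.array([[0.1]])
print(mpc_control(A, B, x0=np.array([1.0])))
```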
Fiorentine, Robert; Hillhouse, Maureen P
2004-01-01
Although previous research provided empirical support for the main assumptions of the Addicted-Self (A-S) Model of recovery, it is not known whether the model predicts recovery for various gender, ethnic, age, and drug preference populations. It may be that the model predicts recovery only for some groups of addicts and should not be viewed as a general theory of the recovery process. Addressing this concern using data from the Los Angeles Target Cities Drug Treatment Enhancement Project, it was determined that only trivial population differences exist in the primary variables associated with the A-S Model. The A-S Model predicts abstinence with about the same degree of accuracy and parsimony for all populations. The findings indicate that the A-S Model is a general theory of drug and alcohol addictive behavior cessation.
ERIC Educational Resources Information Center
Paton, David
2006-01-01
Rational choice models of teenage sexual behaviour lead to radically different predictions than do models that assume such behaviour is random. Existing empirical evidence has not been able to distinguish conclusively between these competing models. I use regional data from England between 1998 and 2001 to examine the impact of recent increases in…
Development of a rotor wake-vortex model, volume 1
NASA Technical Reports Server (NTRS)
Majjigi, R. K.; Gliebe, P. R.
1984-01-01
Certain empirical rotor wake and turbulence relationships were developed using existing low-speed rotor wake data. A tip vortex model was developed by replacing the annulus wall with a row of image vortices. An axisymmetric turbulence spectrum model, developed in the context of rotor inflow turbulence, was adapted to predict the turbulence spectrum of the stator gust upwash.
ERIC Educational Resources Information Center
Cheung, Ronnie; Vogel, Doug
2013-01-01
Collaborative technologies support group work in project-based environments. In this study, we enhance the technology acceptance model to explain the factors that influence the acceptance of Google Applications for collaborative learning. The enhanced model was empirically evaluated using survey data collected from 136 students enrolled in a…
I present a simple, macroecological model of fish abundance that was used to estimate the total number of non-migratory salmonids within the Willamette River Basin (western Oregon). The model begins with empirical point estimates of net primary production (NPP in g C/m2) in fore...
Development of an Empirical Method for Predicting Jet Mixing Noise of Cold Flow Rectangular Jets
NASA Technical Reports Server (NTRS)
Russell, James W.
1999-01-01
This report presents an empirical method for predicting the jet mixing noise levels of cold flow rectangular jets, with a detailed analysis of the methodology used in developing the prediction method. The empirical correlations are based on narrow band acoustic data for cold flow rectangular model nozzle tests conducted in the NASA Langley Jet Noise Laboratory. There were 20 separate nozzle test operating conditions. For each operating condition, 60 Hz bandwidth microphone measurements were made over a frequency range from 0 to 60,000 Hz. Measurements were performed at 16 polar directivity angles ranging from 45 degrees to 157.5 degrees, and at each polar directivity angle, measurements were made at 9 azimuth directivity angles. The report shows the methods employed to remove screech tones and shock noise from the data in order to obtain the jet mixing noise component. The jet mixing noise was defined in terms of one-third octave band spectral content, polar and azimuth directivity, and overall power level. Empirical correlations were performed over the range of test conditions to define each of these jet mixing noise parameters as a function of aspect ratio, jet velocity, and polar and azimuth directivity angles. The report presents the method for predicting the overall power level, the average polar directivity, the azimuth directivity, and the location and shape of the spectra for jet mixing noise of cold flow rectangular jets.
NASA Astrophysics Data System (ADS)
Emami Niri, Mohammad; Amiri Kolajoobi, Rasool; Khodaiy Arbat, Mohammad; Shahbazi Raz, Mahdi
2018-06-01
Seismic wave velocities, along with petrophysical data, provide valuable information during the exploration and development stages of oil and gas fields. The compressional-wave velocity (VP) is acquired using conventional acoustic logging tools in many drilled wells, but the shear-wave velocity (VS) is recorded using advanced logging tools in only a limited number of wells, mainly because of the high operational costs. In addition, laboratory measurements of seismic velocities on core samples are expensive and time consuming, so alternative methods are often used to estimate VS. Several empirical correlations that predict VS from well logging measurements and petrophysical data such as VP, porosity and density have been proposed, but these empirical relations can only be used in limited cases. Intelligent systems and optimization algorithms offer inexpensive, fast and efficient approaches for predicting VS. In this study, in addition to the widely used Greenberg–Castagna empirical method, we implement three relatively recently developed metaheuristic algorithms to construct linear and nonlinear models for predicting VS: teaching–learning based optimization, imperialist competitive and artificial bee colony algorithms. We demonstrate the applicability and performance of these algorithms for predicting VS from conventional well logs in two field data examples, a sandstone formation from an offshore oil field and a carbonate formation from an onshore oil field. We compare the VS estimated with each of the metaheuristic approaches against observed VS and against the values predicted by the Greenberg–Castagna relations. The results indicate that, for both the sandstone and carbonate case studies, all three metaheuristic algorithms are more efficient and reliable than the empirical correlation for predicting VS. The results also show that in both case studies the artificial bee colony algorithm performs slightly better in VS prediction than the two alternative approaches.
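For reference, the Greenberg–Castagna pure-sandstone line that the metaheuristic models are benchmarked against; the coefficients below are the commonly cited published values (brine-saturated assumption).

```python
def vs_greenberg_castagna_sandstone(vp_km_s):
    """Greenberg-Castagna pure-sandstone line: Vs (km/s) from Vp (km/s).

    Published sandstone end-member coefficients; carbonates and shales
    use different polynomial coefficients, and brine saturation is assumed.
    """
    return 0.80416 * vp_km_s - 0.85588

print(vs_greenberg_castagna_sandstone(3.5))  # ~1.96 km/s
```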
Nelson, Jonathan M.; Shimizu, Yasuyuki; Giri, Sanjay; McDonald, Richard R.
2010-01-01
Uncertainties in flood stage prediction and bed evolution in rivers are frequently associated with the evolution of bedforms over a hydrograph. For the case of flood prediction, the evolution of the bedforms may alter the effective bed roughness, so predictions of stage and velocity based on assuming bedforms retain the same size and shape over a hydrograph will be incorrect. These same effects will produce errors in the prediction of sediment transport and bed evolution, but in this latter case the errors are typically larger, as even small errors in the prediction of bedform form drag can produce very large errors in the predicted rates of sediment motion and the associated erosion and deposition. In situations where flows change slowly, it may be possible to use empirical results that relate bedform morphology to roughness and effective form drag to avoid these errors; but in many cases where the bedforms evolve rapidly and are in disequilibrium with the instantaneous flow, these empirical methods cannot be accurately applied. Over the past few years, computational models for bedform development, migration, and adjustment to varying flows have been developed and tested with a variety of laboratory and field data. These models, which are based on detailed multidimensional flow modeling incorporating large eddy simulation, appear to be capable of predicting bedform dimensions during steady flows as well as their time dependence during discharge variations. In the work presented here, models of this type are used to investigate the impacts of bedforms on stage and bed evolution in rivers during flood hydrographs. The method is shown to reproduce hysteresis in rating curves as well as other more subtle effects in the shape of flood waves. Techniques for combining the bedform evolution models with larger-scale models for river reach flow, sediment transport, and bed evolution are described and used to show the importance of including dynamic bedform effects in river modeling. For example, in calculations for a flood on the Kootenai River, errors of almost 1 m in predicted stage and errors of about a factor of two in the predicted maximum depths of erosion can be attributed to bedform evolution. Thus, treating bedforms explicitly in flood and bed evolution models can decrease uncertainty and increase the accuracy of predictions.
A discrete element method-based approach to predict the breakage of coal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Varun; Sun, Xin; Xu, Wei; ...
2017-08-05
Pulverization is an essential pre-combustion technique employed for solid fuels, such as coal, to reduce particle sizes. Smaller particles ensure rapid and complete combustion, leading to low carbon emissions. Traditionally, the resulting particle size distributions from pulverizers have been determined by empirical or semi-empirical approaches that rely on extensive data gathered over several decades during operations or experiments, with limited predictive capabilities for new coals and processes. Our work presents a Discrete Element Method (DEM)-based computational approach to model coal particle breakage with experimentally characterized coal physical properties. We also examined the effect of select operating parameters on the breakage behavior of coal particles.
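A sketch of the classic empirical route the abstract contrasts with DEM: fitting a Rosin-Rammler size distribution to pulverizer product data. The sieve numbers below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def rosin_rammler(d, d63, n):
    """Cumulative mass fraction passing size d (classic pulverized-coal PSD)."""
    return 1.0 - np.exp(-(d / d63) ** n)

# Illustrative sieve data (microns, fraction passing); real pulverizer
# data would come from the operations/experiments the abstract mentions.
d = np.array([38.0, 75.0, 106.0, 150.0, 212.0, 300.0])
passing = np.array([0.22, 0.45, 0.60, 0.74, 0.87, 0.95])

(d63, n), _ = curve_fit(rosin_rammler, d, passing, p0=(100.0, 1.0))
print(f"d63 = {d63:.0f} um, spread n = {n:.2f}")
```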
Creative Destruction and Subjective Well-Being
Aghion, Philippe; Akcigit, Ufuk; Deaton, Angus; Roulet, Alexandra
2017-01-01
In this paper we analyze the relationship between turnover-driven growth and subjective wellbeing. Our model of innovation-led growth and unemployment predicts that: (i) the effect of creative destruction on expected individual welfare should be unambiguously positive if we control for unemployment, less so if we do not; (ii) job creation has a positive and job destruction has a negative impact on wellbeing; (iii) job destruction has a less negative impact in US Metropolitan Statistical Areas (MSA) within states with more generous unemployment insurance policies; (iv) job creation has a more positive effect on individuals who are more forward-looking. The empirical analysis, using cross-sectional MSA-level and individual-level data, provides empirical support for these predictions. PMID:28713168
Prediction of breakdown strength of cellulosic insulating materials using artificial neural networks
NASA Astrophysics Data System (ADS)
Singh, Sakshi; Mohsin, M. M.; Masood, Aejaz
In this research work, several sets of experiments were performed in a high-voltage laboratory on various cellulosic insulating materials, such as diamond-dotted paper, paper phenolic sheets, cotton phenolic sheets, leatheroid, and presspaper, to measure electrical parameters including breakdown strength, relative permittivity, and loss tangent. Given the dependency of breakdown strength on other physical parameters, different Artificial Neural Network (ANN) models are proposed for the prediction of breakdown strength. The ANN results are compared with those obtained experimentally and with values predicted from an empirical relation suggested by Swanson and Dall. The results indicate that the breakdown strength predicted by the ANN model is in good agreement with the experimental values.
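A minimal sketch of such an ANN regression using scikit-learn; the synthetic inputs and the network size are assumptions, not the paper's measured data or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the measured data: inputs are relative
# permittivity, loss tangent and thickness (mm); target is breakdown
# strength (kV/mm) with an invented linear trend plus noise.
rng = np.random.default_rng(2)
X = rng.uniform([2.0, 0.001, 0.1], [6.0, 0.05, 2.0], size=(80, 3))
y = 30.0 - 3.0 * X[:, 0] - 100.0 * X[:, 1] - 5.0 * X[:, 2] + rng.normal(0, 1, 80)

Xs = StandardScaler().fit_transform(X)   # scale features for the ANN
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(Xs, y)
print(model.predict(Xs[:3]).round(1))    # predicted kV/mm for three samples
```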
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz-Ram model
NASA Astrophysics Data System (ADS)
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications, including increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for estimating the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator compared favorably with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
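A minimal Monte Carlo Kuz-Ram sketch, sampling uncertain inputs and pushing them through the Kuznetsov mean-size equation and the Rosin-Rammler form; the input distributions and the fixed uniformity index are illustrative assumptions.

```python
import numpy as np

def kuz_ram_x50(A, V0, Q, E=100.0):
    """Kuznetsov mean fragment size X50 (cm).

    A: rock factor, V0: rock volume per blasthole (m^3), Q: explosive
    mass per hole (kg), E: relative weight strength of the explosive.
    """
    return A * (V0 / Q) ** 0.8 * Q ** (1.0 / 6.0) * (115.0 / E) ** (19.0 / 30.0)

rng = np.random.default_rng(3)
n_sim = 10_000
# Monte Carlo over uncertain inputs (illustrative distributions, not a
# quarry's measured ones): rock factor and charge mass vary shot to shot.
A = rng.normal(7.0, 1.0, n_sim)
Q = rng.normal(50.0, 5.0, n_sim)
x50 = kuz_ram_x50(A, V0=40.0, Q=Q)

# Percent passing a 30 cm screen from the Rosin-Rammler form of Kuz-Ram,
# with a fixed uniformity index (Cunningham's n depends on the pattern).
n_uni = 1.5
passing_30 = 1.0 - np.exp(-0.693 * (30.0 / x50) ** n_uni)
print(f"mean X50 = {x50.mean():.1f} cm, P(<30 cm) = {passing_30.mean():.2f}")
```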
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
To predict the nonstationary and random output power of photovoltaic systems, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and a support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets for the prediction date, the time series of output power on a similar day is built at 15-minute intervals. Second, the output power time series is decomposed using EMD into a series of components at different scales, comprising intrinsic mode function components IMFn and a trend component Res. A corresponding SVM prediction model is established for each IMF component and the trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed to obtain the predicted output power of the grid-connected PV system. The prediction model is tested with actual data, and the results show that the power prediction model based on EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than both the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803
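A compact sketch of the EMD-plus-SVM pipeline, assuming the PyEMD package (pip name EMD-signal) and scikit-learn; the ABC parameter optimization, the similar-day selection, and the signal itself are simplified or invented here.

```python
import numpy as np
from PyEMD import EMD          # from the EMD-signal package
from sklearn.svm import SVR

# Synthetic PV output series at 15-min resolution (one "similar day").
rng = np.random.default_rng(4)
t = np.arange(96)
power = np.clip(np.sin((t - 24) * np.pi / 48), 0, None) + 0.05 * rng.standard_normal(96)

imfs = EMD().emd(power)        # IMF components (residue as the last row)

def lagged(x, lags=4):
    """Map the previous `lags` samples of a component to the next sample."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

# One SVR per component (fixed hyperparameters stand in for the ABC
# search), then sum the component predictions to reconstruct the output.
pred = np.zeros(96 - 4)
for comp in imfs:
    X, y = lagged(comp)
    pred += SVR(C=10.0, gamma="scale").fit(X, y).predict(X)  # in-sample
print(pred[:5].round(3))
```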
Ohshiro, Tomokazu; Angelaki, Dora E; DeAngelis, Gregory C
2017-07-19
Studies of multisensory integration by single neurons have traditionally emphasized empirical principles that describe nonlinear interactions between inputs from two sensory modalities. We previously proposed that many of these empirical principles could be explained by a divisive normalization mechanism operating in brain regions where multisensory integration occurs. This normalization model makes a critical diagnostic prediction: a non-preferred sensory input from one modality, which activates the neuron on its own, should suppress the response to a preferred input from another modality. We tested this prediction by recording from neurons in macaque area MSTd that integrate visual and vestibular cues regarding self-motion. We show that many MSTd neurons exhibit the diagnostic form of cross-modal suppression, whereas unisensory neurons in area MT do not. The normalization model also fits population responses better than a model based on subtractive inhibition. These findings provide strong support for a divisive normalization mechanism in multisensory integration.
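The diagnostic logic can be sketched in a few lines, assuming a textbook divisive-normalization form (exponent 2, unit semisaturation constant, hypothetical modality weights) rather than the fitted MSTd model:

    import numpy as np

    def response(vis, vest, w_vis, w_vest, pool, n=2.0, sigma=1.0):
        """Divisive normalization: squared linear cue combination divided by
        the pooled drive of the whole population plus a semisaturation term."""
        drive = (w_vis * vis + w_vest * vest) ** n
        return drive / (sigma ** n + pool)

    weights = [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]   # heterogeneous modality weights

    def pooled(vis, vest, n=2.0):
        return sum((wv * vis + wt * vest) ** n for wv, wt in weights)

    vis, vest = 1.0, 1.0
    # Vision-dominated neuron: a vestibular input it barely prefers still inflates
    # the normalization pool, so the bimodal response falls below the visual-alone
    # response -- the diagnostic cross-modal suppression.
    r_vis = response(vis, 0.0, 0.9, 0.1, pooled(vis, 0.0))
    r_bi = response(vis, vest, 0.9, 0.1, pooled(vis, vest))
    print(r_vis, r_bi)                               # r_bi < r_vis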
Modelled drift patterns of fish larvae link coastal morphology to seabird colony distribution
Sandvik, Hanno; Barrett, Robert T.; Erikstad, Kjell Einar; Myksvoll, Mari S.; Vikebø, Frode; Yoccoz, Nigel G.; Anker-Nilssen, Tycho; Lorentsen, Svein-Håkon; Reiertsen, Tone K.; Skarðhamar, Jofrid; Skern-Mauritzen, Mette; Systad, Geir Helge
2016-01-01
Colonial breeding is an evolutionary puzzle, as the benefits of breeding in high densities are still not fully explained. Although the dynamics of existing colonies are increasingly understood, few studies have addressed the initial formation of colonies, and empirical tests are rare. Using a high-resolution larval drift model, we here document that the distribution of seabird colonies along the Norwegian coast can be explained by variations in the availability and predictability of fish larvae. The modelled variability in concentration of fish larvae is, in turn, predicted by the topography of the continental shelf and coastline. The advection of fish larvae along the coast translates small-scale topographic characteristics into a macroecological pattern, viz. the spatial distribution of top-predator breeding sites. Our findings provide empirical corroboration of the hypothesis that seabird colonies are founded in locations that minimize travel distances between breeding and foraging locations, thereby enabling optimal foraging by central-place foragers. PMID:27173005
Semi-empirical proton binding constants for natural organic matter
NASA Astrophysics Data System (ADS)
Matynia, Anthony; Lenoir, Thomas; Causse, Benjamin; Spadini, Lorenzo; Jacquet, Thierry; Manceau, Alain
2010-03-01
Average proton binding constants (KH,i) for structure models of humic (HA) and fulvic (FA) acids were estimated semi-empirically by breaking down the macromolecules into reactive structural units (RSUs), and calculating KH,i values of the RSUs using the linear free energy relationships (LFER) of Hammett. Predicted log KH,COOH and log KH,Ph-OH are 3.73 ± 0.13 and 9.83 ± 0.23 for HA, and 3.80 ± 0.20 and 9.87 ± 0.31 for FA. The predicted constants for phenolic-type sites (Ph-OH) are generally higher than those derived from potentiometric titrations, but the difference may not be significant in view of the considerable uncertainty of the acidity constants determined from acid-base measurements at high pH. The predicted constants for carboxylic-type sites agree well with titration data analyzed with Model VI (4.10 ± 0.16 for HA, 3.20 ± 0.13 for FA; Tipping, 1998), the Impermeable Sphere model (3.50-4.50 for HA; Avena et al., 1999), and the Stockholm Humic Model (4.10 ± 0.20 for HA, 3.50 ± 0.40 for FA; Gustafsson, 2001), but differ by about one log unit from those obtained by Milne et al. (2001) with the NICA-Donnan model (3.09 ± 0.51 for HA, 2.65 ± 0.43 for FA) and used to derive recommended generic values. To clarify this ambiguity, 10 high-quality titration data sets from Milne et al. (2001) were re-analyzed with the new predicted equilibrium constants. The data are described equally well with the previous and new sets of values (R2 ⩾ 0.98), not necessarily because the NICA-Donnan model is overparametrized, but because titration lacks the sensitivity needed to quantify the full binding properties of humic substances. Correlations between NICA-Donnan parameters are discussed, but general progress is impeded by the unknown number of independent parameters that can be varied during regression of a model fit to titration data. The high consistency between predicted and experimental KH,COOH values, excluding those of Milne et al. (2001), lends confidence to the proposed semi-empirical structural approach and its usefulness for assessing the plausibility of proton stability constants derived from simulations of titration data.
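As an illustration of the LFER step, the sketch below applies the Hammett relation pKa = pKa0 - rho*sum(sigma) to a substituted benzoic-acid-like unit; the parent pKa, reaction constant, and substituent constants are textbook-style placeholders, not the paper's RSU parameterization:

    # Hammett LFER for a carboxylic reactive structural unit (illustrative only)
    pKa_parent = 4.20       # unsubstituted benzoic acid
    rho = 1.0               # reaction constant for benzoic acid ionization
    sigma = {"m-OH": 0.12, "p-COOH": 0.45, "p-OCH3": -0.27}   # substituent constants

    def pKa_substituted(substituents):
        """pKa = pKa0 - rho * sum(sigma_i); a lower pKa means a stronger acid."""
        return pKa_parent - rho * sum(sigma[s] for s in substituents)

    print(pKa_substituted(["m-OH", "p-COOH"]))   # ~3.6, of the order of log KH,COOH above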
Systematics of capture and fusion dynamics in heavy-ion collisions
NASA Astrophysics Data System (ADS)
Wang, Bing; Wen, Kai; Zhao, Wei-Juan; Zhao, En-Guang; Zhou, Shan-Gui
2017-03-01
We perform a systematic study of capture excitation functions by using an empirical coupled-channel (ECC) model. In this model, a barrier distribution is used to effectively take into account the effects of couplings between the relative motion and intrinsic degrees of freedom. The shape of the barrier distribution is of an asymmetric Gaussian form. The effect of neutron transfer channels is also included in the barrier distribution. Based on the interaction potential between the projectile and the target, empirical formulas are proposed to determine the parameters of the barrier distribution. Theoretical estimates for barrier distributions and calculated capture cross sections together with experimental cross sections of 220 reaction systems with 182 ⩽ Z_P Z_T ⩽ 1640 are tabulated. The results show that the ECC model together with the empirical formulas for the parameters of the barrier distribution works quite well in the energy region around the Coulomb barrier. This ECC model can provide predictions of capture cross sections for the synthesis of superheavy nuclei as well as valuable information on capture and fusion dynamics.
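The structure of such a calculation can be sketched by folding Wong's single-barrier cross-section formula over an asymmetric-Gaussian barrier distribution; the barrier height, widths, radius, and curvature below are illustrative numbers, not the paper's empirical formulas:

    import numpy as np

    def asym_gaussian(B, Bm, wL, wR):
        """Asymmetric Gaussian barrier distribution: width wL below the peak Bm, wR above."""
        w = np.where(B < Bm, wL, wR)
        D = np.exp(-((B - Bm) / w) ** 2)
        return D / np.trapz(D, B)               # normalize to unit area

    def wong_xs(E, B, R=10.0, hw=4.0):
        """Wong's formula for the capture cross section at a single barrier (mb)."""
        return 10 * (hw * R**2 / (2 * E)) * np.log1p(np.exp(2 * np.pi * (E - B) / hw))

    B = np.linspace(50, 90, 400)                # MeV; illustrative grid
    D = asym_gaussian(B, Bm=70.0, wL=3.0, wR=5.0)

    for E in (62.0, 70.0, 78.0):                # below, at, above the average barrier
        sigma = np.trapz(D * wong_xs(E, B), B)  # fold Wong over the barrier distribution
        print(f"E = {E:.0f} MeV -> sigma_cap = {sigma:.1f} mb")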
A Bayesian estimation of a stochastic predator-prey model of economic fluctuations
NASA Astrophysics Data System (ADS)
Dibeh, Ghassan; Luchinsky, Dmitry G.; Luchinskaya, Daria D.; Smelyanskiy, Vadim N.
2007-06-01
In this paper, we develop a Bayesian framework for the empirical estimation of the parameters of one of the best known nonlinear models of the business cycle: The Marx-inspired model of a growth cycle introduced by R. M. Goodwin. The model predicts a series of closed cycles representing the dynamics of labor's share and the employment rate in the capitalist economy. The Bayesian framework is used to empirically estimate a modified Goodwin model. The original model is extended in two ways. First, we allow for exogenous periodic variations of the otherwise steady growth rates of the labor force and productivity per worker. Second, we allow for stochastic variations of those parameters. The resultant modified Goodwin model is a stochastic predator-prey model with periodic forcing. The model is then estimated using a newly developed Bayesian estimation method on data sets representing growth cycles in France and Italy during the years 1960-2005. Results show that inference of the parameters of the stochastic Goodwin model can be achieved. The comparison of the dynamics of the Goodwin model with the inferred values of parameters demonstrates quantitative agreement with the growth cycle empirical data.
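A minimal Euler-Maruyama sketch of the modified model: the labor share and employment rate follow the Goodwin equations with a linear Phillips curve, the productivity growth rate is periodically forced, and small multiplicative noise turns it into a stochastic predator-prey system; all parameter values are illustrative, not the inferred French or Italian estimates:

    import numpy as np

    alpha0, beta, sigma_k = 0.02, 0.01, 3.0   # productivity growth, labor-force growth, capital-output ratio
    gamma, r = 0.5, 0.6                       # linear Phillips curve: wage growth = -gamma + r*v
    amp, period, noise = 0.3, 8.0, 0.002      # periodic forcing and diffusion strength

    dt, T = 0.01, 45.0
    u, v = 0.8, 0.9                           # labor share, employment rate
    rng = np.random.default_rng(0)

    path = []
    for ti in np.arange(0, T, dt):
        alpha = alpha0 * (1 + amp * np.sin(2 * np.pi * ti / period))  # forced growth rate
        du = u * (-gamma + r * v - alpha) * dt + noise * u * rng.normal() * np.sqrt(dt)
        dv = v * ((1 - u) / sigma_k - alpha - beta) * dt + noise * v * rng.normal() * np.sqrt(dt)
        u, v = u + du, v + dv
        path.append((u, v))
    print(path[-1])                           # (u, v) traces noisy closed predator-prey-like cycles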
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP), which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP. With the slow development of the tamponade, the SBM model estimates are seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe in which they would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide early identification of clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
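A minimal sketch of the SBM estimate, assuming the generic kernel-similarity formulation (the estimate is a similarity-weighted blend of remembered normal states, and a growing residual flags deterioration); the memory matrix, variables, and bandwidth are hypothetical:

    import numpy as np

    def sbm_estimate(D, x, h=1.0):
        """Similarity-Based Modeling, minimal form: blend the rows of the memory
        matrix D (remembered normal states) weighted by kernel similarity to x."""
        sims = np.exp(-np.sum((D - x) ** 2, axis=1) / h)
        w = sims / sims.sum()
        return w @ D                            # estimated "normal" version of x

    # Hypothetical memory of normal hemodynamics: [heart rate, MAP, SpO2]
    D = np.array([[70, 90, 98], [80, 85, 97], [65, 95, 99], [75, 88, 98]], float)
    mu, sd = D.mean(0), D.std(0)
    D_n = (D - mu) / sd                         # z-score normalize the memory

    x_new = (np.array([88.0, 78.0, 95.0]) - mu) / sd   # drifting multivariate state
    residual = np.linalg.norm(x_new - sbm_estimate(D_n, x_new))
    print(residual)   # large residual although each variable is individually "normal"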
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, L.; Britt, J.; Birkmire, R.
ITN Energy Systems, Inc., and Global Solar Energy, Inc., assisted by NREL's PV Manufacturing R&D program, have continued to advance CIGS production technology by developing trajectory-oriented predictive/control models, fault-tolerance control, control platform development, in-situ sensors, and process improvements. Modeling activities included developing physics-based and empirical models for CIGS and sputter-deposition processing, implementing model-based control, and applying predictive models to the construction of new evaporation sources and for control. Model-based control is enabled by implementing reduced or empirical models into a control platform. Reliability improvement activities include implementing preventive maintenance schedules; detecting failed sensors/equipment and reconfiguring to continue processing; and systematic development of fault prevention and reconfiguration strategies for the full range of CIGS PV production deposition processes. In-situ sensor development activities have resulted in improved control and indicated the potential for enhanced process status monitoring and control of the deposition processes. Substantial process improvements have been made, including significant improvement in CIGS uniformity, thickness control, efficiency, yield, and throughput. In large measure, these gains have been driven by process optimization, which in turn has been enabled by control and reliability improvements due to this PV Manufacturing R&D program.
Validity of empirical models of exposure in asphalt paving
Burstyn, I; Boffetta, P; Burr, G; Cenni, A; Knecht, U; Sciarra, G; Kromhout, H
2002-01-01
Aims: To investigate the validity of empirical models of exposure to bitumen fume and benzo(a)pyrene, developed for a historical cohort study of asphalt paving in Western Europe. Methods: Validity was evaluated using data from the USA, Italy, and Germany not used to develop the original models. Correlation between observed and predicted exposures was examined. Bias and precision were estimated. Results: The models were imprecise. Furthermore, predicted bitumen fume exposures tended to be lower (-70%) than concentrations found during paving in the USA. This apparent bias might be attributed to differences between Western European and USA paving practices. Evaluation of the validity of the benzo(a)pyrene exposure model revealed an effect of re-paving similar to that expected and a larger than expected effect of tar use. Overall, the benzo(a)pyrene models underestimated exposures by 51%. Conclusions: Possible bias as a result of underestimation of the impact of coal tar on benzo(a)pyrene exposure levels must be explored in sensitivity analyses of the exposure-response relation. Validation of the models, albeit limited, increased our confidence in their applicability to exposure assessment in the historical cohort study of cancer risk among asphalt workers. PMID:12205236
NASA Technical Reports Server (NTRS)
Makel, Darby B.; Rosenberg, Sanders D.
1990-01-01
The formation and deposition of carbon (soot) was studied in the Carbon Deposition Model for Oxygen-Hydrocarbon Combustion Program. An empirical, 1-D model for predicting soot formation and deposition in LO2/hydrocarbon gas generators/preburners was derived. The experimental data required to anchor the model were identified and a test program to obtain the data was defined. In support of the model development, cold flow mixing experiments using a high injection density injector were performed. The purpose of this investigation was to advance the state-of-the-art in LO2/hydrocarbon gas generator design by developing a reliable engineering model of gas generator operation. The model was formulated to account for the influences of fluid dynamics, chemical kinetics, and gas generator hardware design on soot formation and deposition.
NASA Astrophysics Data System (ADS)
Shaman, J.; Stieglitz, M.; Zebiak, S.; Cane, M.; Day, J. F.
2002-12-01
We present an ensemble local hydrologic forecast derived from the seasonal forecasts of the International Research Institute (IRI) for Climate Prediction. Three-month seasonal forecasts were used to resample historical meteorological conditions and generate ensemble forcing datasets for a TOPMODEL-based hydrology model. Eleven retrospective forecasts were run at a Florida site and a New York site. Forecast skill was assessed for mean area modeled water table depth (WTD), i.e. near-surface soil wetness conditions, and compared with WTD simulated with observed data. Hydrology model forecast skill was evident at the Florida site but not at the New York site. At the Florida site, persistence of hydrologic conditions and local skill of the IRI seasonal forecast contributed to the local hydrologic forecast skill. This forecast will permit probabilistic prediction of future hydrologic conditions. At the Florida site, we have also quantified the link between modeled WTD (i.e. drought) and the amplification and transmission of St. Louis encephalitis virus (SLEV). We derive an empirical relationship between modeled land surface wetness and levels of SLEV transmission associated with human clinical cases. We then combine the seasonal forecasts of local, modeled WTD with this empirical relationship and produce retrospective probabilistic seasonal forecasts of epidemic SLEV transmission in Florida. Epidemic SLEV transmission forecast skill is demonstrated. These findings will permit real-time forecasts of drought and resultant SLEV transmission in Florida.
Henriques, D. A.; Ladbury, J. E.; Jackson, R. M.
2000-01-01
The prediction of binding energies from the three-dimensional (3D) structure of a protein-ligand complex is an important goal of biophysics and structural biology. Here, we critically assess the use of empirical, solvent-accessible surface-area-based calculations for the prediction of the binding of the Src SH2 domain with a series of tyrosyl phosphopeptides based on the high-affinity ligand from the hamster middle T antigen (hmT), where the residue in the pY+3 position has been changed. Two other peptides based on the C-terminal regulatory site of the Src protein and the platelet-derived growth factor receptor (PDGFR) are also investigated. Here, we take into account the effects of proton linkage on binding, and test five different surface-area-based models that include different treatments of the contributions of conformational change and protein solvation. These differences relate to the treatment of conformational flexibility in the peptide ligand and the inclusion of proximal ordered solvent molecules in the surface area calculations. This allowed the calculation of a range of thermodynamic state functions (ΔCp, ΔS, ΔH, and ΔG) directly from structure. Comparison with the experimentally derived data shows little agreement for the interaction of the Src SH2 domain and the range of tyrosyl phosphopeptides. Furthermore, the adoption of the different models to treat conformational change and solvation has a dramatic effect on the calculated thermodynamic functions, making the predicted binding energies highly model dependent. While empirical, solvent-accessible surface-area-based calculations are becoming widely adopted to interpret thermodynamic data, this study highlights potential problems with the application and interpretation of this type of approach. There is undoubtedly some agreement between predicted and experimentally determined thermodynamic parameters; however, the tolerance of this approach is not sufficient to make it ubiquitously applicable. PMID:11106171
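The flavor of the calculation, and its model dependence, can be sketched as below; the coefficients are only of the order used in the buried-surface-area literature and the area values are invented, so this is purely a schematic of why solvation-model choices swing the predictions:

    # Empirical surface-area model, minimal sketch. Coefficients (cal/mol/K/A^2)
    # and buried-area values are placeholders, not the paper's numbers.
    a_np, a_p = 0.32, -0.14            # nonpolar / polar heat-capacity coefficients

    def delta_cp(dasa_np, dasa_p):
        """Heat-capacity change from buried nonpolar and polar surface area."""
        return a_np * dasa_np + a_p * dasa_p

    # Same complex, two solvation treatments: counting proximal ordered waters
    # as buried changes the apparent areas and hence every derived function.
    print(delta_cp(700.0, 400.0))      # model A: ordered waters excluded
    print(delta_cp(850.0, 620.0))      # model B: proximal waters counted as buried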
Fractal Theory for Permeability Prediction, Venezuelan and USA Wells
NASA Astrophysics Data System (ADS)
Aldana, Milagrosa; Altamiranda, Dignorah; Cabrera, Ana
2014-05-01
Inferring petrophysical parameters such as permeability, porosity, water saturation, capillary pressure, etc., from the analysis of well logs or other available core data has always been of critical importance in the oil industry. Permeability in particular, which is considered to be a complex parameter, has been inferred using both empirical and theoretical techniques. The main goal of this work is to predict permeability values in different wells using fractal theory, based on a method proposed by Pape et al. (1999). This approach uses the relationship between permeability and the geometric form of the pore space of the rock. The method is based on the modified Kozeny-Carman equation and a fractal pattern, which allows permeability to be determined as a function of the cementation exponent, porosity and the fractal dimension. Data from wells located in Venezuela and the United States of America are analyzed. Employing porosity and permeability data obtained from core samples, and applying the fractal theory method, we calculated the prediction equations for each well. Initially, this was achieved by training with 50% of the data available for each well. Afterwards, these equations were tested by inferring over 100% of the data to analyze possible trends in their distribution. This procedure gave excellent results in all the wells in spite of their geographic distance, generating permeability models with the potential to accurately predict permeability logs in the remaining parts of the well for which there are no core samples, using even porosity logs alone. Additionally, empirical models were used to determine permeability, and the results were compared with those obtained by applying the fractal method. The results indicated that, although there are empirical equations that give a proper adjustment, the predictions obtained using fractal theory give a better fit to the core reference data.
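A sketch of the Pape et al. (1999) fractal permeability-porosity relation as it might be used in such a workflow; the three coefficients shown are the published "average sandstone" values, and in the paper's procedure they would be refit on the 50% training split of each well:

    import numpy as np

    def pape_permeability(phi, A=31.0, B=7463.0, C=191.0):
        """Fractal permeability-porosity relation (Pape et al., 1999), k in nm^2.
        A, B, C are the published 'average sandstone' coefficients; site-specific
        values are obtained by regression on core data."""
        phi = np.asarray(phi)
        return A * phi + B * phi**2 + C * (10 * phi) ** 10

    phi = np.array([0.05, 0.10, 0.20])         # fractional porosity from logs/cores
    k_nm2 = pape_permeability(phi)
    k_mD = k_nm2 * 1.013e-3                    # rough conversion nm^2 -> millidarcy
    print(k_mD)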
An Empirical Jet-Surface Interaction Noise Model with Temperature and Nozzle Aspect Ratio Effects
NASA Technical Reports Server (NTRS)
Brown, Cliff
2015-01-01
An empirical model for jet-surface interaction (JSI) noise produced by a round jet near a flat plate is described and the resulting model evaluated. The model covers unheated and hot jet conditions (1 less than or equal to jet total temperature ratio less than or equal to 2.7) in the subsonic range (0.5 less than or equal to M(sub a) less than or equal to 0.9), surface lengths 0.6 less than or equal to (axial distance from jet exit to surface trailing edge (inches)/nozzle exit diameter) less than or equal to 10, and surface standoff distances (0 less than or equal to (radial distance from jet lipline to surface (inches)/axial distance from jet exit to surface trailing edge (inches)) less than or equal to 1) using only second-order polynomials to provide predictable behavior. The JSI noise model is combined with an existing jet mixing noise model to produce exhaust noise predictions. Fit quality metrics and comparisons between the predicted and experimental data indicate that the model is suitable for many system-level studies. A first-order correction to the JSI source model that accounts for the effect of nozzle aspect ratio is also explored. This correction is based on changes to the potential core length and frequency scaling associated with rectangular nozzles up to 8:1 aspect ratio. However, more work is needed to refine these findings into a formal model.
Optimal temperature for malaria transmission is dramatically lower than previously predicted
Mordecai, Erin A.; Paaijmans, Krijn P.; Johnson, Leah R.; Balzer, Christian; Ben-Horin, Tal; de Moor, Emily; McNally, Amy; Pawar, Samraat; Ryan, Sadie J.; Smith, Thomas C.; Lafferty, Kevin D.
2013-01-01
The ecology of mosquito vectors and malaria parasites affect the incidence, seasonal transmission and geographical range of malaria. Most malaria models to date assume constant or linear responses of mosquito and parasite life-history traits to temperature, predicting optimal transmission at 31 °C. These models are at odds with field observations of transmission dating back nearly a century. We build a model with more realistic ecological assumptions about the thermal physiology of insects. Our model, which includes empirically derived nonlinear thermal responses, predicts optimal malaria transmission at 25 °C (6 °C lower than previous models). Moreover, the model predicts that transmission decreases dramatically at temperatures > 28 °C, altering predictions about how climate change will affect malaria. A large data set on malaria transmission risk in Africa validates both the 25 °C optimum and the decline above 28 °C. Using these more accurate nonlinear thermal-response models will aid in understanding the effects of current and future temperature regimes on disease transmission.
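The nonlinear-trait logic can be sketched as follows: Briere and concave-quadratic thermal responses are combined into a schematic relative R0. The constants below are illustrative, not the paper's fitted traits; with them the optimum already falls several degrees below 31 °C, and the paper's fitted responses place it at 25 °C:

    import numpy as np

    def briere(T, c, T0, Tm):
        """Briere thermal response: asymmetric hump, zero outside (T0, Tm)."""
        return np.where((T > T0) & (T < Tm),
                        c * T * (T - T0) * np.sqrt(np.clip(Tm - T, 0, None)), 0.0)

    def quad(T, q, T0, Tm):
        """Concave-down quadratic response, zero outside (T0, Tm)."""
        return np.clip(-q * (T - T0) * (T - Tm), 0.0, None)

    T = np.linspace(17, 33, 321)
    a = briere(T, 2.0e-4, 13.0, 40.0)       # biting rate
    bc = quad(T, 6.0e-3, 14.0, 35.0)        # vector competence
    pdr = briere(T, 1.0e-4, 15.0, 36.0)     # parasite development rate
    mu = 1.0 / np.clip(quad(T, 0.05, 10.0, 38.0), 1e-6, None)   # mortality = 1/lifespan

    R0sq = a**2 * bc * np.exp(-mu / pdr) / mu    # schematic R0^2, constant factors dropped
    print(f"optimal transmission near {T[np.argmax(R0sq)]:.1f} C")   # well below 31 C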
Hydroplaning on multi lane facilities.
DOT National Transportation Integrated Search
2012-11-01
The primary findings of this research can be highlighted as follows. Models that provide estimates of wet weather speed reduction, as well as analytical and empirical methods for the prediction of hydroplaning speeds of trailers and heavy trucks, wer...
Predicting language diversity with complex networks.
Raducha, Tomasz; Gubiec, Tomasz
2018-01-01
We analyze a model of social interactions with coevolution of the topology and the states of the nodes. This model can be interpreted as a model of language change. We propose different rewiring mechanisms and perform numerical simulations for each. The results obtained are compared with empirical data gathered from two online databases and an anthropological study of the Solomon Islands. We study the behavior of the number of languages for different system sizes and find that only local rewiring, i.e. triadic closure, is capable of reproducing the empirical results in a qualitative manner. Furthermore, we resolve the contradiction between previous models and the Solomon Islands case. Our results demonstrate the importance of the topology of the network and of the rewiring mechanism in the process of language change.
Development of an empirically based dynamic biomechanical strength model
NASA Technical Reports Server (NTRS)
Pandya, A.; Maida, J.; Aldridge, A.; Hasson, S.; Woolford, B.
1992-01-01
The focus here is on the development of a dynamic strength model for humans. Our model is based on empirical data. The shoulder, elbow, and wrist joints are characterized in terms of maximum isolated torque, position, and velocity in all rotational planes. This information is reduced by a least squares regression technique into a table of single variable second degree polynomial equations determining the torque as a function of position and velocity. The isolated joint torque equations are then used to compute forces resulting from a composite motion, which in this case is a ratchet wrench push and pull operation. What is presented here is a comparison of the computed or predicted results of the model with the actual measured values for the composite motion.
Constructing and Deconstructing Concepts.
Doan, Charles A; Vigo, Ronaldo
2016-09-01
Several empirical investigations have explored whether observers prefer to sort sets of multidimensional stimuli into groups by employing one-dimensional or family-resemblance strategies. Although one-dimensional sorting strategies have been the prevalent finding for these unsupervised classification paradigms, several researchers have provided evidence that the choice of strategy may depend on the particular demands of the task. To account for this disparity, we propose that observers extract relational patterns from stimulus sets that facilitate the development of optimal classification strategies for assigning category membership. We conducted a novel constrained categorization experiment to empirically test this hypothesis by instructing participants to either add or remove objects from presented categorical stimuli. We employed generalized representational information theory (GRIT; Vigo, 2011b, 2013a, 2014) and its associated formal models to predict and explain how human beings chose to modify these categorical stimuli. Additionally, we compared model performance to predictions made by a leading prototypicality measure in the literature.
NASA Astrophysics Data System (ADS)
Perren, G.; Vázquez, R. A.; Navone, H.
This paper analyses the reliability of the reddening estimates, extended to the entire sky, from two new Galaxy models built by Amores & Lépine (2005), using as a source of empirical data the WEBDA database of open star clusters. We also used the 100 μm maps by Schlegel et al. (1998). It is concluded that the predictions of the Amores & Lépine models correlate well with empirical values up to relatively small distances from the Sun, while the Schlegel et al. model does not match the reddening estimates within the Milky Way. FULL TEXT IN SPANISH
ERIC Educational Resources Information Center
Pankau, Brian L.
2009-01-01
This empirical study evaluates the document category prediction effectiveness of Naive Bayes (NB) and K-Nearest Neighbor (KNN) classifier treatments built from different feature selection and machine learning settings and trained and tested against textual corpora of 2300 Gang-Of-Four (GOF) design pattern documents. Analysis of the experiment's…
2010-09-01
[Only report front matter survives for this record: a list of figures (Figure 23, Flow Type and the reference empirical model; Figure 24, Baseline Trajectory; Figure 25, Flow Features Important…) and glossary entries (ACCTE, Advanced Ceramic Composites for Turbine Engines; AFRL, Air Force Research Laboratory; AoA, Angle of Attack; ASE).]
ERIC Educational Resources Information Center
Frenette, Micheline
When trying to change their predictive rule for sinking and floating phenomena, students have great difficulty understanding density and are insensitive to empirical counter-examples designed to challenge their own rule. The purpose of this study is to examine the process whereby students from sixth and seventh grades relinquish their…
Predicting Homework Time Management at the Secondary School Level: A Multilevel Analysis
ERIC Educational Resources Information Center
Xu, Jianzhong
2010-01-01
The purpose of this study is to test empirical models of variables posited to predict homework time management at the secondary school level. Student- and class-level predictors of homework time management were analyzed in a survey of 1895 students from 111 classes. Most of the variance in homework time management occurred at the student level,…
Heat Transfer in Adhesively Bonded Honeycomb Core Panels
NASA Technical Reports Server (NTRS)
Daryabeigi, Kamran
2001-01-01
The Swann and Pittman semi-empirical relationship has been used as a standard in the aerospace industry to predict the effective thermal conductivity of honeycomb core panels. Recent measurements of the effective thermal conductivity of an adhesively bonded titanium honeycomb core panel using three different techniques, two steady-state and one transient radiant step heating method, at four laboratories varied significantly from each other and from the Swann and Pittman predictions. Average differences between the measurements and the predictions varied between 17% and 61% in the temperature range of 300 to 500 K. In order to determine the correct values of the effective thermal conductivity and determine which set of the measurements or predictions was most accurate, the combined radiation and conduction heat transfer in the honeycomb core panel was modeled using a finite volume numerical formulation. The transient radiant step heating measurements provided the best agreement with the numerical results. It was found that a modification of the Swann and Pittman semi-empirical relationship which incorporated the facesheets and adhesive layers in the thermal model provided satisfactory results. Finally, a parametric study was conducted to investigate the influence of adhesive thickness and thermal conductivity on the overall heat transfer through the panel.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2016-04-01
Appropriate representation of residual errors in hydrological modelling is essential for accurate and reliable probabilistic streamflow predictions. In particular, residual errors of hydrological predictions are often heteroscedastic, with large errors associated with high runoff events. Although multiple approaches exist for representing this heteroscedasticity, few if any studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating a range of approaches for representing heteroscedasticity in residual errors. These approaches include the 'direct' weighted least squares approach and 'transformational' approaches, such as logarithmic, Box-Cox (with and without fitting the transformation parameter), logsinh and the inverse transformation. The study reports (1) theoretical comparison of heteroscedasticity approaches, (2) empirical evaluation of heteroscedasticity approaches using a range of multiple catchments / hydrological models / performance metrics and (3) interpretation of empirical results using theory to provide practical guidance on the selection of heteroscedasticity approaches. Importantly, for hydrological practitioners, the results will simplify the choice of approaches to represent heteroscedasticity. This will enhance their ability to provide hydrological probabilistic predictions with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality).
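A minimal sketch of one transformational approach: apply a Box-Cox transform before computing residuals and check whether the dependence of residual spread on flow disappears. The data below are synthetic, with multiplicative (heteroscedastic) errors:

    import numpy as np

    def boxcox(q, lam):
        """Box-Cox transform: lam = 1 is a shifted identity, lam -> 0 is the log."""
        return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

    rng = np.random.default_rng(2)
    sim = rng.gamma(2.0, 10.0, 5000)                      # synthetic simulated streamflow
    obs = np.clip(sim * (1 + 0.2 * rng.normal(size=sim.size)), 0.01, None)

    for lam in (1.0, 0.5, 0.2, 0.0):
        resid = boxcox(obs, lam) - boxcox(sim, lam)
        rho = np.corrcoef(np.abs(resid), sim)[0, 1]       # ~0 once variance is stabilized
        print(f"lambda = {lam:.1f}: corr(|residual|, flow) = {rho:+.2f}")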
Predicting speech intelligibility in noise for hearing-critical jobs
NASA Astrophysics Data System (ADS)
Soli, Sigfrid D.; Laroche, Chantal; Giguere, Christian
2003-10-01
Many jobs require auditory abilities such as speech communication, sound localization, and sound detection. An employee for whom these abilities are impaired may constitute a safety risk for himself or herself, for fellow workers, and possibly for the general public. A number of methods have been used to predict these abilities from diagnostic measures of hearing (e.g., the pure-tone audiogram); however, these methods have not proved to be sufficiently accurate for predicting performance in the noise environments where hearing-critical jobs are performed. We have taken an alternative and potentially more accurate approach. A direct measure of speech intelligibility in noise, the Hearing in Noise Test (HINT), is instead used to screen individuals. The screening criteria are validated by establishing the empirical relationship between the HINT score and the auditory abilities of the individual, as measured in laboratory recreations of real-world workplace noise environments. The psychometric properties of the HINT enable screening of individuals with an acceptable amount of error. In this presentation, we will describe the predictive model and report the results of field measurements and laboratory studies used to provide empirical validation of the model. [Work supported by Fisheries and Oceans Canada.]
Theoretical and Empirical Descriptions of Thermospheric Density
NASA Astrophysics Data System (ADS)
Solomon, S. C.; Qian, L.
2004-12-01
The longest-term and most accurate overall description of the density of the upper thermosphere is provided by analysis of change in the ephemeris of Earth-orbiting satellites. Empirical models of the thermosphere developed in part from these measurements can do a reasonable job of describing thermospheric properties on a climatological basis, but the promise of first-principles global general circulation models of the coupled thermosphere/ionosphere system is that a true high-resolution, predictive capability may ultimately be developed for thermospheric density. However, several issues are encountered when attempting to tune such models so that they accurately represent absolute densities as a function of altitude, and their changes on solar-rotational and solar-cycle time scales. Among these are the crucial ones of getting the heating rates (from both solar and auroral sources) right, getting the cooling rates right, and establishing the appropriate boundary conditions. However, there are several ancillary issues as well, such as the problem of registering a pressure-coordinate model onto an altitude scale, and dealing with possible departures from hydrostatic equilibrium in empirical models. Thus, tuning a theoretical model to match empirical climatology may be difficult, even in the absence of high temporal or spatial variation of the energy sources. We will discuss some of the challenges involved, and show comparisons of simulations using the NCAR Thermosphere-Ionosphere-Electrodynamics General Circulation Model (TIE-GCM) to empirical model estimates of neutral thermosphere density and temperature. We will also show some recent simulations using measured solar irradiance from the TIMED/SEE instrument as input to the TIE-GCM.
A global empirical system for probabilistic seasonal climate prediction
NASA Astrophysics Data System (ADS)
Eden, J. M.; van Oldenborgh, G. J.; Hawkins, E.; Suckling, E. B.
2015-12-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
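The regression core of such a system can be sketched in a few lines; the predictors (a smooth CO2-equivalent trend plus a stand-in ENSO index) and all data below are synthetic placeholders:

    import numpy as np

    rng = np.random.default_rng(3)
    years = np.arange(1961, 2014)
    co2eq = 330 + 1.8 * (years - 1961)                 # smooth trend proxy (illustrative)
    nino34 = rng.normal(0, 1, years.size)              # stand-in ENSO index

    # Synthetic seasonal-mean temperature for one grid cell
    temp = 0.02 * (co2eq - co2eq.mean()) + 0.3 * nino34 + 0.4 * rng.normal(size=years.size)

    X = np.column_stack([np.ones_like(co2eq), co2eq, nino34])
    beta, *_ = np.linalg.lstsq(X, temp, rcond=None)    # multiple linear regression

    fit = X @ beta
    resid_sd = np.std(temp - fit)                      # residual spread -> forecast pdf
    print(f"trend coef = {beta[1]:.3f}, ENSO coef = {beta[2]:.2f}, sigma = {resid_sd:.2f}")
    # A Gaussian with mean fit[-1] and sd resid_sd is the probabilistic hindcast for 2013.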
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; Hsi, W; Zhao, J
2016-06-15
Purpose: The Gaussian model for the lateral profiles in air is crucial for an accurate treatment planning system. The field size dependence of dose and the lateral beam profiles of scanning proton and carbon ion beams are due mainly to particles undergoing multiple Coulomb scattering in the beam line components and secondary particles produced by nuclear interactions in the target, both of which depend upon the energy and species of the beam. In this work, lateral profile shape parameters were fitted to measurements of the field size dependence of dose at the field center in air. Methods: Previous studies have employed empirical fits to measured profile data to significantly reduce the QA time required for measurements. Following this approach to derive the weights and sigmas of lateral profiles in air, empirical model formulations were simulated for three selected energies for both proton and carbon beams. Results: The 20%-80% lateral penumbras predicted with the error functions, by the double-Gaussian model for protons and the single-Gaussian model for carbon, agreed with the measurements within 1 mm. The standard deviation between the measured and fitted field size dependence of dose in air was at most 0.74% for protons with the double Gaussian and 0.57% for carbon with the single Gaussian. Conclusion: We have demonstrated that the double Gaussian model of lateral beam profiles is significantly better than the single Gaussian model for protons, while a single Gaussian model is sufficient for carbon. The empirical equation may be used to double-check the separately obtained model that is currently used by the planning system. The empirical model in air for the dose of spot scanning proton and carbon ion beams cannot be directly used for irregularly shaped patient fields, but it can provide reference values for clinical use and quality assurance.
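A sketch of the single- versus double-Gaussian comparison on a synthetic proton-like profile (narrow multiple-Coulomb-scattering core plus a low-weight nuclear halo); the amplitudes, sigmas, and halo weight are illustrative, not fitted beam data:

    import numpy as np
    from scipy.optimize import curve_fit

    def single_gauss(x, A, s):
        return A * np.exp(-x**2 / (2 * s**2))

    def double_gauss(x, A, s1, w, s2):
        """Primary Gaussian (Coulomb scattering core) plus a broad, low-weight
        Gaussian for the nuclear halo; w is the halo weight."""
        return A * ((1 - w) * np.exp(-x**2 / (2 * s1**2)) + w * np.exp(-x**2 / (2 * s2**2)))

    x = np.linspace(-30, 30, 241)                       # mm
    y = double_gauss(x, 1.0, 4.0, 0.05, 12.0) + 1e-3 * np.random.randn(x.size)

    p1, _ = curve_fit(single_gauss, x, y, p0=[1, 5])
    p2, _ = curve_fit(double_gauss, x, y, p0=[1, 5, 0.1, 15])
    for name, model, p in [("single", single_gauss, p1), ("double", double_gauss, p2)]:
        rms = np.sqrt(np.mean((model(x, *p) - y) ** 2))
        print(f"{name} Gaussian fit RMS = {rms:.4f}")   # the double Gaussian wins here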
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
Semi-empirical "leaky-bucket" model of laser-driven x-ray cavities
NASA Astrophysics Data System (ADS)
Moody, J. D.; Landen, O. L.; Divol, L.; LePape, S.; Michel, P.; Town, R. P. J.; Hall, G.; Widmann, K.; Moore, A.
2017-04-01
A semi-empirical analytical model is shown to approximately describe the energy balance in a laser-driven x-ray cavity, such as a hohlraum, for general laser pulse-shapes. Agreement between the model and measurements relies on two scalar parameters, one characterizes the efficiency of x-ray generation for a given laser power and the other represents a characteristic power-loss rate. These parameters, once obtained through estimation or optimization for a particular hohlraum design, can be used to predict either the x-ray flux or the coupled laser power time-history in terms of other quantities for similar hohlraum designs. The value of the model is that it can be used as an approximate "first-look" at hohlraum energy balance prior to a more detailed radiation hydrodynamic modeling.
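The model as described reduces to a single ordinary differential equation; the sketch below integrates dE/dt = eta*P_laser - E/tau for a two-step pulse, with eta and tau as the two scalar parameters. The pulse powers and parameter values are illustrative only:

    import numpy as np

    def leaky_bucket(P_laser, dt, eta, tau):
        """Leaky-bucket energy balance: dE/dt = eta*P_laser - E/tau.
        eta = x-ray conversion efficiency, tau = characteristic loss time;
        the stored energy E maps onto the hohlraum x-ray flux."""
        E = np.zeros(P_laser.size)
        for i in range(1, P_laser.size):
            E[i] = E[i - 1] + dt * (eta * P_laser[i - 1] - E[i - 1] / tau)
        return E

    t = np.arange(0, 10e-9, 1e-11)                   # 10 ns pulse, 10 ps steps
    P = np.where(t < 3e-9, 100e12, 300e12)           # two-step pulse shape (W, illustrative)
    E = leaky_bucket(P, 1e-11, eta=0.8, tau=1.0e-9)  # both scalars fitted to data in practice
    print(E[-1] / 1e3, "kJ stored at end of pulse")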
Then, Amy Y.; Hoenig, John M; Hall, Norman G.; Hewitt, David A.
2015-01-01
Many methods have been developed in the last 70 years to predict the natural mortality rate, M, of a stock based on empirical evidence from comparative life history studies. These indirect or empirical methods are used in most stock assessments to (i) obtain estimates of M in the absence of direct information, (ii) check on the reasonableness of a direct estimate of M, (iii) examine the range of plausible M estimates for the stock under consideration, and (iv) define prior distributions for Bayesian analyses. The two most cited empirical methods have appeared in the literature over 2500 times to date. Despite the importance of these methods, there is no consensus in the literature on how well these methods work in terms of prediction error or how their performance may be ranked. We evaluate estimators based on various combinations of maximum age (tmax), growth parameters, and water temperature by seeing how well they reproduce >200 independent, direct estimates of M. We use tenfold cross-validation to estimate the prediction error of the estimators and to rank their performance. With updated and carefully reviewed data, we conclude that a tmax-based estimator performs the best among all estimators evaluated. The tmax-based estimators in turn perform better than the Alverson-Carney method based on tmax and the von Bertalanffy K coefficient, Pauly's method based on growth parameters and water temperature, and methods based just on K. It is possible to combine two independent methods by computing a weighted mean, but the improvement over the tmax-based methods is slight. Based on cross-validation prediction error, model residual patterns, model parsimony, and biological considerations, we recommend the use of a tmax-based estimator (M = 4.899 tmax^-0.916, prediction error = 0.32) when possible, and a growth-based method (M = 4.118 K^0.73 L∞^-0.33, prediction error = 0.6, length in cm) otherwise.
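The two recommended estimators are simple enough to apply directly; the sketch below implements the formulas quoted in the abstract, with example inputs that are illustrative only:

    def M_from_tmax(tmax):
        """Longevity-based estimator recommended by the study: M = 4.899 * tmax**-0.916."""
        return 4.899 * tmax ** -0.916

    def M_from_growth(K, Linf):
        """Growth-based fallback: M = 4.118 * K**0.73 * Linf**-0.33 (Linf in cm)."""
        return 4.118 * K ** 0.73 * Linf ** -0.33

    print(M_from_tmax(20.0))           # e.g. a stock whose oldest observed fish is 20 y
    print(M_from_growth(0.2, 80.0))    # used when a reliable tmax is unavailable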
Empirical prediction of mechanical properties of flexible pavement through GPR
NASA Astrophysics Data System (ADS)
Bianchini Ciampoli, Luca; Benedetto, Andrea
2017-04-01
To date, it is well known that the frequency of accidental events recorded on a road is related to the deterioration rate of its pavement. In this sense, monitoring pavement health over a road network is a crucial task for road administrations in defining a priority scale for maintenance works and, accordingly, lowering the risk of accidents. Several studies suggest the possibility of employing ground-penetrating radar (GPR) to overcome the limits of traditional bearing tests, which, due to their low productivity and high costs, can give only a discrete knowledge of the strength of the pavement. This work presents a GPR-based empirical model for the prediction of the bearing capacity of a road pavement, expressed as Young's modulus. The model exploits GPR to extract information on the thickness of the base course and the clay content, by referring to the signal velocity and attenuation, respectively. To test the effectiveness of the model, experimental activities were carried out. In particular, multi-frequency GPR tests were performed along sections of rural roads with flexible pavement, for a total of 45 km. As ground truth, a light falling weight deflectometer (LFWD) and a Curviameter were employed. Both the electromagnetic and the mechanical datasets were properly processed, in order to reduce misinterpretations and to raise the statistical significance of the procedure. The calibration of the model parameters was then run on a subsection, equal to 8% of the total length, randomly selected within the surveyed track. Finally, as validation, the model was applied to the whole analysed dataset. As a result, the empirical model showed good effectiveness in predicting the mechanical response of the pavement, with a normalised root mean squared deviation equal to 0.27. Finally, by averaging the measured and predicted mechanical data every 50 m and sorting the results into strength classes, a qualitative approach useful for visual detection of low-resistance areas is also proposed. This study demonstrates the efficiency and reliability of GPR in the mechanical assessment of flexible pavements. This empirical approach can represent a useful tool for administrations and companies managing road assets, for non-destructive detection of areas affected by early-stage deterioration processes and for the definition of a priority-based scheduling of maintenance works. Acknowledgements: The Authors thank COST for funding the Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar."
De Vries, Rowen J; Marsh, Steven
2015-11-08
Internal lead shielding is utilized during superficial electron beam treatments of the head and neck, such as lip carcinoma. Methods for predicting backscattered dose include the use of empirical equations or performing physical measurements. The accuracy of these empirical equations required verification for the local electron beams. In this study, a Monte Carlo model of a Siemens Artiste linac was developed for 6, 9, 12, and 15 MeV electron beams using the EGSnrc MC package. The model was verified against physical measurements to an accuracy of better than 2% and 2 mm. Multiple MC simulations of lead interfaces at different depths, corresponding to mean electron energies in the range of 0.2-14 MeV at the interfaces, were performed to calculate electron backscatter values. The simulated electron backscatter was compared with current empirical equations to ascertain their accuracy. The major finding was that the current set of backscatter equations does not accurately predict electron backscatter, particularly in the lower energies region. A new equation was derived which enables estimation of electron backscatter factor at any depth upstream from the interface for the local treatment machines. The derived equation agreed to within 1.5% of the MC simulated electron backscatter at the lead interface and upstream positions. Verification of the equation was performed by comparing to measurements of the electron backscatter factor using Gafchromic EBT2 film. These results show a mean value of 0.997 ± 0.022 to 1σ of the predicted values of electron backscatter. The new empirical equation presented can accurately estimate electron backscatter factor from lead shielding in the range of 0.2 to 14 MeV for the local linacs. PMID:26699566
Toward a Predictive Model of Arctic Coastal Retreat in a Warming Climate, Beaufort Sea, Alaska
2011-09-30
level by waves and surge and tide. Melt rate is governed by an empirically based iceberg melting algorithm that explicitly includes the roles of wave… Thermal erosion of a permafrost coastline: Improving process-based models using time-lapse photography, Arctic, Antarctic, and Alpine Research 43(3): 474
Induced Innovation and Social Inequality: Evidence from Infant Medical Care
ERIC Educational Resources Information Center
Cutler, David M.; Meara, Ellen; Richards-Shubik, Seth
2012-01-01
We develop a model of induced innovation that applies to medical research. Our model yields three empirical predictions. First, initial death rates and subsequent research effort should be positively correlated. Second, research effort should be associated with more rapid mortality declines. Third, as a byproduct of targeting the most common…
Movement behavior explains genetic differentiation in American black bears
Samuel A Cushman; Jesse S. Lewis
2010-01-01
Individual-based landscape genetic analyses provide empirically based models of gene flow. It would be valuable to verify the predictions of these models using independent data of a different type. Analyses using different data sources that produce consistent results provide strong support for the generality of the findings. Mating and dispersal movements are the...
Constrained range expansion and climate change assessments
Yohay Carmel; Curtis H. Flather
2006-01-01
Modeling the future distribution of keystone species has proved to be an important approach to assessing the potential ecological consequences of climate change (Loehle and LeBlanc 1996; Hansen et al. 2001). Predictions of range shifts are typically based on empirical models derived from simple correlative relationships between climatic characteristics of occupied and...
Evolving Approaches and Technologies to Enhance the Role of Ecological Modeling in Decision Making
Eric Gustafson; John Nestler; Louis Gross; Keith M. Reynolds; Daniel Yaussy; Thomas P. Maxwell; Virginia H. Dale
2002-01-01
Understanding the effects of management activities is difficult for natural resource managers and decision makers because ecological systems are highly complex and their behavior is difficult to predict. Furthermore, the empirical studies necessary to illuminate all management questions quickly become logistically complicated and cost prohibitive. Ecological models...
Integrating the Demonstration Orientation and Standards-Based Models of Achievement Goal Theory
ERIC Educational Resources Information Center
Wynne, Heather Marie
2014-01-01
Achievement goal theory and thus, the empirical measures stemming from the research, are currently divided on two conceptual approaches, namely the reason versus aims-based models of achievement goals. The factor structure and predictive utility of goal constructs from the Patterns of Adaptive Learning Strategies (PALS) and the latest two versions…
Sample and population exponents of generalized Taylor's law.
Giometto, Andrea; Formentin, Marco; Rinaldo, Andrea; Cohen, Joel E; Maritan, Amos
2015-06-23
Taylor's law (TL) states that the variance V of a nonnegative random variable is a power function of its mean M; i.e., V = aM^b. TL has been verified extensively in ecology, where it applies to population abundance, as well as in physics and other natural sciences. Its ubiquitous empirical verification suggests a context-independent mechanism. Sample exponents b measured empirically via the scaling of sample mean and variance typically cluster around the value b = 2. Some theoretical models of population growth, however, predict a broad range of values for the population exponent b pertaining to the mean and variance of population density, depending on details of the growth process. Is the widely reported sample exponent b ≃ 2 the result of ecological processes or could it be a statistical artifact? Here, we apply large deviations theory and finite-sample arguments to show exactly that in a broad class of growth models the sample exponent is b ≃ 2 regardless of the underlying population exponent. We derive a generalized TL in terms of sample and population exponents b_jk for the scaling of the kth vs. the jth cumulants. The sample exponent b_jk depends predictably on the number of samples, and for finite samples we obtain b_jk ≃ k/j asymptotically in time, a prediction that we verify in two empirical examples. Thus, the sample exponent b ≃ 2 may indeed be a statistical artifact and not dependent on population dynamics under conditions that we specify exactly. Given the broad class of models investigated, our results apply to many fields where TL is used although inadequately understood.
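The emergence of a sample exponent near 2 is easy to reproduce numerically. Below is a minimal sketch (not the authors' code; all parameter values are illustrative) that simulates a multiplicative, Lewontin-Cohen-type growth model and estimates the sample exponent by log-log regression of the sample variance against the sample mean across replicates:

```python
import numpy as np

rng = np.random.default_rng(42)
n_reps, n_steps = 100, 200  # replicate populations, time steps (illustrative)

# Multiplicative growth N(t+1) = A_t * N(t) with i.i.d. lognormal factors,
# so log N performs a random walk with drift.
log_n = np.cumsum(rng.normal(loc=0.05, scale=0.7, size=(n_reps, n_steps)), axis=1)
n = np.exp(log_n)

# Sample mean and variance across the replicates at each time step.
m = n.mean(axis=0)
v = n.var(axis=0, ddof=1)

# Sample exponent b from the scaling V = a * M**b, fitted over later times,
# where sample moments are dominated by the largest observations.
b, log_a = np.polyfit(np.log(m[50:]), np.log(v[50:]), 1)
print(f"sample exponent b ≈ {b:.2f}")  # tends toward 2, per the paper's result
```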
Salience-Based Selection: Attentional Capture by Distractors Less Salient Than the Target
Goschy, Harriet; Müller, Hermann Joseph
2013-01-01
Current accounts of attentional capture predict the most salient stimulus to be invariably selected first. However, existing salience and visual search models assume noise in the map computation or selection process. Consequently, they predict the first selection to be stochastically dependent on salience, implying that attention could even be captured first by the second most salient (instead of the most salient) stimulus in the field. Yet, capture by less salient distractors has not been reported, and salience-based selection accounts claim that the distractor has to be more salient in order to capture attention. We tested this prediction using empirical and modeling approaches to the visual search distractor paradigm. For the empirical part, we manipulated the salience of target and distractor parametrically and measured reaction time interference when a distractor was present compared with absent. Reaction time interference was strongly correlated with distractor salience relative to the target. Moreover, even distractors less salient than the target captured attention, as measured by reaction time interference and oculomotor capture. In the modeling part, we simulated first selection in the distractor paradigm using behavioral measures of salience and considering the time course of selection including noise. We were able to replicate the result pattern we obtained in the empirical part. We conclude that each salience value follows a specific selection time distribution and attentional capture occurs when the selection time distributions of target and distractor overlap. Hence, selection is stochastic in nature and attentional capture occurs with a certain probability depending on relative salience. PMID:23382820
Prediction of episodic acidification in North-eastern USA: An empirical/mechanistic approach
Davies, T.D.; Tranter, M.; Wigington, P.J.; Eshleman, K.N.; Peters, N.E.; Van Sickle, J.; DeWalle, David R.; Murdoch, Peter S.
1999-01-01
Observations from the US Environmental Protection Agency's Episodic Response Project (ERP) in the North-eastern United States are used to develop an empirical/mechanistic scheme for prediction of the minimum values of acid neutralizing capacity (ANC) during episodes. An acidification episode is defined as a hydrological event during which ANC decreases. The pre-episode ANC is used to index the antecedent condition, and the stream flow increase reflects how much the relative contributions of sources of waters change during the episode. As much as 92% of the total variation in the minimum ANC in individual catchments can be explained (with levels of explanation >70% for nine of the 13 streams) by a multiple linear regression model that includes pre-episode ANC and change in discharge as independent variables. The predictive scheme is demonstrated to be regionally robust, with the regional variance explained ranging from 77 to 83%. The scheme is not successful for each ERP stream, and reasons are suggested for the individual failures. The potential for applying the predictive scheme to other watersheds is demonstrated by testing the model with data from the Panola Mountain Research Watershed in the South-eastern United States, where the variance explained by the model was 74%. The model can also be utilized to assess 'chemically new' and 'chemically old' water sources during acidification episodes.
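A minimal sketch of the regression scheme described above, with hypothetical numbers standing in for the ERP data: the minimum episodic ANC is modeled as a linear function of pre-episode ANC and the change in discharge.

```python
import numpy as np

# Hypothetical episode data (units illustrative, not ERP values).
anc_pre = np.array([120.0, 80.0, 45.0, 200.0, 60.0])  # pre-episode ANC
dq = np.array([1.5, 3.2, 4.0, 0.8, 2.5])              # change in discharge
anc_min = np.array([95.0, 40.0, 5.0, 180.0, 25.0])    # observed minimum ANC

# Multiple linear regression: ANC_min = b0 + b1*ANC_pre + b2*dQ
X = np.column_stack([np.ones_like(anc_pre), anc_pre, dq])
coef, *_ = np.linalg.lstsq(X, anc_min, rcond=None)

pred = X @ coef
r2 = 1 - np.sum((anc_min - pred) ** 2) / np.sum((anc_min - anc_min.mean()) ** 2)
print(f"coefficients: {coef}, R^2 = {r2:.2f}")
```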
NASA Astrophysics Data System (ADS)
Song, Lanlan
2017-04-01
Nitrous oxide is a much more potent greenhouse gas than carbon dioxide. However, the estimation of N2O flux is usually clouded with uncertainty, mainly due to high spatial and temporal variation. This also hampers the development of general mechanistic models for N2O emission, as most previously developed models were empirical or exhibited low predictability and relied on numerous assumptions. In this study, we tested General Regression Neural Networks (GRNN) as an alternative to classic empirical models for simulating N2O emission in riparian zones of reservoirs. GRNN and nonlinear regression (NLR) were applied to estimate the N2O flux from one year of observations in riparian zones of the Three Gorges Reservoir. NLR resulted in lower prediction power and higher residuals compared with GRNN. Although the nonlinear regression model estimated similar average values of N2O, it could not capture the fluctuation patterns accurately. In contrast, the GRNN model achieved fairly high predictability, with an R² of 0.59 for model validation, 0.77 for model calibration (training), and a low root mean square error (RMSE), indicating a high capacity to simulate the dynamics of N2O flux. A sensitivity analysis of the GRNN explained the nonlinear relationships between the input variables and N2O flux. Our results suggest that the GRNN developed in this study performs better in simulating variations in N2O flux than nonlinear regression.
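A GRNN is essentially Nadaraya-Watson kernel regression: each prediction is a distance-weighted average of the training targets. A minimal sketch of a generic GRNN (not the authors' implementation; data and bandwidth are illustrative):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    """General regression neural network (Nadaraya-Watson form):
    prediction = sum_i w_i * y_i / sum_i w_i, with Gaussian weights
    w_i = exp(-||x - x_i||^2 / (2 * sigma**2))."""
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Toy usage: inputs could be soil temperature, moisture, etc.; target is N2O flux.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 3))
y = np.sin(X[:, 0] * 6) + 0.1 * rng.normal(size=50)
print(grnn_predict(X, y, X[:5]))
```

The single bandwidth parameter sigma is what makes GRNN attractive for sparse environmental data sets like the one described above.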
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mercer, D.E.
The objectives are threefold: (1) to perform an analytical survey of household production theory as it relates to natural-resource problems in less-developed countries, (2) to develop a household production model of fuelwood decision making, (3) to derive a theoretical framework for travel-cost demand studies of international nature tourism. The model of household fuelwood decision making provides a rich array of implications and predictions for empirical analysis. For example, it is shown that fuelwood and modern fuels may be either substitutes or complements depending on the interaction of the gross-substitution and income-expansion effects. Therefore, empirical analysis should precede adoption of any inter-fuel substitution policies such as subsidizing kerosene. The fuelwood model also provides a framework for analyzing the conditions and factors determining entry and exit by households into the wood-burning subpopulation, a key for designing optimal household energy policies in the Third World. The international nature tourism travel cost model predicts that the demand for nature tourism is an aggregate of the demand for the individual activities undertaken during the trip.
A computationally efficient modelling of laminar separation bubbles
NASA Technical Reports Server (NTRS)
Dini, Paolo; Maughmer, Mark D.
1990-01-01
In predicting the aerodynamic characteristics of airfoils operating at low Reynolds numbers, it is often important to account for the effects of laminar (transitional) separation bubbles. Previous approaches to the modelling of this viscous phenomenon range from fast but sometimes unreliable empirical correlations for the length of the bubble and the associated increase in momentum thickness, to more accurate but significantly slower displacement-thickness iteration methods employing inverse boundary-layer formulations in the separated regions. Since the penalty in computational time associated with the more general methods is unacceptable for airfoil design applications, use of an accurate yet computationally efficient model is highly desirable. To this end, a semi-empirical bubble model was developed and incorporated into the Eppler and Somers airfoil design and analysis program. The generality and the efficiency were achieved by successfully approximating the local viscous/inviscid interaction, the transition location, and the turbulent reattachment process within the framework of an integral boundary-layer method. Comparisons of the predicted aerodynamic characteristics with experimental measurements for several airfoils show excellent and consistent agreement for Reynolds numbers from 2,000,000 down to 100,000.
Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.
Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H
2008-05-01
The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates, both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^{1/4} scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^{3/4} scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
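For reference, the canonical growth equation of the West et al. OGM, which the abstract builds on, is

$$\frac{dm}{dt} = a\,m^{3/4}\left[1 - \left(\frac{m}{M}\right)^{1/4}\right],$$

where m is body mass, M is asymptotic adult mass, and a is a parameter set by the energetics of biosynthesis. Writing r = (m/M)^{1/4} collapses growth onto a universal curve, which is the origin of the sigmoidal trajectories and the M^{1/4} characteristic timescales mentioned above.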
2016-01-01
Objectives Recognizing the inherent variability of drug-related behaviors, this study develops an empirically-driven and holistic model of drug-related behavior during adolescence using factor analysis to simultaneously model multiple drug behaviors. Methods The factor analytic model uncovers latent dimensions of drug-related behaviors, rather than patterns of individuals. These latent dimensions are treated as empirical typologies which are then used to predict an individual’s number of arrests accrued at multiple phases of the life course. The data are robust enough to simultaneously capture drug behavior measures typically considered in isolation in the literature, and to allow for behavior to change and evolve over the period of adolescence. Results Results show that factor analysis is capable of developing highly descriptive patterns of drug offending, and that these patterns have great utility in predicting arrests. Results further demonstrate that while drug behavior patterns are predictive of arrests at the end of adolescence for both males and females, the impacts on arrests are longer lasting for females. Conclusions The various facets of drug behaviors have been a long-time concern of criminological research. However, the ability to model multiple behaviors simultaneously is often constrained by data that do not measure the constructs fully. Factor analysis is shown to be a useful technique for modeling adolescent drug involvement patterns in a way that accounts for the multitude and variability of possible behaviors, and in predicting future negative life outcomes, such as arrests. PMID:28435183
NASA Astrophysics Data System (ADS)
Zhang, Wei
2011-07-01
The longitudinal dispersion coefficient, DL, is a fundamental parameter of longitudinal solute transport models: the advection-dispersion (AD) model and various dead-zone models. Since DL cannot be measured directly, and since its calibration using tracer test data is quite expensive and such data are not always available, researchers have developed various methods, theoretical or empirical, for estimating DL from more easily available cross-sectional hydraulic measurements (i.e., the transverse velocity profile, etc.). However, for known and unknown reasons, DL cannot be satisfactorily predicted using these theoretical/empirical formulae: the theoretical methods show very large prediction errors, while the empirical formulae lack generality. Here, numerical experiments on longitudinal solute transport in hypothetical streams are presented, using Mike21, a software package that implements rigorous two-dimensional hydrodynamic and solute transport equations. An analysis of the evolution of simulated solute clouds indicates that the two fundamental assumptions in Fischer's longitudinal transport analysis may not be reasonable. The transverse solute concentration distribution, and hence the longitudinal transport, appears to be controlled by a dimensionless number ε formed from Q, the average volumetric flowrate; Dt, a cross-sectional average transverse dispersion coefficient; and W, the channel flow width. A simple empirical relationship based on ε may be established. Analysis and a revision of Fischer's theoretical formula suggest that ε influences the efficiency of transverse mixing and hence has a restraining effect on longitudinal spreading. The findings presented here improve and expand our understanding of longitudinal solute transport in open channel flow.
NASA Astrophysics Data System (ADS)
Pietrella, M.
2012-02-01
A short-term ionospheric forecasting empirical regional model (IFERM) has been developed to predict the state of the critical frequency of the F2 layer (foF2) under different geomagnetic conditions. IFERM is based on 13 short-term ionospheric forecasting empirical local models (IFELM) developed to predict foF2 at 13 ionospheric observatories scattered around the European area. The forecasting procedures were developed by taking into account hourly measurements of foF2, hourly quiet-time reference values of foF2 (foF2QT), and the hourly time-weighted accumulation series derived from the geomagnetic planetary index ap (ap(τ)), for each observatory. Under the assumption that the ionospheric disturbance index ln(foF2/foF2QT) is correlated with the integrated geomagnetic disturbance index ap(τ), a set of statistically significant regression coefficients was established for each observatory, over 12 months, over 24 h, and under 3 different ranges of geomagnetic activity. These data were then used as input to compute short-term ionospheric forecasts of foF2 at the 13 local stations under consideration. The empirical storm-time ionospheric correction model (STORM) was used to predict foF2 in two different ways: scaling the hourly median prediction provided by IRI (STORM_foF2MED,IRI model), and scaling the foF2QT values (STORM_foF2QT model) from each local station. The comparison between the performance of STORM_foF2MED,IRI, STORM_foF2QT, IFELM, and the foF2QT values was made on the basis of root mean square deviation (r.m.s.) for a large number of periods characterized by moderate, disturbed, and very disturbed geomagnetic activity. The results showed that the 13 IFELM perform much better than STORM_foF2MED,IRI and STORM_foF2QT, especially in the eastern part of the European area during the summer months (May, June, July, and August) and equinoctial months (March, April, September, and October) under disturbed and very disturbed geomagnetic conditions, respectively. The performance of IFELM is also very good in the western and central parts of Europe during the summer months under disturbed geomagnetic conditions. STORM_foF2MED,IRI performs particularly well in central Europe during the equinoctial months under moderate geomagnetic conditions and during the summer months under very disturbed geomagnetic conditions. The forecasting maps generated by IFERM on the basis of the results provided by the 13 IFELM show very large areas located at middle-high and high latitudes where the foF2 predictions quite faithfully match the foF2 measurements; consequently, IFERM can be used for generating short-term forecasting maps of foF2 (up to 3 h ahead) over the European area.
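The time-weighted accumulation ap(τ) referred to above is commonly computed as an exponentially weighted history of the 3-hourly ap index; the sketch below assumes that conventional (Wrenn-style) form, with τ as the persistence weight, and is not taken from the paper itself.

```python
import numpy as np

def ap_tau(ap_series, tau=0.8):
    """Time-weighted accumulation of the ap index (assumed Wrenn-style form):
    ap_tau(t) = (1 - tau) * (ap_t + tau*ap_{t-1} + tau^2*ap_{t-2} + ...),
    computed recursively as x_t = (1 - tau)*ap_t + tau*x_{t-1},
    seeded here with the first value of the series."""
    out = np.empty(len(ap_series))
    x = ap_series[0]
    for t, ap in enumerate(ap_series):
        x = (1.0 - tau) * ap + tau * x
        out[t] = x
    return out

# Example with a synthetic 3-hourly ap record spanning a small storm.
ap = np.array([4, 7, 15, 48, 80, 56, 27, 12, 7, 4], dtype=float)
print(ap_tau(ap, tau=0.8))
```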
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
Culture and Social Relationship as Factors of Affecting Communicative Non-verbal Behaviors
NASA Astrophysics Data System (ADS)
Akhter Lipi, Afia; Nakano, Yukiko; Rehm, Mathias
The goal of this paper is to build a bridge between social relationship and cultural variation in predicting conversants' non-verbal behaviors. This idea serves as the basis of a parameter-based socio-cultural model, which determines non-verbal expressive parameters that specify the shapes of an agent's non-verbal behaviors in human-agent interaction (HAI). As a first step, a comparative corpus analysis is done for two cultures in two specific social relationships. Next, by integrating the cultural and social factors with the empirical data from the corpus analysis, we establish a model that predicts posture. The predictions from our model successfully demonstrate that both cultural background and social relationship moderate communicative non-verbal behaviors.
Lance A. Vickers; David R. Larsen; Daniel C. Dey; Benjamin O. Knapp; John M. Kabrick
2017-01-01
Predicting the effects of silvicultural choices on regeneration has been difficult with the tools available to foresters. In an effort to improve this, we developed a collection of reproduction establishment models based on stand development hypotheses and parameterized with empirical data for several species in the Missouri Ozarks. These models estimate third-year...
ERIC Educational Resources Information Center
Pavel, D. Michael
This paper on postsecondary outcomes illustrates a technique to determine whether or not mainstream models are appropriate for predicting educational outcomes of American Indians (AIs) and Alaskan Native (ANs). It introduces a prominent statistical procedure to assess models with empirical data and shows how the results can have implications for…
Using landscape analysis to assess and model tsunami damage in Aceh province, Sumatra
Louis R. Iverson; Anantha Prasad
2007-01-01
The nearly unprecedented loss of life resulting from the earthquake and tsunami of December 26, 2004, was greatest in the province of Aceh, Sumatra (Indonesia). We evaluated tsunami damage and built empirical vulnerability models of damage/no damage based on elevation, distance from shore, vegetation, and exposure. We found that highly predictive models are possible and...
Canadian Field Soils IV: Modeling Thermal Conductivity at Dryness and Saturation
NASA Astrophysics Data System (ADS)
Tarnawski, V. R.; McCombie, M. L.; Leong, W. H.; Coppa, P.; Corasaniti, S.; Bovesecchi, G.
2018-03-01
The thermal conductivity data of 40 Canadian soils at dryness (λ_dry) and at full saturation (λ_sat) were used to verify 13 predictive models, i.e., four mechanistic, four semi-empirical, and five empirical equations. The performance of each model, for λ_dry and λ_sat, was evaluated using a standard deviation (SD) formula. Among the mechanistic models applied to dry soils, the closest λ_dry estimates were obtained by MaxRTCM (SD = ±0.018 W·m⁻¹·K⁻¹), followed by de Vries and a series-parallel (S-‖) model. Among the semi-empirical equations (deVries-ave, Advanced Geometric Mean Model (A-GMM), Chaudhary and Bhandari (C-B), and Chen's equation), the closest λ_dry estimates were obtained by the C-B model (±0.022 W·m⁻¹·K⁻¹). Among the empirical equations, the top λ_dry estimates were given by CDry-40 (±0.021 W·m⁻¹·K⁻¹ and ±0.018 W·m⁻¹·K⁻¹ for the 18 coarse and 22 fine soils, respectively). In addition, the λ_dry and λ_sat models were applied to the λ_sat database of 21 other soils. Of all the models tested, only the MaxRTCM and CDry-40 models provided the closest λ_dry estimates for the 40 Canadian soils as well as the 21 other soils. The best λ_sat estimates for the 40 Canadian soils and the 21 other soils were given by the A-GMM and the S-‖ model.
Nevers, M.B.; Whitman, R.L.
2008-01-01
To understand the fate and movement of Escherichia coli in beach water, numerous modeling studies have been undertaken, including mechanistic predictions of currents and plumes and empirical modeling based on hydrometeorological variables. Most approaches are limited in scope by nearshore currents or physical obstacles and data limitations; few examine the issue from a larger spatial scale. Given the similarities between variables typically included in these models, we attempted to take a broader view of E. coli fluctuations by simultaneously examining twelve beaches along 35 km of Indiana's Lake Michigan coastline that includes five point-source outfalls. The beaches had similar E. coli fluctuations, and a best-fit empirical model included two variables: wave height and an interactive term comprised of wind direction and creek turbidity. Individual beach R² ranged from 0.32 to 0.50. Data training-set results were comparable to validation results (R² = 0.48). The amount of variation explained by the model was similar to previous reports for individual beaches. By extending the modeling approach to include more coastline distance, broader-scale spatial and temporal changes in bacteria concentrations and the influencing factors can be characterized. © 2008 American Chemical Society.
Modeling and prediction of ionospheric scintillation
NASA Technical Reports Server (NTRS)
Fremouw, E. J.
1974-01-01
Scintillation modeling performed thus far is based on the theory of diffraction by a weakly modulating phase screen developed by Briggs and Parkin (1963). Shortcomings of the existing empirical model for the scintillation index are discussed together with questions of channel modeling, giving attention to the needs of the communication engineers. It is pointed out that much improved scintillation index models may be available in a matter of a year or so.
Souza, Erica Silva; Zaramello, Laize; Kuhnen, Carlos Alberto; Junkes, Berenice da Silva; Yunes, Rosendo Augusto; Heinzen, Vilma Edite Fonseca
2011-01-01
A new possibility for estimating the octanol/water coefficient (log P) was investigated using only one descriptor, the semi-empirical electrotopological index (ISET). The predictability of four octanol/water partition coefficient (log P) calculation models was compared using a set of 131 aliphatic organic compounds from five different classes. Log P values were calculated employing atomic-contribution methods, as in the Ghose/Crippen approach and its later refinement, AlogP; using fragmental methods through the ClogP method; and employing an approach considering the whole molecule using topological indices with the MlogP method. The efficiency and the applicability of the ISET in terms of calculating log P were demonstrated through good statistical quality (r > 0.99; s < 0.18), high internal stability and good predictive ability for an external group of compounds in the same order as the widely used models based on the fragmental method, ClogP, and the atomic contribution method, AlogP, which are among the most used methods of predicting log P. PMID:22072945
NASA Astrophysics Data System (ADS)
Almatroushi, H. R.; Lootah, F. H.; Deighan, J.; Fillingim, M. O.; Jain, S.; Bougher, S. W.; England, S.; Schneider, N. M.
2017-12-01
This research focuses on developing empirical and theoretical models for the OI 135.6 nm and CO fourth positive group (4PG) band system FUV dayglow emissions in the Martian thermosphere, as predicted to be seen by the Emirates Mars Ultraviolet Spectrometer (EMUS), one of the three scientific instruments aboard the Emirates Mars Mission (EMM) to be launched in 2020. These models will aid in simulating accurate disk radiances, which will be used as input to an EMUS instrument simulator. The developed zonally averaged empirical models are based on FUV data from the IUVS instrument onboard the MAVEN mission, while the theoretical models are based on a basic Chapman profile. The models calculate the brightness (B) of those emissions taking into consideration observation geometry parameters such as emission angle (EA), solar zenith angle (SZA), and planet distance from the Sun (Ds). Specifically, the empirical models take the general form B_n = A·cos^n(SZA)/cos^m(EA), where B_n is the normalized brightness of an emission feature and A, n, and m are positive constants. This form implies that the brightness correlates positively with EA and negatively with SZA. Both models are compared in this research by examining full-Mars and half-Mars disk images generated using geometry code specially developed for the EMUS instrument. Sensitivity analyses have also been conducted for the theoretical modeling to observe the contributions of electron impact on atomic oxygen and CO2 to the brightness of OI 135.6 nm, in addition to the effect of electron temperature on the CO2+ dissociative recombination contribution to the CO 4PG band system.
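A minimal sketch of the stated empirical form B_n = A·cos^n(SZA)/cos^m(EA); the values of A, n, and m below are placeholders, since the fitted constants come from the IUVS data rather than from this abstract.

```python
import numpy as np

def dayglow_brightness(sza_deg, ea_deg, A=1.0, n=1.0, m=1.0):
    """Normalized dayglow brightness B_n = A * cos(SZA)**n / cos(EA)**m.
    Brightness falls as solar zenith angle (SZA) grows and rises with
    emission angle (EA), matching the correlations noted above."""
    sza = np.radians(sza_deg)
    ea = np.radians(ea_deg)
    return A * np.cos(sza) ** n / np.cos(ea) ** m

# Example: a point at SZA = 30 deg viewed at EA = 60 deg.
print(dayglow_brightness(sza_deg=30.0, ea_deg=60.0))
```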
Modeling for Battery Prognostics
NASA Technical Reports Server (NTRS)
Kulkarni, Chetan S.; Goebel, Kai; Khasin, Michael; Hogge, Edward; Quach, Patrick
2017-01-01
For any battery-powered vehicle (be it an unmanned aerial vehicle, a small passenger aircraft, or an asset in exoplanetary operations) to operate at maximum efficiency and reliability, it is critical to monitor battery health and performance and to predict end of discharge (EOD) and end of useful life (EOL). To fulfil these needs, it is important to capture the battery's inherent characteristics as well as operational knowledge in the form of models that can be used by monitoring, diagnostic, and prognostic algorithms. Several battery modeling methodologies have been developed in the last few years as the understanding of the underlying electrochemical mechanisms has advanced. The models can generally be classified as empirical models, electrochemical engineering models, multi-physics models, and molecular/atomistic models. Empirical models are based on fitting certain functions to past experimental data, without making use of any physicochemical principles; electrical circuit equivalent models are an example. Electrochemical engineering models are typically continuum models that include electrochemical kinetics and transport phenomena. Each model has its advantages and disadvantages. The former type of model has the advantage of being computationally efficient, but has limited accuracy and robustness due to the approximations used in its development and, as a result of those approximations, cannot represent aging well. The latter type of model has the advantage of being very accurate, but is often computationally inefficient, having to solve complex sets of partial differential equations, and is thus not well suited for online prognostic applications. In addition, both multi-physics and atomistic models are computationally expensive and hence even less suited to online application. An electrochemistry-based model of Li-ion batteries has been developed that captures crucial electrochemical processes, captures effects of aging, is computationally efficient, and is of suitable accuracy for reliable EOD prediction in a variety of operational profiles. The model can be considered an electrochemical engineering model, but unlike most such models found in the literature, certain approximations are made that allow it to retain computational efficiency for online implementation. Although the focus here is on Li-ion batteries, the model is quite general and can be applied to different chemistries through a change of model parameter values. Progress on model development, including model validation results and EOD prediction results, is presented.
Steenbeek, Henderien; van der Aalsvoort, Diny; van Geert, Paul
2014-07-01
This study was focused on the role of gender-related differences in collaborative play, by examining properties of play as a complex system, and by using micro-genetic analysis techniques. A complex dynamic systems model of dyadic play was used to make predictions with regard to duration and number of contact-episodes during play of same-sex dyads, both on the micro- (i.e., per individual session), meso- (i.e., in smoothed data), and macro time scale (i.e., the change over six consecutive play sessions). The empirical data came from a study that examined the collaborative play skills of children who experienced six twenty minute play sessions within a three week period of time. Monte Carlo permutation analyses were used to compare model predictions and empirical data. The findings point to strongly asymmetric distributions in the duration and number of contact episodes in all dyads over the six sessions, as a direct consequence of the underlying dynamics of the play system. The model prediction that girls-dyads would show longer contact episodes than boys-dyads was confirmed, but the prediction regarding the difference in number of peaks was not confirmed. In addition, the majority of the model predictions regarding changes over the course of six sessions were consistent with the data. That is, the average duration and the maximum duration of contact-episodes increases both in boys-dyads and girls-dyads, but differences occur in the strength of the increase. Contrary to expectation, the number of contact-episodes decreases both in boys-dyads and in girls-dyads.
Impact of tidal density variability on orbital and reentry predictions
NASA Astrophysics Data System (ADS)
Leonard, J. M.; Forbes, J. M.; Born, G. H.
2012-12-01
Since the first satellites entered Earth orbit in the late 1950's and early 1960's, the influences of solar and geomagnetic variability on the satellite drag environment have been studied, and parameterized in empirical density models with increasing sophistication. However, only within the past 5 years has the realization emerged that "troposphere weather" contributes significantly to the "space weather" of the thermosphere, especially during solar minimum conditions. Much of the attendant variability is attributable to upward-propagating solar tides excited by latent heating due to deep tropical convection, and solar radiation absorption primarily by water vapor and ozone in the stratosphere and mesosphere, respectively. We know that this tidal spectrum significantly modifies the orbital (>200 km) and reentry (60-150 km) drag environments, and that these tidal components induce longitude variability not yet emulated in empirical density models. Yet, current requirements for improvements in orbital prediction make clear that further refinements to density models are needed. In this paper, the operational consequences of longitude-dependent tides are quantitatively assessed through a series of orbital and reentry predictions. We find that in-track prediction differences incurred by tidal effects are typically of order 200 ± 100 m for satellites in 400-km circular orbits and 15 ± 10 km for satellites in 200-km circular orbits for a 24-hour prediction. For an initial 200-km circular orbit, surface impact differences of order 15° ± 15° latitude are incurred. For operational problems with similar accuracy needs, a density model that includes a climatological representation of longitude-dependent tides should significantly reduce errors due to this source.
Assessment of Current Jet Noise Prediction Capabilities
NASA Technical Reports Server (NTRS)
Hunter, Craig A.; Bridges, James E.; Khavaran, Abbas
2008-01-01
An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes, where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined with the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, the experimental datasets used were substantially justified. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it outside experimental uncertainty at cooler, lower-speed conditions. Jet3D did not predict changes in directivity at the downstream angles. The statistical code JeNo v1 was within experimental uncertainty in predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. The shortcomings addressed here give direction for future work relevant to the statistical prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.
Predicting Low Accrual in the National Cancer Institute’s Cooperative Group Clinical Trials
Bennette, Caroline S.; Ramsey, Scott D.; McDermott, Cara L.; Carlson, Josh J.; Basu, Anirban; Veenstra, David L.
2016-01-01
Background: The extent to which trial-level factors differentially influence accrual to trials has not been comprehensively studied. Our objective was to evaluate the empirical relationship and predictive properties of putative risk factors for low accrual in the National Cancer Institute's (NCI's) Cooperative Group Program, now the National Clinical Trials Network (NCTN). Methods: Data from 787 phase II/III adult NCTN-sponsored trials launched between 2000 and 2011 were used to develop a logistic regression model to predict low accrual, defined as trials that closed with or were accruing at less than 50% of target; 46 trials opened between 2012 and 2013 were used for prospective validation. Candidate predictors were identified from a literature review and expert interviews; final predictors were selected using stepwise regression. Model performance was evaluated by calibration and discrimination via the area under the curve (AUC). All statistical tests were two-sided. Results: Eighteen percent (n = 145) of NCTN-sponsored trials closed with low accrual or were accruing at less than 50% of target three years or more after initiation. A multivariable model of twelve trial-level risk factors had good calibration and discrimination for predicting trials with low accrual (trials launched 2000–2011: AUC = 0.739, 95% confidence interval [CI] = 0.696 to 0.783; 2012–2013: AUC = 0.732, 95% CI = 0.547 to 0.917). Results were robust to different definitions of low accrual and predictor selection strategies. Conclusions: We identified multiple characteristics of NCTN-sponsored trials associated with low accrual, several of which have not been previously empirically described, and developed a prediction model that can provide a useful estimate of accrual risk based on these factors. Future work should assess the role of such prediction tools in trial design and prioritization decisions. PMID:26714555
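A schematic of the modeling and validation approach with synthetic data: logistic regression on trial-level risk factors, evaluated by AUC. The sketch uses scikit-learn; the features, labels, and split are simulated stand-ins, not NCTN data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_trials, n_factors = 787, 12          # sized after the study; data simulated
X = rng.normal(size=(n_trials, n_factors))
logit = X @ rng.normal(size=n_factors) - 1.5
y = rng.uniform(size=n_trials) < 1 / (1 + np.exp(-logit))  # low-accrual labels

# Fit on a "development" subset, validate prospectively on the remainder.
model = LogisticRegression(max_iter=1000).fit(X[:700], y[:700])
auc = roc_auc_score(y[700:], model.predict_proba(X[700:])[:, 1])
print(f"validation AUC = {auc:.3f}")
```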
Empirical prediction of the onset dates of South China Sea summer monsoon
NASA Astrophysics Data System (ADS)
Zhu, Zhiwei; Li, Tim
2017-03-01
The onset of South China Sea summer monsoon (SCSSM) signifies the commencement of the wet season over East Asia. Predicting the SCSSM onset date is of significant importance. In this study, we establish two different statistical models, namely the physical-empirical model (PEM) and the spatial-temporal projection model (STPM) to predict the SCSSM onset. The PEM is constructed from the seasonal prediction perspective. Observational diagnoses reveal that the early onset of the SCSSM is preceded by (a) a warming tendency in middle and lower troposphere (850-500 hPa) over central Siberia from January to March, (b) a La Niña-like zonal dipole sea surface temperature pattern over the tropical Pacific in March, and (c) a dipole sea level pressure pattern with negative center in subtropics and positive center over high latitude of Southern Hemisphere in January. The PEM built on these predictors achieves a cross-validated reforecast temporal correlation coefficient (TCC) skill of 0.84 for the period of 1979-2004, and an independent forecast TCC skill of 0.72 for the period 2005-2014. The STPM is built on the extended-range forecast perspective. Pentad data are used to predict a zonal wind index over the South China Sea region. Similar to PEM, the STPM is constructed using 1979-2004 data. Based on the forecasted zonal wind index, the independent forecast of the SCSSM onset dates achieves a TCC skill of 0.90 for 2005-2014. The STPM provides more detailed information for the intraseasonal evolution during the period of the SCSSM onset (pentad 25-35). The two models proposed herein are expected to facilitate the real-time prediction of the SCSSM onset.
Thermospheric mass density model error variance as a function of time scale
NASA Astrophysics Data System (ADS)
Emmert, J. T.; Sutton, E. K.
2017-12-01
In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
Two models for evaluating landslide hazards
Davis, J.C.; Chung, C.-J.; Ohlmacher, G.C.
2006-01-01
Two alternative procedures for estimating landslide hazards were evaluated using data on topographic digital elevation models (DEMs) and bedrock lithologies in an area adjacent to the Missouri River in Atchison County, Kansas, USA. The two procedures are based on the likelihood ratio model but utilize different assumptions. The empirical likelihood ratio model is based on non-parametric empirical univariate frequency distribution functions under an assumption of conditional independence, while the multivariate logistic discriminant model assumes that likelihood ratios can be expressed in terms of logistic functions. The relative hazards of occurrence of landslides were estimated by an empirical likelihood ratio model and by multivariate logistic discriminant analysis. Predictor variables consisted of grids containing topographic elevations, slope angles, and slope aspects calculated from a 30-m DEM. An integer grid of coded bedrock lithologies taken from digitized geologic maps was also used as a predictor variable. Both statistical models yield relative estimates in the form of the proportion of total map area predicted to already contain or to be the site of future landslides. The stabilities of estimates were checked by cross-validation of results from random subsamples, using each of the two procedures. Cell-by-cell comparisons of hazard maps made by the two models show that the two sets of estimates are virtually identical. This suggests that the empirical likelihood ratio and the logistic discriminant analysis models are robust with respect to the conditional independence assumption and the logistic function assumption, respectively, and that either model can be used successfully to evaluate landslide hazards. © 2006.
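Under the conditional-independence assumption, the empirical likelihood ratio for a map cell is the product of per-variable ratios of class-conditional frequencies. A simplified sketch with a single continuous predictor binned by histogram (hypothetical data, not the Atchison County grids):

```python
import numpy as np

def empirical_likelihood_ratio(x_event, x_background, x_new, bins=10):
    """Univariate empirical likelihood ratio f(x|event)/f(x|background),
    estimated from histograms on common bin edges. Under conditional
    independence, a multivariate ratio is the product of such terms.
    Bins with no background observations yield inf."""
    edges = np.histogram_bin_edges(np.concatenate([x_event, x_background]), bins)
    f_event, _ = np.histogram(x_event, edges, density=True)
    f_back, _ = np.histogram(x_background, edges, density=True)
    idx = np.clip(np.digitize(x_new, edges) - 1, 0, bins - 1)
    with np.errstate(divide="ignore", invalid="ignore"):
        return f_event[idx] / f_back[idx]

# Toy usage: slope angles (degrees) at landslide and non-landslide cells.
rng = np.random.default_rng(1)
slides = rng.normal(25, 5, 200)    # hypothetical landslide cells
stable = rng.normal(12, 6, 2000)   # hypothetical stable cells
print(empirical_likelihood_ratio(slides, stable, np.array([8.0, 20.0, 30.0])))
```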
Zhou, Qingping; Jiang, Haiyan; Wang, Jianzhou; Zhou, Jianling
2014-10-15
Exposure to high concentrations of fine particulate matter (PM₂.₅) can cause serious health problems because PM₂.₅ contains microscopic solid particles or liquid droplets that are sufficiently small to be inhaled deep into human lungs. Thus, daily prediction of PM₂.₅ levels is notably important for regulatory plans that inform the public and restrict social activities in advance when harmful episodes are foreseen. A hybrid EEMD-GRNN (ensemble empirical mode decomposition-general regression neural network) model based on data preprocessing and analysis is first proposed in this paper for one-day-ahead prediction of PM₂.₅ concentrations. The EEMD part is utilized to decompose the original PM₂.₅ data into several intrinsic mode functions (IMFs), while the GRNN part is used for the prediction of each IMF. The hybrid EEMD-GRNN model is trained using input variables obtained from a principal component regression (PCR) model to remove redundancy. These input variables accurately and succinctly reflect the relationships between PM₂.₅ and both air quality and meteorological data. The model is trained with data from January 1 to November 1, 2013 and is validated with data from November 2 to November 21, 2013 in Xi'an, China. The experimental results show that the developed hybrid EEMD-GRNN model outperforms a single GRNN model without EEMD, a multiple linear regression (MLR) model, a PCR model, and a traditional autoregressive integrated moving average (ARIMA) model. The hybrid model, with fast and accurate results, can be used to develop rapid air quality warning systems. Copyright © 2014 Elsevier B.V. All rights reserved.
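A sketch of the hybrid scheme under stated assumptions: EEMD (via the PyEMD package, assumed available) splits the series into IMFs, a simple regressor forecasts each IMF one step ahead from lagged values, and the per-IMF forecasts are summed. A k-nearest-neighbors regressor stands in for the paper's GRNN; the data are synthetic.

```python
import numpy as np
from PyEMD import EEMD                      # pip install EMD-signal (assumed)
from sklearn.neighbors import KNeighborsRegressor

def forecast_next(series, lags=3):
    """One-step-ahead forecast of a single IMF from its lagged values.
    KNN regression is a stand-in for the GRNN used in the paper."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    model = KNeighborsRegressor(n_neighbors=5).fit(X, y)
    return model.predict(series[-lags:].reshape(1, -1))[0]

rng = np.random.default_rng(0)
pm25 = 60 + 20 * np.sin(np.arange(300) / 10) + 10 * rng.normal(size=300)  # synthetic

imfs = EEMD(trials=50).eemd(pm25)           # decompose into IMFs (plus residue)
prediction = sum(forecast_next(imf) for imf in imfs)
print(f"one-day-ahead PM2.5 forecast ≈ {prediction:.1f}")
```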
Machine learning approaches for estimation of prediction interval for the model output.
Shrestha, Durga L; Solomatine, Dimitri P
2006-03-01
A novel method for estimating prediction uncertainty using machine learning techniques is presented. Uncertainty is expressed in the form of the two quantiles (constituting the prediction interval) of the underlying distribution of prediction errors. The idea is to partition the input space into different zones or clusters having similar model errors using fuzzy c-means clustering. The prediction interval is constructed for each cluster on the basis of empirical distributions of the errors associated with all instances belonging to the cluster under consideration and propagated from each cluster to the examples according to their membership grades in each cluster. Then a regression model is built for in-sample data using computed prediction limits as targets, and finally, this model is applied to estimate the prediction intervals (limits) for out-of-sample data. The method was tested on artificial and real hydrologic data sets using various machine learning techniques. Preliminary results show that the method is superior to other methods estimating the prediction interval. A new method for evaluating performance for estimating prediction interval is proposed as well.
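A simplified sketch of the idea, with hard k-means clusters standing in for fuzzy c-means and crisp (0/1) assignments in place of membership grades: per-cluster empirical error quantiles give the prediction interval for new points assigned to each cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_prediction_intervals(X, errors, X_new, n_clusters=3, alpha=0.1):
    """Per-cluster empirical prediction intervals for model errors.
    Hard k-means is a simplification of the fuzzy c-means clustering
    (with membership-grade propagation) used in the paper."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    lo = np.array([np.quantile(errors[km.labels_ == c], alpha / 2)
                   for c in range(n_clusters)])
    hi = np.array([np.quantile(errors[km.labels_ == c], 1 - alpha / 2)
                   for c in range(n_clusters)])
    c_new = km.predict(X_new)
    return lo[c_new], hi[c_new]

# Toy usage: heteroscedastic errors that grow with the input value.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
errors = rng.normal(scale=0.2 + 0.3 * X[:, 0])
print(cluster_prediction_intervals(X, errors, np.array([[1.0], [9.0]])))
```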
Prediction of winter precipitation over northwest India using ocean heat fluxes
NASA Astrophysics Data System (ADS)
Nageswararao, M. M.; Mohanty, U. C.; Osuri, Krishna K.; Ramakrishna, S. S. V. S.
2016-10-01
The winter precipitation (December-February) over northwest India (NWI) is highly variable in time and space. The maximum precipitation occurs over the Himalaya region and decreases towards the south of NWI. Winter precipitation is important for the water resources and agriculture sectors of the region and for the economy of the country, yet providing a seasonal outlook for regional-scale precipitation remains a challenging task for the scientific community. Oceanic heat fluxes are known to be strongly linked with both the ocean and the atmosphere. Hence, in this study, we examined the relationship of NWI winter precipitation with total downward ocean heat fluxes at the global ocean surface; 15 regions with significant correlations (90% confidence level) from August to November are identified. These strong relations motivate the development of empirical models for predicting winter precipitation over NWI. Multiple linear regression (MLR) and principal component regression (PCR) models are developed and evaluated using leave-one-out cross-validation. The developed regression models are able to predict the winter precipitation patterns over NWI with significant (99% confidence level) index of agreement and correlations. Moreover, these models capture the signals of extremes, but cannot reach the peaks (excess and deficit) of the observations. PCR performs better than MLR in predicting winter precipitation over NWI. Therefore, the total downward ocean heat fluxes at the surface from August to November have a significant impact on seasonal winter precipitation over NWI. These relationships are useful for developing empirical models and make it feasible to predict winter precipitation over NWI with sufficient lead time for various risk management sectors.
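A minimal PCR sketch with scikit-learn, using synthetic data in place of the heat-flux predictors: PCA compresses the correlated flux regions, then linear regression predicts seasonal precipitation from the leading components, evaluated by leave-one-out cross-validation as in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_years, n_regions = 35, 15                 # sized after the study; data simulated
X = rng.normal(size=(n_years, n_regions))   # Aug-Nov heat-flux indices (synthetic)
y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + 0.3 * rng.normal(size=n_years)

pcr = make_pipeline(PCA(n_components=4), LinearRegression())

# Leave-one-out cross-validation: refit with each year held out in turn.
preds = np.array([
    pcr.fit(np.delete(X, i, 0), np.delete(y, i)).predict(X[i:i + 1])[0]
    for i in range(n_years)
])
print(f"LOOCV correlation = {np.corrcoef(preds, y)[0, 1]:.2f}")
```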
Scaling rules for the final decline to extinction
Griffen, Blaine D.; Drake, John M.
2009-01-01
Space–time scaling rules are ubiquitous in ecological phenomena. Current theory postulates three scaling rules that describe the duration of a population's final decline to extinction, although these predictions have not previously been empirically confirmed. We examine these scaling rules across a broader set of conditions, including a wide range of density-dependent patterns in the underlying population dynamics. We then report on tests of these predictions from experiments using the cladoceran Daphnia magna as a model. Our results support two predictions that: (i) the duration of population persistence is much greater than the duration of the final decline to extinction and (ii) the duration of the final decline to extinction increases with the logarithm of the population's estimated carrying capacity. However, our results do not support a third prediction that the duration of the final decline scales inversely with population growth rate. These findings not only support the current standard theory of population extinction but also introduce new empirical anomalies awaiting a theoretical explanation. PMID:19141422
Northward migration under a changing climate: a case study of blackgum (Nyssa Sylvatica)
Johanna Desprez; Basil V. Iannone III; Peilin Yang; Christopher M. Oswalt; Songlin Fei
2014-01-01
Species are predicted to shift their distribution ranges in response to climate change. Region-wide, empirically-based studies, however, are still limited to support these predictions. We used a model tree species, blackgum (Nyssa sylvatica), to study climate-induced range shift. Data collected from two separate sampling periods (1980s and 2007) by the USDA's Forestry...
Modeling behavioral thermoregulation in a climate change sentinel.
Moyer-Horner, Lucas; Mathewson, Paul D; Jones, Gavin M; Kearney, Michael R; Porter, Warren P
2015-12-01
When possible, many species will shift in elevation or latitude in response to rising temperatures. However, before such shifts occur, individuals will first tolerate environmental change and then modify their behavior to maintain heat balance. Behavioral thermoregulation allows animals a range of climatic tolerances and makes predicting geographic responses under future warming scenarios challenging. Because behavioral modification may reduce an individual's fecundity by, for example, limiting foraging time and thus caloric intake, we must consider the range of behavioral options available for thermoregulation to accurately predict climate change impacts on individual species. To date, few studies have identified mechanistic links between an organism's daily activities and the need to thermoregulate. We used a biophysical model, Niche Mapper, to mechanistically model microclimate conditions and thermoregulatory behavior for a temperature-sensitive mammal, the American pika (Ochotona princeps). Niche Mapper accurately simulated microclimate conditions, as well as empirical metabolic chamber data for a range of fur properties, animal sizes, and environmental parameters. Niche Mapper predicted pikas would be behaviorally constrained because of the need to thermoregulate during the hottest times of the day. We also showed that pikas at low elevations could receive energetic benefits by being smaller in size and maintaining summer pelage during longer stretches of the active season under a future warming scenario. We observed pika behavior for 288 h in Glacier National Park, Montana, and thermally characterized their rocky, montane environment. We found that pikas were most active when temperatures were cooler, and at sites characterized by high elevations and north-facing slopes. Pikas became significantly less active across a suite of behaviors in the field when temperatures surpassed 20°C, which supported a metabolic threshold predicted by Niche Mapper. In general, mechanistic predictions and empirical observations were congruent. This research is unique in providing both an empirical and mechanistic description of the effects of temperature on a mammalian sentinel of climate change, the American pika. Our results suggest that previously underinvestigated characteristics, specifically fur properties and body size, may play critical roles in pika populations' response to climate change. We also demonstrate the potential importance of considering behavioral thermoregulation and microclimate variability when predicting animal responses to climate change.
Marlowe, Hannah; McEntaffer, Randall L; Tutt, James H; DeRoo, Casey T; Miles, Drew M; Goray, Leonid I; Soltwisch, Victor; Scholze, Frank; Herrero, Analia Fernandez; Laubis, Christian
2016-07-20
Off-plane reflection gratings were previously predicted to have different efficiencies when the incident light is polarized in the transverse-magnetic (TM) versus transverse-electric (TE) orientations with respect to the grating grooves. However, more recent theoretical calculations which rigorously account for finitely conducting, rather than perfectly conducting, grating materials no longer predict significant polarization sensitivity. We present the first empirical results for radially ruled, laminar groove profile gratings in the off-plane mount, which demonstrate no difference in TM versus TE efficiency across our entire 300-1500 eV bandpass. These measurements together with the recent theoretical results confirm that grazing incidence off-plane reflection gratings using real, not perfectly conducting, materials are not polarization sensitive.
Empirical calibration of the near-infrared Ca II triplet - III. Fitting functions
NASA Astrophysics Data System (ADS)
Cenarro, A. J.; Gorgas, J.; Cardiel, N.; Vazdekis, A.; Peletier, R. F.
2002-02-01
Using a near-infrared stellar library of 706 stars with a wide coverage of atmospheric parameters, we study the behaviour of the Ca II triplet strength in terms of effective temperature, surface gravity and metallicity. Empirical fitting functions for recently defined line-strength indices, namely CaT*, CaT and PaT, are provided. These functions can be easily implemented into stellar population models to provide accurate predictions for integrated Ca II strengths. We also present a thorough study of the various error sources and their relation to the residuals of the derived fitting functions. Finally, the derived functional forms and the behaviour of the predicted Ca II strengths are compared with those of previous works in the field.
Big data prediction of durations for online collective actions based on peak's timing
NASA Astrophysics Data System (ADS)
Nie, Shizhao; Wang, Zheng; Pujia, Wangmo; Nie, Yuan; Lu, Peng
2018-02-01
The Peak Model states that each collective action has a life cycle comprising four periods, "prepare", "outbreak", "peak", and "vanish", and that the peak determines the maximum energy and shapes the whole process. Re-simulation of the Peak Model indicates that there is a seemingly stable ratio between the peak's timing (TP) and the total span (T), or duration, of collective actions, which needs further validation against empirical data. Therefore, daily big data on online collective actions, obtained by online data recording and mining of websites, are applied to validate the model; the key is to check the ratio between the peak's timing and the total span. The empirical big data verify that the ratio between TP and T is stable and, furthermore, appears to be normally distributed. This rule holds both for the general case and for sub-types of collective actions. Given the distribution of the ratio, an estimated probability density function can be obtained, and the span can therefore be predicted from the peak's timing. Under a big-data scenario, the instant span (how long a collective action lasts or when it ends) can be monitored and predicted in real time. With denser data, the estimate of the ratio's distribution becomes more robust, and the prediction of collective actions' spans or durations becomes more accurate.
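In practice, the prediction scheme described above reduces to fitting a distribution to historical TP/T ratios and inverting it once a new action's peak is observed. Below is a minimal sketch in Python, assuming (as the abstract suggests) a roughly normal ratio; the sample events, the quantile-based interval, and all numbers are illustrative, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical training events: peak timing TP and total span T (in days).
peak_times = np.array([3.0, 5.0, 2.0, 4.0, 6.0, 3.5])
total_spans = np.array([9.0, 16.0, 6.5, 12.0, 20.0, 11.0])

ratios = peak_times / total_spans        # r = TP / T for each past event
mu, sigma = stats.norm.fit(ratios)       # normal fit, as the abstract suggests

def predict_span(tp, q_low=0.1, q_high=0.9):
    """Point estimate and interval for T given an observed peak timing TP."""
    point = tp / mu                       # invert the mean ratio: T = TP / E[r]
    # A larger ratio implies a shorter span, so the quantiles swap roles.
    lo = tp / stats.norm.ppf(q_high, loc=mu, scale=sigma)
    hi = tp / stats.norm.ppf(q_low, loc=mu, scale=sigma)
    return point, (lo, hi)

print(predict_span(4.0))                  # monitor a live action's peak, get T
```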
Alma, Andrea Marina; Farji-Brener, Alejandro G; Elizalde, Luciana
2017-09-01
Empirical data about food size carried by central-place foragers often do not fit the optimum predicted by classical foraging theory. Traditionally, biotic constraints such as predation risk and competition have been proposed to explain this inconsistency, leaving aside the possible role of abiotic factors. Here we documented how wind affects the load size of a central-place forager (leaf-cutting ants) through a mathematical model including the whole foraging process. The model showed that as wind speed at ground level increased from 0 to 2 km/h, load size decreased from 91 to 30 mm², a prediction that agreed with empirical data from windy zones, highlighting the relevance of considering abiotic factors to predict foraging behavior. Furthermore, wind reduced the range of load sizes that workers should select to maintain a similar rate of food intake, and decreased the foraging rate by ∼70% when wind speed increased by 1 km/h. These results suggest that wind could reduce the fitness of colonies and limit the geographic distribution of leaf-cutting ants. The developed model offers a complementary explanation for why load size in central-place foragers may not fit theoretical predictions and could serve as a basis to study the effects of other abiotic factors that influence foraging.
Fundamental Algorithms of the Goddard Battery Model
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1985-01-01
The Goddard Space Flight Center (GSFC) is currently producing a computer model to predict nickel-cadmium (NiCd) battery performance in a Low Earth Orbit (LEO) cycling regime. The model proper is still in development, but its inherent, fundamental algorithms (or methodologies) are defined. At present, the model depends closely on empirical data, and the data base currently used is of questionable accuracy. Even so, very good correlations have been obtained between model predictions and actual cycling data. A more accurate and encompassing data base has been generated to serve two functions: to show the limitations of the current data base, and to be embedded in the model proper for more accurate predictions. The fundamental algorithms of the model and the present data base and its limitations are described, and a brief preliminary analysis of the new data base and its verification of the model's methodology are presented.
NASA Astrophysics Data System (ADS)
Oikawa, P. Y.; Jenerette, G. D.; Knox, S. H.; Sturtevant, C.; Verfaillie, J.; Dronova, I.; Poindexter, C. M.; Eichelmann, E.; Baldocchi, D. D.
2017-01-01
Wetlands and flooded peatlands can sequester large amounts of carbon (C) and have high greenhouse gas mitigation potential. There is growing interest in financing wetland restoration using C markets; however, this requires careful accounting of both CO2 and CH4 exchange at the ecosystem scale. Here we present a new model, the PEPRMT model (Peatland Ecosystem Photosynthesis Respiration and Methane Transport), which consists of a hierarchy of biogeochemical models designed to estimate CO2 and CH4 exchange in restored managed wetlands. Empirical models using temperature and/or photosynthesis to predict respiration and CH4 production were contrasted with a more process-based model that simulated substrate-limited respiration and CH4 production using multiple carbon pools. Models were parameterized by using a model-data fusion approach with multiple years of eddy covariance data collected in a recently restored wetland and a mature restored wetland. A third recently restored wetland site was used for model validation. During model validation, the process-based model explained 70% of the variance in net ecosystem exchange of CO2 (NEE) and 50% of the variance in CH4 exchange. Not accounting for high respiration following restoration led to empirical models overestimating annual NEE by 33-51%. By employing a model-data fusion approach we provide rigorous estimates of uncertainty in model predictions, accounting for uncertainty in data, model parameters, and model structure. The PEPRMT model is a valuable tool for understanding carbon cycling in restored wetlands and for application in carbon market-funded wetland restoration, thereby advancing opportunities to counteract the extensive degradation of wetlands and flooded peatlands.
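To make the contrast between the empirical and process-based respiration formulations concrete, here is a minimal sketch assuming a simple Q10 temperature response and two substrate pools; the pool sizes, rate constants, and partitioning are illustrative placeholders rather than PEPRMT's calibrated structure:

```python
import numpy as np

def resp_empirical(temp_c, r_base=2.0, q10=2.0, t_ref=10.0):
    """Empirical formulation: respiration as a pure Q10 function of temperature."""
    return r_base * q10 ** ((temp_c - t_ref) / 10.0)

def resp_two_pool(temp_c, labile, recalcitrant, k_lab=0.01, k_rec=0.0005,
                  q10=2.0, t_ref=10.0):
    """Process-based formulation: respiration limited by substrate in two pools."""
    scale = q10 ** ((temp_c - t_ref) / 10.0)
    return scale * (k_lab * labile + k_rec * recalcitrant)

# One model year at a daily step. The empirical model never sees the
# post-restoration labile-carbon pulse; the two-pool model lets respiration
# decline as the labile pool is consumed.
labile, recalcitrant = 500.0, 5000.0     # g C m^-2, hypothetical initial pools
temps = 10 + 8 * np.sin(np.linspace(0, 2 * np.pi, 365))
fluxes = []
for t in temps:
    flux = resp_two_pool(t, labile, recalcitrant)
    labile -= 0.95 * flux                # assume most respired C is labile
    recalcitrant -= 0.05 * flux
    fluxes.append(flux)
print(fluxes[0], fluxes[-1])             # respiration declines with substrate
```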
DOT National Transportation Integrated Search
2007-08-01
The objective of this research study was to develop performance characteristics or variables (e.g., ride quality, rutting, fatigue cracking, transverse cracking) of flexible pavements in Montana, and to use these characteristics in the implementa...
NASA Technical Reports Server (NTRS)
Gentz, Steven J.; Ordway, David O; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (approx. 9 inches from the source) dominated by direct wave propagation, mid-field environment (approx. 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This report documents the outcome of the assessment.
Predicting the enthalpies of melting and vaporization for pure components
NASA Astrophysics Data System (ADS)
Esina, Z. N.; Korchuganova, M. R.
2014-12-01
A mathematical model of the melting and vaporization enthalpies of organic components based on the theory of thermodynamic similarity is proposed. In this empirical model, the phase transition enthalpy for the homological series of n-alkanes, carboxylic acids, n-alcohols, glycols, and glycol ethers is presented as a function of the molecular mass, the number of carbon atoms in a molecule, and the normal transition temperature. The model also uses a critical or triple point temperature. It is shown that the results from predicting the melting and vaporization enthalpies enable the calculation of binary phase diagrams.
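As an illustration of how a thermodynamic-similarity correlation of this kind can be fitted, the sketch below regresses a hypothetical power-law form on molecular mass, carbon number, and normal transition temperature. The functional form and the rough n-alkane melting values are assumptions for demonstration, not the authors' model:

```python
import numpy as np
from scipy.optimize import curve_fit

def enthalpy_model(X, a, b, c):
    """Hypothetical similarity form: dH = a * M**b * n**c * (T_tr / 1000) kJ/mol."""
    M, n, T_tr = X
    return a * M**b * n**c * (T_tr / 1000.0)

# Rough illustrative n-alkane melting data: molar mass (g/mol), carbon
# number, normal melting temperature (K), and fusion enthalpy (kJ/mol).
M = np.array([72.2, 86.2, 100.2, 114.2, 128.3])
n = np.array([5.0, 6.0, 7.0, 8.0, 9.0])
T = np.array([143.5, 177.8, 182.6, 216.4, 219.7])
dH = np.array([8.4, 13.1, 14.2, 20.7, 15.5])

params, _ = curve_fit(enthalpy_model, (M, n, T), dH, p0=(0.1, 1.0, 0.5),
                      maxfev=20000)
print(params)   # fitted (a, b, c); quality rests entirely on the chosen form
```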
NASA Technical Reports Server (NTRS)
Gentz, Steven J.; Ordway, David O.; Parsons, David S.; Garrison, Craig M.; Rodgers, C. Steven; Collins, Brian W.
2015-01-01
The NASA Engineering and Safety Center (NESC) received a request to develop an analysis model based on both frequency response and wave propagation analyses for predicting shock response spectrum (SRS) on composite materials subjected to pyroshock loading. The model would account for near-field environment (9 inches from the source) dominated by direct wave propagation, mid-field environment (approximately 2 feet from the source) characterized by wave propagation and structural resonances, and far-field environment dominated by lower frequency bending waves in the structure. This document contains appendices to the Volume I report.
Roque, Carlos; Cardoso, João Lourenço
2014-02-01
Crash prediction models play a major role in highway safety analysis. These models can be used for various purposes, such as predicting the number of road crashes or establishing relationships between crashes and different covariates. However, the appropriate choice of functional form for these models is generally not discussed in the road safety research literature. In the case of run-off-the-road crashes, empirical evidence and logical considerations lead to the conclusion that the relationship between expected crash frequency and traffic flow is not monotonically increasing. Copyright © 2013 Elsevier Ltd. All rights reserved.
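One concrete way to encode such a non-monotonic relationship is a Hoerl-type safety performance function, E[y] = a·Q^b·exp(c·Q) with c < 0, which rises at low flows and falls at high flows; this form is one common choice in the safety-performance-function literature, not necessarily the authors'. A minimal sketch with illustrative parameters:

```python
import numpy as np

def expected_crashes(aadt, a=1e-4, b=0.8, c=-5e-5):
    """Expected run-off-road crash frequency as a function of traffic flow."""
    return a * aadt**b * np.exp(c * aadt)

# The curve rises at low flows and declines at high flows; with these
# illustrative parameters the peak sits at Q = -b/c = 16,000 veh/day.
for q in (1000, 8000, 16000, 30000, 40000):
    print(q, round(expected_crashes(q), 4))
```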
Field investigation of the drift shadow
Su, G.W.; Kneafsey, T.J.; Ghezzehei, T.A.; Cook, P.J.; Marshall, B.D.
2006-01-01
The "Drift Shadow" is defined as the relatively drier region that forms below subsurface cavities or drifts in unsaturated rock. Its existence has been predicted through analytical and numerical models of unsaturated flow. However, these theoretical predictions have not been demonstrated empirically to date. In this project we plan to test the drift shadow concept through field investigations and compare our observations to simulations. Based on modeling studies we have an identified a suitable site to perform the study at an inactive mine in a sandstone formation. Pretest modeling studies and preliminary characterization of the site are being used to develop the field scale tests.
Cascading walks model for human mobility patterns.
Han, Xiao-Pu; Wang, Xiang-Wen; Yan, Xiao-Yong; Wang, Bing-Hong
2015-01-01
Uncovering the mechanism behind the scaling laws and the series of anomalies in human trajectories is of fundamental significance in understanding many spatio-temporal phenomena. Recently, several models, e.g. the exploration-and-return model (Song et al., 2010) and the radiation model for intercity travel (Simini et al., 2012), have been proposed to study the origin of these anomalies and to predict human movements. However, an agent-based model that reproduces most empirical observations without a priori assumptions is still lacking. In this paper, considering the empirical findings on the correlations of move lengths and staying times in human trips, we propose a simple model, based mainly on cascading processes, to capture human mobility patterns. In this model, each long-range movement activates a series of shorter movements that are organized by the law of localized exploration and preferential return within a prescribed region. Based on numerical simulations and analytical studies, we show more than five statistical characteristics that are consistent with the empirical observations, including several types of scaling anomalies and ultraslow diffusion properties, implying that cascading processes associated with localized exploration and preferential return are indeed key to understanding human mobility. Moreover, the model reproduces both diverse individual mobility and aggregated scaling of displacements, bridging the micro and macro patterns of human mobility. In summary, our model explains most empirical findings and provides a deeper understanding of the emergence of human mobility patterns.
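A minimal agent-based sketch of the cascading mechanism described above: each heavy-tailed long-range jump triggers a cascade of short moves mixing localized exploration with preferential returns to previously visited places. The step-length distributions, cascade length, and return rule are simplified assumptions, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def cascading_walk(n_jumps=50, cascade_len=20, alpha=1.6, rho=0.6):
    pos = np.zeros(2)
    visits = [pos.copy()]                 # history used for preferential return
    for _ in range(n_jumps):
        # Long-range jump: heavy-tailed (Pareto) length, random direction.
        r = (1.0 + rng.pareto(alpha)) * 10.0
        phi = rng.uniform(0, 2 * np.pi)
        pos = pos + r * np.array([np.cos(phi), np.sin(phi)])
        anchor = pos.copy()
        for _ in range(cascade_len):      # cascade of shorter movements
            if rng.random() < rho and len(visits) > 1:
                # Preferential return: sampling uniformly from the visit
                # history weights locations by how often they were visited.
                pos = visits[rng.integers(len(visits))].copy()
            else:
                # Localized exploration around the cascade's anchor point.
                pos = anchor + rng.normal(scale=1.0, size=2)
            visits.append(pos.copy())
    return np.array(visits)

traj = cascading_walk()
steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
print(traj.shape, steps.mean(), steps.max())   # inspect displacement statistics
```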
NASA Astrophysics Data System (ADS)
Houde, Jean-Francois
In the first essay of this dissertation, I study an empirical model of spatial competition. The main feature of my approach is to formally specify commuting paths as the "locations" of consumers in a Hotelling-type model of spatial competition. The main consequence of this location assumption is that the substitution patterns between stations depend in an intuitive way on the structure of the road network and the direction of traffic flows. The demand-side of the model is estimated by combining a model of traffic allocation with econometric techniques used to estimate models of demand for differentiated products (Berry, Levinsohn and Pakes (1995)). The estimated parameters are then used to evaluate the importance of commuting patterns in explaining the distribution of gasoline sales, and compare the economic predictions of the model with the standard home-location model. In the second and third essays, I examine empirically the effect of a price floor regulation on the dynamic and static equilibrium outcomes of the gasoline retail industry. In particular, in the second essay I study empirically the dynamic entry and exit decisions of gasoline stations, and measure the impact of a price floor on the continuation values of staying in the industry. In the third essay, I develop and estimate a static model of quantity competition subject to a price floor regulation. Both models are estimated using a rich panel dataset on the Quebec gasoline retail market before and after the implementation of a price floor regulation.
NASA Astrophysics Data System (ADS)
Moon, Joon-Young; Kim, Junhyeok; Ko, Tae-Wook; Kim, Minkyung; Iturria-Medina, Yasser; Choi, Jee-Hyun; Lee, Joseph; Mashour, George A.; Lee, Uncheol
2017-04-01
Identifying how spatially distributed information becomes integrated in the brain is essential to understanding higher cognitive functions. Previous computational and empirical studies suggest a significant influence of brain network structure on brain network function. However, there have been few analytical approaches to explain the role of network structure in shaping regional activities and directionality patterns. In this study, analytical methods are applied to a coupled oscillator model implemented in inhomogeneous networks. We first derive a mathematical principle that explains the emergence of directionality from the underlying brain network structure. We then apply the analytical methods to the anatomical brain networks of human, macaque, and mouse, successfully predicting simulation and empirical electroencephalographic data. The results demonstrate that the global directionality patterns in resting state brain networks can be predicted solely by their unique network structures. This study forms a foundation for a more comprehensive understanding of how neural information is directed and integrated in complex brain networks.
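As a toy version of the simulation side of such a study, the sketch below couples Kuramoto-type phase oscillators on an inhomogeneous, hub-containing network and extracts a node-level directionality proxy from the phase relations. The network, parameters, and phase-lead index are illustrative assumptions; the paper's analytical derivation is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_hub = 60, 5
A = (rng.random((N, N)) < 0.05).astype(float)
A[:n_hub, :] = (rng.random((n_hub, N)) < 0.4).astype(float)  # dense hub rows
A = np.triu(A, 1)
A = A + A.T                                    # undirected, no self-loops

omega = 2 * np.pi * rng.normal(10.0, 0.5, N)   # distributed frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, N)
K, dt, steps = 0.8, 0.001, 40000
lead = np.zeros(N)

for step in range(steps):
    diff = theta[None, :] - theta[:, None]     # entry (i, j) is theta_j - theta_i
    theta = theta + dt * (omega + K * (A * np.sin(diff)).sum(axis=1))
    if step > steps // 2:                      # accumulate after transients
        # Phase-lead proxy: positive when node i leads its neighbours on average.
        lead += (A * np.sin(-diff)).sum(axis=1) / np.maximum(A.sum(axis=1), 1.0)

degree = A.sum(axis=1)
# The study's claim is that directionality follows structure; here we simply
# measure how the phase-lead proxy covaries with node degree.
print(np.corrcoef(degree, lead)[0, 1])
```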
Kail, Jochem; Guse, Björn; Radinger, Johannes; Schröder, Maria; Kiesel, Jens; Kleinhans, Maarten; Schuurman, Filip; Fohrer, Nicola; Hering, Daniel; Wolter, Christian
2015-01-01
River biota are affected by pressures acting from the global down to the reach scale, but most approaches for predicting river biota focus on processes and habitats at the reach or segment scale. Moreover, these approaches do not consider long-term morphological changes that affect habitat conditions. In this study, a modelling framework was further developed and tested to assess the effect of pressures at different spatial scales on reach-scale habitat conditions and biota. Ecohydrological and 1D hydrodynamic models were used to predict discharge and water quality at the catchment scale and the resulting water level at the downstream end of a study reach. Long-term reach morphology was modelled using empirical regime equations, meander migration and 2D morphodynamic models. The respective flow and substrate conditions in the study reach were predicted using a 2D hydrodynamic model, and the suitability of these habitats was assessed with novel habitat models. In addition, dispersal models for fish and macroinvertebrates were developed to assess the re-colonization potential and, finally, to compare habitat suitability with the availability of habitats and the ability of species to colonize them. Applicability was tested and model performance was assessed by comparing observed and predicted conditions in the lowland Treene River in northern Germany. Technically, it was possible to link the different models, but future applications would benefit from the development of open-source software for all modelling steps to enable fully automated model runs. Future research needs concern the physical modelling of long-term morphodynamics, feedback of biota (e.g., macrophytes) on abiotic habitat conditions, species interactions, and empirical data on the hydraulic habitat suitability and dispersal abilities of macroinvertebrates. The modelling framework is flexible and allows for including additional models and investigating different research and management questions, e.g., in climate impact research as well as river restoration and management. PMID:26114430
Optimizing Blasting’s Air Overpressure Prediction Model using Swarm Intelligence
NASA Astrophysics Data System (ADS)
Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd
2018-04-01
Air overpressure (AOp) resulting from blasting can cause damage and nuisance to nearby civilians. Thus, it is important to be able to predict AOp accurately. In this study, eight different artificial neural network (ANN) models were developed for the prediction of AOp. The ANN models were trained using different variants of the Particle Swarm Optimization (PSO) algorithm. AOp predictions were also made using an empirical equation suggested by the United States Bureau of Mines (USBM) to serve as a benchmark. To develop the models, 76 blasting operations in Hulu Langat were investigated. All the ANN models were found to outperform the USBM equation on three performance metrics: root mean square error (RMSE), mean absolute percentage error (MAPE) and coefficient of determination (R2). Using a performance ranking method, MSO-Rand-Mut was determined to be the best prediction model for AOp, with RMSE = 2.18, MAPE = 1.73% and R2 = 0.97. The results show that ANN models trained using PSO are capable of predicting AOp with great accuracy.
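A minimal sketch of the PSO-trains-ANN idea: particle swarm optimization searches the weight space of a small feed-forward network to minimize RMSE. The two inputs, network size, PSO hyperparameters, and synthetic data are assumptions for illustration, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(7)
N_HIDDEN = 5

def ann_forward(w, X):
    """Tiny 2-input, 5-hidden-unit, 1-output MLP; w is a flat weight vector."""
    n_in = X.shape[1]
    i = 0
    W1 = w[i:i + n_in * N_HIDDEN].reshape(n_in, N_HIDDEN); i += n_in * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = w[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def rmse(w, X, y):
    return np.sqrt(np.mean((ann_forward(w, X) - y) ** 2))

# Hypothetical blast records: scaled charge mass and distance -> AOp proxy.
X = rng.uniform(0.0, 1.0, (76, 2))
y = 120.0 - 20.0 * X[:, 1] + 8.0 * X[:, 0] + rng.normal(0, 1, 76)
y = (y - y.mean()) / y.std()              # standardize the target for training

dim = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1
n_particles = 30
pos = rng.normal(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([rmse(p, X, y) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):                      # basic global-best PSO update
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([rmse(p, X, y) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("best training RMSE (standardized units):", pbest_f.min())
```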
Modeling of Fume Formation from Shielded Metal Arc Welding Process
NASA Astrophysics Data System (ADS)
Sivapirakasam, S. P.; Mohan, Sreejith; Santhosh Kumar, M. C.; Surianarayanan, M.
2017-04-01
In this study, a semi-empirical model of the fume formation rate (FFR) from a shielded metal arc welding (SMAW) process has been developed. The model was developed for DC electrode positive (DCEP) operation and involves calculating the droplet temperature, the surface area of the droplet, and the partial vapor pressures of the droplet's constituents to predict the FFR. The model was further extended to predict FFR from nano-coated electrodes. The model estimates the FFR for Fe and Mn, assuming constant proportions of the other elements in the electrode; Fe FFR was overestimated, while Mn FFR was underestimated. The contributions of spatter and of other arc mechanisms responsible for fume formation were neglected. A good positive correlation was obtained between the predicted and experimental FFR values, which highlights the usefulness of the model.
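The vapor-pressure-driven core of such a model can be sketched with a Langmuir free-evaporation flux summed over droplet constituents, with partial pressures from Raoult's law. The Antoine-style coefficients, droplet temperature, and surface area below are illustrative placeholders, not the paper's calibrated parameters:

```python
import numpy as np

R = 8.314   # J mol^-1 K^-1

def langmuir_flux(p_partial, molar_mass, temp):
    """Free-evaporation mass flux (kg m^-2 s^-1) from kinetic theory."""
    return p_partial * np.sqrt(molar_mass / (2.0 * np.pi * R * temp))

def fume_formation_rate(temp, area, constituents):
    """FFR (kg/s): Langmuir flux over the droplet surface, summed over species."""
    total = 0.0
    for a, b, molar_mass, x_mole in constituents:
        p_pure = 10.0 ** (a - b / temp)     # illustrative Antoine-type form (Pa)
        p_partial = x_mole * p_pure         # Raoult's law for the droplet melt
        total += langmuir_flux(p_partial, molar_mass, temp)
    return area * total

# Hypothetical Fe and Mn entries: (A, B, molar mass kg/mol, mole fraction).
droplet = [(10.3, 19700.0, 0.05585, 0.97),
           (10.0, 17600.0, 0.05494, 0.03)]
print(fume_formation_rate(temp=2800.0, area=3e-6, constituents=droplet))  # kg/s
```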
Barwich, Ann-Sophie
2018-06-01
Empirical success is a central criterion for scientific decision-making. Yet its understanding in philosophical studies of science deserves renewed attention: Should philosophers think differently about the advancement of science when they deal with the uncertainty of outcome in ongoing research in comparison with historical episodes? This paper argues that normative appeals to empirical success in the evaluation of competing scientific explanations can result in unreliable conclusions, especially when we are looking at the changeability of direction in unsettled investigations. The challenges we encounter arise from the inherent dynamics of disciplinary and experimental objectives in research practice. In this paper we discuss how these dynamics inform the evaluation of empirical success by analyzing three of its requirements: data accommodation, instrumental reliability, and predictive power. We conclude that the assessment of empirical success in developing inquiry is set against the background of a model's interactive success and prospective value in an experimental context. Our argument is exemplified by the analysis of an apparent controversy surrounding the model of a quantum nose in research on olfaction. Notably, the public narrative of this controversy rests on a distorted perspective on measures of empirical success. Copyright © 2018 The Author. Published by Elsevier Ltd. All rights reserved.
Predicting language diversity with complex networks
Gubiec, Tomasz
2018-01-01
We analyze a model of social interactions with coevolution of the topology and the states of the nodes, which can be interpreted as a model of language change. We propose different rewiring mechanisms and perform numerical simulations for each. The results are compared with empirical data gathered from two online databases and from an anthropological study of the Solomon Islands. We study the behavior of the number of languages for different system sizes and find that only local rewiring, i.e. triadic closure, is capable of reproducing the empirical data in a qualitative manner. Furthermore, we resolve the contradiction between previous models and the Solomon Islands case. Our results demonstrate the importance of the topology of the network and of the rewiring mechanism in the process of language change. PMID:29702699
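A toy coevolving-network sketch in the spirit of the model: node states (languages) spread along edges while the topology rewires by triadic closure. The adoption probability, rewiring rule, and sizes are illustrative assumptions, not the paper's exact dynamics:

```python
import random

random.seed(3)
N, E, STEPS = 200, 600, 20000
state = list(range(N))            # initially every node speaks its own language

neigh = {i: set() for i in range(N)}
while sum(len(s) for s in neigh.values()) < 2 * E:   # build E random edges
    a, b = random.sample(range(N), 2)
    neigh[a].add(b); neigh[b].add(a)

for _ in range(STEPS):
    i = random.randrange(N)
    if not neigh[i]:
        continue
    j = random.choice(tuple(neigh[i]))
    if random.random() < 0.9:
        state[i] = state[j]                # adopt the neighbour's language
    else:
        # Triadic closure: rewire the i-j edge to a neighbour of the neighbour.
        candidates = neigh[j] - neigh[i] - {i}
        if candidates:
            k = random.choice(tuple(candidates))
            neigh[i].discard(j); neigh[j].discard(i)
            neigh[i].add(k); neigh[k].add(i)

print("surviving languages:", len(set(state)))
```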
NASA Astrophysics Data System (ADS)
Yin, Yip Chee; Hock-Eam, Lim
2012-09-01
Our empirical results show that GDP growth rates can be predicted more accurately for continents with fewer large economies than for smaller economies such as Malaysia. This difficulty is very likely positively correlated with subsidy or social security policies. The stage of economic development and the level of competitiveness also appear to have interactive effects on forecast stability. These results are largely independent of the forecasting procedure. For countries with highly stable economic growth, forecasting by model selection is better than model averaging. Overall, forecast weight averaging (FWA) is the better forecasting procedure in most countries; FWA also outperforms simple model averaging (SMA) and matches the forecasting ability of Bayesian model averaging (BMA) in almost all countries.
NASA Astrophysics Data System (ADS)
Neumann, D. W.; Zagona, E. A.; Rajagopalan, B.
2005-12-01
Warm summer stream temperatures due to low flows and high air temperatures are a critical water quality problem in many western U.S. river basins because they impact threatened fish species' habitat. Releases from storage reservoirs and river diversions are typically driven by human demands such as irrigation, municipal and industrial uses, and hydropower production. Historically, fish needs have not been formally incorporated in the operating procedures, which do not supply adequate flows for fish in the warmest, driest periods. One way to address this problem is for local and federal organizations to purchase water rights to be used to increase flows and hence decrease temperatures. A statistical model-predictive technique for efficient and effective use of a limited supply of fish water has been developed and incorporated in a Decision Support System (DSS) that can be used in an operations mode to make effective use of water acquired to mitigate warm stream temperatures. The DSS is a rule-based system that uses the empirical, statistical predictive model to predict maximum daily stream temperatures based on flows that meet the non-fish operating criteria, and to compute reservoir releases of allocated fish water when predicted temperatures exceed fish habitat temperature targets with a user-specified confidence of the temperature predictions. The empirical model is developed using a step-wise linear regression procedure to select significant predictors, and includes the computation of a prediction confidence interval to quantify the uncertainty of the prediction. The DSS also includes a strategy for managing a limited amount of water throughout the season based on degree-days, in which temperatures are allowed to exceed the preferred targets for a limited number of days that can be tolerated by the fish. The DSS is demonstrated by an example application to the Truckee River near Reno, Nevada, using historical flows from 1988 through 1994. In this case, the statistical model predicts maximum daily Truckee River stream temperatures in June, July, and August using predicted maximum daily air temperature and modeled average daily flow. The empirical relationship was created with a step-wise linear regression selection process applied to 1993 and 1994 data; its adjusted R2 value is 0.91. The model is validated using historic data and demonstrated in a predictive mode with a prediction confidence interval to quantify the uncertainty. Results indicate that the DSS could substantially reduce the number of target temperature violations, i.e., stream temperatures exceeding the target levels detrimental to fish habitat. The results show that large volumes of water are necessary to meet a temperature target with a high degree of certainty, and violations may still occur if all of the stored water is depleted. A lower degree of certainty requires less water, but there is a higher probability that the temperature targets will be exceeded. Adding rules that consider degree-days reduced the number of temperature violations without increasing the amount of water used. This work is described in detail in publications referenced in the URL below.
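The empirical core of such a DSS can be sketched as a regression of maximum daily stream temperature on forecast air temperature and modeled flow, with the upper prediction bound driving the release decision. The synthetic data, the 22 °C target, and the release rule below are illustrative assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 120
air_t = rng.uniform(20, 40, n)            # forecast max daily air temperature (C)
flow = rng.uniform(2, 20, n)              # modeled average daily flow (m^3/s)
stream_t = 5 + 0.6 * air_t - 0.4 * flow + rng.normal(0, 1, n)  # synthetic obs

X = sm.add_constant(np.column_stack([air_t, flow]))
fit = sm.OLS(stream_t, X).fit()

def needs_release(air_pred, flow_base, target=22.0, conf=0.90):
    """Release fish water when the upper prediction bound exceeds the target."""
    x_new = np.array([[1.0, air_pred, flow_base]])
    frame = fit.get_prediction(x_new).summary_frame(alpha=1 - conf)
    return bool(frame["obs_ci_upper"].iloc[0] > target)

print(fit.rsquared_adj, needs_release(air_pred=38.0, flow_base=4.0))
```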
Using Predictability for Lexical Segmentation.
Çöltekin, Çağrı
2017-09-01
This study investigates a strategy based on predictability of consecutive sub-lexical units in learning to segment a continuous speech stream into lexical units using computational modeling and simulations. Lexical segmentation is one of the early challenges during language acquisition, and it has been studied extensively through psycholinguistic experiments as well as computational methods. However, despite strong empirical evidence, the explicit use of predictability of basic sub-lexical units in models of segmentation is underexplored. This paper presents an incremental computational model of lexical segmentation for exploring the usefulness of predictability for lexical segmentation. We show that the predictability cue is a strong cue for segmentation. Contrary to earlier reports in the literature, the strategy yields state-of-the-art segmentation performance with an incremental computational model that uses only this particular cue in a cognitively plausible setting. The paper also reports an in-depth analysis of the model, investigating the conditions affecting the usefulness of the strategy. Copyright © 2016 Cognitive Science Society, Inc.
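To make the predictability cue concrete: in its simplest batch form, the learner tracks forward transitional probabilities between adjacent syllables and posits a boundary wherever predictability dips to a local minimum. The toy corpus and boundary rule below are illustrative; the paper's model is incremental and more sophisticated:

```python
import random
from collections import Counter

random.seed(0)
words = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
stream = [syl for _ in range(300) for syl in random.choice(words)]

# Forward transitional probability TP(b | a) = count(a, b) / count(a).
pair_counts = Counter(zip(stream, stream[1:]))
unigrams = Counter(stream[:-1])
tp = {(a, b): c / unigrams[a] for (a, b), c in pair_counts.items()}

# Boundary rule: cut where TP falls to a local minimum between its neighbours.
boundaries = []
for i in range(1, len(stream) - 2):
    left = tp[(stream[i - 1], stream[i])]
    mid = tp[(stream[i], stream[i + 1])]
    right = tp[(stream[i + 1], stream[i + 2])]
    if mid < left and mid < right:
        boundaries.append(i + 1)          # boundary falls before position i + 1

chunks, prev = [], 0
for b in boundaries[:9]:
    chunks.append("".join(stream[prev:b])); prev = b
print(chunks)                             # recovers tupiro / golabu / bidaku
```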
Harlow C. Landphair
1979-01-01
This paper traces the evolution of an empirical model used to objectively predict public response to scenic quality. It describes the methods used to develop the visual quality index model, explains the terms in the equation, and briefly illustrates how the model is applied and how it is tested. While the technical application of the model relies heavily on...
Multi-scale predictions of massive conifer mortality due to chronic temperature rise
NASA Astrophysics Data System (ADS)
McDowell, N. G.; Williams, A. P.; Xu, C.; Pockman, W. T.; Dickman, L. T.; Sevanto, S.; Pangle, R.; Limousin, J.; Plaut, J.; Mackay, D. S.; Ogee, J.; Domec, J. C.; Allen, C. D.; Fisher, R. A.; Jiang, X.; Muss, J. D.; Breshears, D. D.; Rauscher, S. A.; Koven, C.
2016-03-01
Global temperature rise and extremes accompanying drought threaten forests and their associated climatic feedbacks. Our ability to accurately simulate drought-induced forest impacts remains highly uncertain in part owing to our failure to integrate physiological measurements, regional-scale models, and dynamic global vegetation models (DGVMs). Here we show consistent predictions of widespread mortality of needleleaf evergreen trees (NET) within Southwest USA by 2100 using state-of-the-art models evaluated against empirical data sets. Experimentally, dominant Southwest USA NET species died when they fell below predawn water potential (Ψpd) thresholds (April-August mean) beyond which photosynthesis, hydraulic and stomatal conductance, and carbohydrate availability approached zero. The evaluated regional models accurately predicted NET Ψpd, and 91% of predictions (10 out of 11) exceeded mortality thresholds within the twenty-first century due to temperature rise. The independent DGVMs predicted >=50% loss of Northern Hemisphere NET by 2100, consistent with the NET findings for Southwest USA. Notably, the global models underestimated future mortality within Southwest USA, highlighting that predictions of future mortality within global models may be underestimates. Taken together, the validated regional predictions and the global simulations predict widespread conifer loss in coming decades under projected global warming.
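The threshold logic described above can be sketched directly: flag predicted mortality when the April-August mean predawn water potential falls below a species threshold. The threshold value and the synthetic series are illustrative, not the study's calibrated numbers:

```python
import numpy as np

PSI_THRESHOLD = -2.5   # MPa; hypothetical NET mortality threshold

def mortality_predicted(daily_psi_pd, month_of_day):
    """True when the April-August mean predawn water potential crosses the threshold."""
    in_window = (month_of_day >= 4) & (month_of_day <= 8)
    return bool(daily_psi_pd[in_window].mean() < PSI_THRESHOLD)

month_of_day = np.repeat(np.arange(1, 13), 30)            # simple 360-day year
psi = -1.0 - 2.0 * np.sin(np.linspace(0.0, np.pi, 360))   # drying through summer
print(mortality_predicted(psi, month_of_day))
```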
NASA Astrophysics Data System (ADS)
Ecker, Madeleine; Gerschler, Jochen B.; Vogel, Jan; Käbitz, Stefan; Hust, Friedrich; Dechent, Philipp; Sauer, Dirk Uwe
2012-10-01
Battery lifetime prognosis is a key requirement for the successful market introduction of electric and hybrid vehicles. This work aims at the development of a lifetime prediction approach based on an aging model for lithium-ion batteries. A multivariable analysis of a detailed series of accelerated lifetime experiments representing typical operating conditions in hybrid electric vehicles is presented. The impact of temperature and state of charge on impedance rise and capacity loss is quantified. The investigations are based on a high-power NMC/graphite lithium-ion battery with good cycle lifetime. The resulting mathematical functions are physically motivated by the occurring aging effects and are used to parameterize a semi-empirical aging model. An impedance-based electric-thermal model is coupled to the aging model to simulate the dynamic interaction between aging of the battery and its thermal and electric behavior. Based on these models, different drive cycles and management strategies can be analyzed with regard to their impact on lifetime, making the approach an important tool for vehicle designers and for the implementation of business models. A key contribution of the paper is the parameterization of the aging model with experimental data, whereas aging simulations in the literature usually lack a robust empirical foundation.
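A semi-empirical aging law of the kind described can be sketched as capacity fade with an Arrhenius-like temperature factor, a state-of-charge stress factor, and square-root-of-time growth. All coefficients below are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

R = 8.314   # J mol^-1 K^-1

def capacity_fade(t_days, temp_k, soc, a=0.004, e_act=40000.0, b=0.5):
    """Fractional capacity loss after t_days of storage at temp_k and soc."""
    arrhenius = np.exp(-e_act / (R * temp_k)) / np.exp(-e_act / (R * 298.15))
    soc_stress = 1.0 + b * (soc - 0.5)     # higher state of charge ages faster
    return a * arrhenius * soc_stress * np.sqrt(t_days)

t = np.array([0, 180, 365, 730, 1095])     # storage time in days
print(capacity_fade(t, temp_k=298.15, soc=0.5))   # mild: 25 C, 50% SOC
print(capacity_fade(t, temp_k=318.15, soc=0.9))   # harsh: 45 C, 90% SOC
```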
Modeling listeners' emotional response to music.
Eerola, Tuomas
2012-10-01
An overview of the computational prediction of emotional responses to music is presented. The communication of emotion by music has received a great deal of attention in recent years, and a large number of empirical studies have described the role of individual features (tempo, mode, articulation, timbre) in predicting the emotions suggested or invoked by music. However, unlike the present work, relatively few studies have attempted to model continua of expressed emotions using a variety of musical features from audio-based representations in a correlational design. The construction of the computational model is divided into four separate phases, each with a different focus for evaluation: the theoretical selection of relevant features, empirical assessment of feature validity, actual feature selection, and overall evaluation of the model. Existing research on music and emotions and on the extraction of musical features is reviewed in terms of these criteria. Examples drawn from recent studies of emotions within the context of film soundtracks are used to demonstrate each phase in the construction of the model. These models explain the dominant part of listeners' self-reports of the emotions expressed by music, and they show potential to generalize across genres within Western music. Possible applications of computational models of emotions are discussed. Copyright © 2012 Cognitive Science Society, Inc.
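The correlational modeling approach outlined above can be sketched as a regularized linear regression from extracted audio features to continuous emotion ratings, with cross-validation standing in for the model-evaluation phase. The feature matrix and ratings below are synthetic placeholders; real studies extract features such as tempo, mode, and brightness from audio:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 110                                    # e.g., film-soundtrack excerpts
features = rng.normal(size=(n, 6))         # stand-ins for tempo, mode, etc.
valence = 0.8 * features[:, 0] + 0.5 * features[:, 1] + rng.normal(0, 0.5, n)

model = LassoCV(cv=5).fit(features, valence)        # feature selection + fit
r2 = cross_val_score(LassoCV(cv=5), features, valence, cv=5).mean()
print(model.coef_, round(float(r2), 2))    # sparse weights and predictive R^2
```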